Final Project Logbook – April 18

Feb 5 – Researched project ideas – 2 hours

Feb 7 – Wrote project proposal – 3 hours

Feb 17 – Further research of learning algorithms – 1 hour

Feb 19 – Wrote code for the following – 3 hours:

  • Reading the dataset
  • Partitioning the data into training and testing sets
  • Using K-fold cross validation
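
The steps above could be sketched with scikit-learn roughly as follows; the real dataset's file format and columns are not recorded in the log, so a synthetic stand-in is used here:

```python
# Synthetic stand-in for the real dataset (its file name and columns
# are not given in the log).
import numpy as np
from sklearn.model_selection import KFold, train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))      # placeholder feature matrix
y = rng.integers(0, 2, size=100)   # placeholder binary labels

# Partition the data into training and testing sets (80/20 split).
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# K-fold cross validation over the training portion.
kf = KFold(n_splits=5, shuffle=True, random_state=0)
for train_idx, val_idx in kf.split(X_train):
    X_tr, X_val = X_train[train_idx], X_train[val_idx]
    # fit and score a model on (X_tr, X_val) here
```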

Feb 21 – Wrote biweekly update – 2 hours

Feb 28 – Researched Random Forests and their implementation through scikit-learn – 4 hours

March 3 – Implemented code for running Random Forests – 3 hours

March 7 – Wrote biweekly update – 2 hours

March 9 – Reading/researching SVMs in [1] – 3 hours

March 14 – Researched possible libraries to use for my base models – 2 hours

March 20 – Implemented linear SVMs through scikit-learn – 2 hours

March 21 – Wrote biweekly update and responded to feedback on my project – 2 hours

March 27 – Implemented analysis code – 4 hours

April 3 – Researched and implemented data scaling and pre-processing – 1 hour

April 4 – Implemented Gaussian SVM and analysis [1][2] – 2 hours

April 7 – Wrote and recorded project demo – 2 hours

April 7 – Wrote biweekly update – 1 hour

April 13 – Researched, implemented, and tested the neural network model – 4 hours

April 15 – Started writing my final report – 4 hours

April 18 – Finalized final report – 8 hours

April 18 – Responded to peer questions, updated the website, and wrote final logbook – 20 minutes

Bi-Weekly Update #4

During the last two weeks, I first implemented the analysis functions for Random Forests and linear SVMs, but I hit a roadblock because my functions weren't properly reading the data. I eventually realized that I needed to add data scaling to my features and encode the string values in the dataset into binary features.
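
A minimal sketch of this pre-processing, assuming scikit-learn's StandardScaler and OneHotEncoder were the tools used (the actual column names and encoder choices are not stated in the update):

```python
# Hypothetical mixed-type rows: a numeric feature and a string feature.
import numpy as np
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder, StandardScaler

X = np.array([
    [1.0, "tcp"],
    [5.0, "udp"],
    [3.0, "tcp"],
], dtype=object)

pre = ColumnTransformer([
    ("num", StandardScaler(), [0]),                        # scale the numeric column
    ("cat", OneHotEncoder(handle_unknown="ignore"), [1]),  # strings -> binary features
])
X_t = pre.fit_transform(X)
X_t = np.asarray(X_t.todense()) if hasattr(X_t, "todense") else X_t
# Result: one scaled column plus one binary column per category ({tcp, udp}).
```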

Although I was now able to run my analysis, the results I saw from linear SVMs were lackluster, and although Random Forests performed marginally better, I wanted to see if I could get better results with a different model. I then researched and implemented Gaussian SVMs, which ended up performing similarly to Random Forests.
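
A Gaussian-kernel SVM of this kind can be sketched with scikit-learn's SVC; the data and hyperparameters below are illustrative stand-ins, not the project's dataset or tuned values:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Toy data standing in for the project's dataset.
X, y = make_classification(n_samples=300, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Scaling matters for RBF kernels, matching the pre-processing step above.
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
model.fit(X_train, y_train)
acc = model.score(X_test, y_test)
```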

During the following two weeks I plan to research and implement neural networks, and experiment again with the other models’ hyperparameters to see if they can be further optimized. Then I plan to write my final report and summarize all of my findings.

Current Results

Figure 1 (left): Tuned Linear SVM Confusion Matrix

Figure 2 (right): Tuned Random Forest Confusion Matrix

The Linear SVM had an accuracy of 0.6 on the test set, which strongly suggests that the data is not linearly separable.

The tuned Random Forest had an accuracy of 0.81, which, although much better, is still not near the desired precision, especially since it let 8762 attacks go undetected (19% of all attacks in the test set; see Figure 2).

Figure 3: Tuned Gaussian SVM Confusion Matrix

The tuned Gaussian SVM had the best performance of the three models, though it was only marginally better than the tuned Random Forest, with an accuracy of 0.82. The Gaussian SVM did, however, catch many more attacks, letting only 7229 through this time (15% of all attacks; see Figure 3).
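
The missed-attack counts above can be read directly off a confusion matrix: with attacks as the positive class, undetected attacks are the false negatives. A toy sketch (the labels below are made up, not the project's data):

```python
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.array([1, 1, 1, 0, 0, 1])   # 1 = attack, 0 = benign (toy labels)
y_pred = np.array([1, 0, 1, 0, 1, 1])

cm = confusion_matrix(y_true, y_pred)   # rows: true class, cols: predicted class
missed_attacks = cm[1, 0]               # true attack, predicted benign
missed_rate = missed_attacks / cm[1].sum()   # fraction of all attacks missed
```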

Overall, the models I have tested thus far have been somewhat effective, though I believe a much higher accuracy is still achievable.

References

[1] Machine Learning, Tom Mitchell, url: https://www.cs.cmu.edu/~tom/files/MachineLearningTomMitchell.pdf, accessed April 4, 2025

[2] Support Vector Machines, scikit-learn documentation, url: https://scikit-learn.org/stable/modules/svm.html, accessed April 4, 2025

Bi-Weekly Update #3

During these last two weeks, I focused on implementing SVMs. I again used Tom Mitchell's "Machine Learning" [1] as the basis for my research and understanding of SVMs. After some research I decided a linear kernel would be appropriate for my purposes: the dataset is quite large, so training multiple Gaussian-kernel SVMs, as is necessary for analyzing performance, would be extremely time consuming. I decided to use scikit-learn's SGDClassifier [2] as the base for my model, as it was the fastest option I found. I have now implemented the model, though I have yet to finish the analysis portion of my code.
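
A sketch of this setup, assuming hinge loss (which makes SGDClassifier a linear SVM) and illustrative hyperparameters rather than the project's actual configuration:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Toy data standing in for the project's dataset.
X, y = make_classification(n_samples=500, random_state=0)

# loss="hinge" gives a linear SVM trained by stochastic gradient descent,
# which is fast on large datasets; SGD also benefits from scaled features.
clf = make_pipeline(StandardScaler(),
                    SGDClassifier(loss="hinge", max_iter=1000, random_state=0))
clf.fit(X, y)
acc = clf.score(X, y)   # training accuracy, for illustration only
```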

During the next two weeks I will focus on completing the analysis portion of my code, then preparing the demo video, which will likely feature an explanation of my code and my reasoning behind my hyperparameter configurations.

References

[1] Machine Learning, Tom Mitchell, url: https://www.cs.cmu.edu/~tom/files/MachineLearningTomMitchell.pdf, accessed March 21, 2025

[2] SGDClassifier, scikit-learn documentation, url: https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.SGDClassifier.html, accessed March 21, 2025

Bi-Weekly Update #2

During these last two weeks I focused on researching my first chosen learning algorithm: Random Forests. First I used a textbook [1] to gain a solid background on Random Forests and how they work. Once I had a solid understanding, I researched machine learning libraries that I could use to implement Random Forests in my code. I found scikit-learn's RandomForestClassifier [2] and spent some time learning the function of each of its parameters. Earlier this week I wrote the function in my code to apply Random Forests to the given dataset.
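
A minimal sketch of this setup with scikit-learn's RandomForestClassifier; the data and parameter values here are illustrative, not the project's tuned configuration:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Toy data standing in for the project's dataset.
X, y = make_classification(n_samples=300, random_state=0)

forest = RandomForestClassifier(n_estimators=100, random_state=0)
forest.fit(X, y)
acc = forest.score(X, y)   # training accuracy, for illustration only
```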

Before the next progress report I plan to research and implement SVMs in my code, and add analysis for the performance of the two methods.

References

[1] Machine Learning, Tom Mitchell, url: https://www.cs.cmu.edu/~tom/files/MachineLearningTomMitchell.pdf, accessed Feb 28, 2025

[2] RandomForestClassifier, scikit-learn documentation, url: https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html, accessed Feb 28, 2025