This repository mostly contains code from the class, which was based on Stanford's CS231n course. That course has three parts, of which we mainly did the first two.
All of the code from the first two parts was written without any ML libraries such as PyTorch, to build a real understanding of the linear algebra going on inside the models.
In part 1, we were given no baseline code and had to write all of the helper functions we used from scratch. We learnt about k-nearest neighbors (KNN), regression models, linear classifiers, and basic neural networks.
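For flavor, here is a minimal sketch of the kind of from-scratch code this meant: a tiny NumPy KNN classifier. The names and details are illustrative, not the actual assignment code.

```python
import numpy as np

class KNNClassifier:
    """Minimal k-nearest-neighbors classifier, NumPy only."""

    def fit(self, X_train, y_train):
        # KNN has no real training step; it just memorizes the data.
        self.X_train = X_train
        self.y_train = y_train

    def predict(self, X, k=5):
        # Pairwise squared Euclidean distances, fully vectorized via
        # ||a - b||^2 = ||a||^2 - 2*a.b + ||b||^2.
        dists = (
            np.sum(X ** 2, axis=1, keepdims=True)
            - 2 * X @ self.X_train.T
            + np.sum(self.X_train ** 2, axis=1)
        )
        # Take the labels of the k closest training points per query row...
        nearest = np.argsort(dists, axis=1)[:, :k]
        votes = self.y_train[nearest]
        # ...and predict by majority vote.
        return np.array([np.bincount(row).argmax() for row in votes])
```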
For the second part, we followed the Stanford assignments more closely: we were given many helper functions and classes and just had to fill in the important code for the models. We learnt about batch normalization, dropout, and pooling layers, along with CNNs.
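As a rough illustration, this is what the training-time forward pass of batch normalization looks like in the NumPy style of those assignments (at test time, running averages of the mean and variance are used instead; the backward pass is omitted here):

```python
import numpy as np

def batchnorm_forward(x, gamma, beta, eps=1e-5):
    # Normalize each feature to zero mean and unit variance over the batch.
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)
    # Learnable scale (gamma) and shift (beta) let the network recover
    # whatever mean and variance it actually wants.
    out = gamma * x_hat + beta
    # In the assignments, a cache like this feeds the backward pass.
    cache = (x_hat, gamma, var, eps)
    return out, cache
```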
Then, for part 3, we focused on text models, using architectures like RNNs and LSTMs and ultimately working up to Transformers. The earlier assignments in this part came with no helper code, while later ones, like the Transformer assignment, included a lot of it.
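To give a sense of why the Transformer assignment needed more scaffolding, the entire recurrence of a vanilla RNN fits in one tanh step. The sketch below is illustrative, with assumed shapes noted in the comments, not the course's exact code.

```python
import numpy as np

def rnn_step_forward(x, h_prev, Wx, Wh, b):
    # One timestep: mix the current input with the previous hidden state,
    # then squash through tanh.
    # Assumed shapes: x (N, D), h_prev (N, H), Wx (D, H), Wh (H, H), b (H,)
    h_next = np.tanh(x @ Wx + h_prev @ Wh + b)
    return h_next

# Unrolling over a sequence of length T is just a loop over timesteps:
# for t in range(T): h = rnn_step_forward(x[:, t], h, Wx, Wh, b)
```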
For the final project, Thomas and I looked into 2048: specifically, using reinforcement learning to create a bot that plays the game. Our project was quite rushed; I wish we had more time to explore things further before the symposium. Nevertheless, you can learn more about it in the TA_2048 folder.