Documenting my learning across various machine learning topics. I collect the materials I use, as they were at the time of my learning, to make the process easier to reproduce. While this leads to some code redundancy, it makes it more likely you can duplicate my path. I also note interesting trends, topics, hardware, people, and projects.
To learn a topic, work through the repositories in the same order I did and extract the same lessons. Within each model type, everything is listed in the order of experimentation.
- CNN
- GNN
- LLM
- RAG
- Science (including Agentic RAG)
- Agents
- RL
- OpenAI
- Anthropic
- DeepSeek
- Groq
- inference.net
- OpenRouter
- Ollama (local inference provider)
- Nvidia Jetson
- Groq LPU (not publicly sold)
- Truffle (Jetson-based, I believe)
- Cerebras CS-3 (not publicly sold)
- Etched Sohu Transformer ASIC
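As a sketch of what "local inference provider" means for Ollama above: Ollama serves a REST API on `localhost:11434`, and a prompt can be posted to `/api/generate`. This is a minimal stdlib-only sketch, not code from this repo; the model name `llama3` is just an example of a model you might have pulled.

```python
import json
import urllib.request

# Ollama's default local endpoint (see the Ollama API docs).
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a non-streaming generate request for a local Ollama server."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    return urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )

if __name__ == "__main__":
    # Requires `ollama serve` running and the model pulled, e.g. `ollama pull llama3`.
    req = build_request("llama3", "Why is the sky blue?")
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read())["response"])
```

Because inference runs entirely on your own machine, the same request shape works offline and with any model Ollama has pulled.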