8 results for “topic:deep-learning-theory”
A curated list of papers offering interesting empirical studies and insights on deep learning. Continually updated...
Codebase for "A Theory of the Inductive Bias and Generalization of Kernel Regression and Wide Neural Networks"
Code for "Information-Theoretic Local Minima Characterization and Regularization"
PyTorch implementation for "Unveiling Induction Heads: Provable Training Dynamics and Feature Learning in Transformers", NeurIPS 2024
Code to reproduce the paper "Deconstructing the Goldilocks Zone of Neural Network Initialization"
From-scratch NumPy implementations of core deep learning architectures (DNN, CNN, RNN). A journey into the first principles of AI.
Technical portfolio for Stanford CS229 (Summer 2020). First-principles approach to ML: Math-heavy derivations, NumPy-from-scratch implementations, and research project.
🧠 Build and understand core deep learning architectures from scratch using NumPy, exploring the fundamentals behind AI through hands-on coding.