Arnav Garg
arnavgarg1
Tech Lead Manager, AI @ Rubrik | Previously: ML Lead at Predibase, ML Scientist at Atlassian, Core Contributor to Ludwig.ai
Repos: 17
Stars: 8
Forks: 1
Top Language: Python
Top Repositories
Distributed training framework for TensorFlow, Keras, PyTorch, and Apache MXNet.
Train transformer language models with reinforcement learning.
Finetune Llama 3.1, Mistral, Phi & Gemma LLMs 2-5x faster with 80% less memory
Utils for Unsloth
Repositories
17
No description provided.
Distributed training framework for TensorFlow, Keras, PyTorch, and Apache MXNet.
Train transformer language models with reinforcement learning.
No description provided.
Finetune Llama 3.1, Mistral, Phi & Gemma LLMs 2-5x faster with 80% less memory
Utils for Unsloth
🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.
Fully open reproduction of DeepSeek-R1
No description provided.
OpenRFT: Adapting Reasoning Foundation Model for Domain-specific Tasks with Reinforcement Fine-Tuning
No description provided.
No description provided.
No description provided.
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
Easy and Efficient Quantization for Transformers
Official repository of NEFTune: Noisy Embeddings Improves Instruction Finetuning
python version of raincloud