Repos: 164 · Stars: 0 · Forks: 0 · Top Language: Python
Repositories (164)
Developer Asset Hub for NVIDIA Nemotron — A one-stop resource for training recipes, usage cookbooks, datasets, and full end-to-end reference examples to build with Nemotron models
Training library for Megatron-based models with bidirectional Hugging Face conversion capability
Open-source library for scalable, reproducible evaluation of AI models and benchmarks
No description provided.
Use PEFT or Full-parameter to CPT/SFT/DPO/GRPO 500+ LLMs (Qwen3, Qwen3-MoE, Llama4, InternLM3, GLM4, Mistral, Yi1.5, DeepSeek-R1, ...) and 200+ MLLMs (Qwen2.5-VL, Qwen2.5-Omni, Qwen2-Audio, Ovis2, InternVL3, Llava, MiniCPM-V-2.6, GLM4v, Xcomposer2.5, DeepSeek-VL2, Phi4, GOT-OCR2, ...).
The server portion of a distributed ledger purpose-built for decentralized identity.
Plenum Byzantine Fault Tolerant Protocol
A clean, modular SDK for building AI agents with OpenHands V1.
🙌 OpenHands: Code Less, Make More
🪢 Open source LLM engineering platform: LLM Observability, metrics, evals, prompt management, playground, datasets. Integrates with OpenTelemetry, Langchain, OpenAI SDK, LiteLLM, and more. 🍊YC W23
The absolute trainer to light up AI agents.
Hyperledger Aries is infrastructure for blockchain-rooted, peer-to-peer interactions
No description provided.
A scalable generative AI framework built for researchers and developers working on Large Language Models, Multimodal, and Speech AI (Automatic Speech Recognition and Text-to-Speech)
The Solidity Contract-Oriented Programming Language
Official repository for the paper "LiveCodeBench: Holistic and Contamination Free Evaluation of Large Language Models for Code"
Lighteval is your all-in-one toolkit for evaluating LLMs across multiple backends
Fully open reproduction of DeepSeek-R1
Gorilla: Training and Evaluating LLMs for Function Calls (Tool Calls)
Dolomite Engine is a library for pretraining/finetuning LLMs
Aries Framework .NET for building multiplatform SSI services
Everything needed to build applications that interact with an Indy distributed identity ledger.
🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (including fp8), and easy-to-configure FSDP and DeepSpeed support
Unified Efficient Fine-Tuning of 100+ LLMs & VLMs (ACL 2024)
Development repository for the Triton language and compiler
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
A high-throughput and memory-efficient inference and serving engine for LLMs
🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
Proximal Policy Optimization
Recipes for training reward models for RLHF