84 results for “topic:mixtral”
Private chat with local GPT with documents, images, video, etc. 100% private, Apache 2.0. Supports oLLaMa, Mixtral, llama.cpp, and more. Demo: https://gpt.h2o.ai/ https://gpt-docs.h2o.ai/
🌸 Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading
Firefly: a training toolkit for large language models, supporting training of Qwen2.5, Qwen2, Yi1.5, Phi-3, Llama3, Gemma, MiniCPM, Yi, Deepseek, Orion, Xverse, Mixtral-8x7B, Zephyr, Mistral, Baichuan2, Llama2, Llama, Qwen, Baichuan, ChatGLM2, InternLM, Ziya2, Vicuna, Bloom, and other large models
AI Chatbots in terminal for free
A snappy, keyboard-centric terminal user interface for interacting with large language models. Chat with ChatGPT, Claude, Llama 3, Phi 3, Mistral, Gemma and more.
A simple, performant and scalable Jax LLM!
Mac app for Ollama
[EMNLP 2024 & AAAI 2026] A powerful toolkit for compressing large models including LLMs, VLMs, and video generative models.
Easy and efficient finetuning of LLMs (supports LLaMA, LLaMA2, LLaMA3, Qwen, Baichuan, GLM, Falcon). Efficient quantized training and deployment of large models.
Chinese Mixtral mixture-of-experts large models (Chinese Mixtral MoE LLMs)
🏗️ Fine-tune, build, and deploy open-source LLMs easily!
Design, conduct and analyze results of AI-powered surveys and experiments. Simulate social science and market research with large numbers of AI agents and LLMs.
Like grep but for natural language questions. Based on Mistral 7B or Mixtral 8x7B.
On-device LLM Inference Powered by X-Bit Quantization
The official codes for "Aurora: Activating chinese chat capability for Mixtral-8x7B sparse Mixture-of-Experts through Instruction-Tuning"
A Ruby gem for interacting with Ollama's API that allows you to run open source AI LLMs (Large Language Models) locally.
Inferflow is an efficient and highly configurable inference engine for large language models (LLMs).
Bypass restricted and censored content on AI chat prompts 😈
GPT-4-level function-calling models for real-world tool-use cases
Build LLM-powered robots in your garage with MachinaScript For Robots!
AI stack for interacting with LLMs, Stable Diffusion, Whisper, xTTS and many other AI models
Examples of RAG using Llamaindex with local LLMs - Gemma, Mixtral 8x7B, Llama 2, Mistral 7B, Orca 2, Phi-2, Neural 7B
lcpp is a Dart implementation of llama.cpp used by the mobile artificial intelligence distribution (maid)
Self-host a ChatGPT-style web interface for Ollama 🦙
PasLLM - LLM inference engine in Object Pascal (synced from my private work repository)
This application uses LLMs such as DeepSeek, GPT-5, Claude, Gemini, or Llama and Mixtral (locally) to generate text based on user input. The user input is used to retrieve relevant information from the database, and the retrieved information is then used to generate the text. This approach combines the power of LLMs with access to source documents.
Deploy a RESTful API Server to interact with Ollama and Stable Diffusion
A powerful library for interacting with the Herc.ai API.
Easy "1-line" calling of all LLMs from OpenAI, MS Azure, AWS Bedrock, GCP Vertex, and Ollama
An unofficial C#/.NET SDK for accessing the Mistral AI API