30 results for “topic:phi-2”
Sample showing how to build intelligent apps with Microsoft's Copilot stack for AI-infused product experiences.
Phi2-Chinese-0.2B — train your own small Chinese Phi2 chat model from scratch; supports LangChain integration for loading a local knowledge base for retrieval-augmented generation (RAG).
Inferflow is an efficient and highly configurable inference engine for large language models (LLMs).
Examples of RAG using Llamaindex with local LLMs - Gemma, Mixtral 8x7B, Llama 2, Mistral 7B, Orca 2, Phi-2, Neural 7B
LLM inference in Fortran
Collection of basic prompt templates for various chat LLMs.
Examples of RAG using LangChain with local LLMs - Mixtral 8x7B, Llama 2, Mistral 7B, Orca 2, Phi-2, Neural 7B
Test server code for the Phi-2 model; supports the OpenAI API spec.
Microsoft Phi-2 Streamlit app for text generation, deployed on Hugging Face Spaces and built on the Phi-2 small language model (SLM).
Build a Conversational AI System that can answer questions by retrieving the answers from a document.
Examples of RAG using Llamaindex with local LLMs in Linux - Gemma, Mixtral 8x7B, Llama 2, Mistral 7B, Orca 2, Phi-2, Neural 7B
Co:Here Inference configurations
Fine-tune Phi-2 for persona-grounded chat.
No description provided.
This project demonstrates fine-tuning the Phi-2 language model for mental health-focused applications such as sentiment analysis, therapy assistance, and early detection of mental health concerns. Through a carefully curated dataset and a detailed notebook, this repository bridges the gap between AI and mental wellness.
Code submission for the "Specializing Large Language Models for Telecom Networks by ITU AI/ML in 5G" challenge
No description provided.
This repository contains a Python script for a Telegram bot that integrates with OpenAI's API or other compatible REST APIs (such as Jan, https://jan.ai/). It's designed to provide an interactive AI experience through Telegram using simple chat functionality.
A comprehensive toolkit for fine-tuning Microsoft's Phi-2 and Phi-3.5 language models, featuring memory-efficient training, interactive chat, and model comparison capabilities.
This project is an AI-powered algebra tutor using the Phi-3 Mini model. It provides personalized learning through interactive chat, adapting to the student's level and offering detailed step-by-step solutions. Built with Streamlit for an engaging educational experience.
This repository contains the source code used for fine-tuning the Phi-2 LLM with several techniques, such as DPO.
A novel 3D attention mechanism inspired by HDD platter geometry, featuring circular context wrapping and depth stacking to mitigate boundary effects in long-context language modeling.
Fine-tune Microsoft Phi-2 with your own data.
Edge LLM for In-Vehicle Deployment - Optimized small language model for automotive conversational AI
Fine-tuning Microsoft’s Phi-2 Machine Learning Model with DPO
Flask API for generating text with the Phi-2 model from Hugging Face Transformers.
Colab notebook for finetuning Microsoft's Phi-2-3B LLM for solving mathematical word problems using QLoRA
A comprehensive evaluation of LLM trustworthiness (toxicity, bias, and privacy) using the DecodingTrust framework on Microsoft Phi-2.
FPGA-based acceleration of TinyLLaVA-Phi-2-SigLIP-3.1B inference on AMD Alveo U280 using Vitis HLS.
A production-ready intelligent chatbot combining semantic retrieval and fine-tuned LLMs for optimal response quality and latency
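Two of the entries above (the Phi-2 test server and the Telegram bot) expose or consume the OpenAI chat-completions spec. A minimal sketch of building such a request for a local Phi-2 server — the base URL `http://localhost:1337/v1` (Jan's default port) and the model id `phi-2` are assumptions that vary per server:

```python
import json
from urllib import request


def build_chat_request(prompt: str,
                       model: str = "phi-2",                     # assumed model id
                       base_url: str = "http://localhost:1337/v1"):  # assumed server URL
    """Build an OpenAI-spec chat-completions request for a local Phi-2 server."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }
    req = request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    return req, payload


# Sending it requires a running server, e.g.:
#   resp = request.urlopen(req)
#   print(json.loads(resp.read())["choices"][0]["message"]["content"])
```

Because the wire format is the standard OpenAI one, the same request works against any of the OpenAI-compatible servers listed above by changing only `base_url` and `model`.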