syheliel
Master's student @PKU; undergraduate @ECNU
Repos: 28
Stars: 1
Forks: 1
Top Language: Rust
Top Repositories
Accelerate local LLM inference and finetuning (LLaMA, Mistral, ChatGLM, Qwen, DeepSeek, Mixtral, Gemma, Phi, MiniCPM, Qwen-VL, MiniCPM-V, etc.) on Intel XPU (e.g., local PC with iGPU and NPU, discrete GPU such as Arc, Flex and Max); seamlessly integrate with llama.cpp, Ollama, HuggingFace, LangChain, LlamaIndex, vLLM, DeepSpeed, Axolotl, etc.
WhiteFox: White-Box Compiler Fuzzing Empowered by Large Language Models (OOPSLA 2024)
CKGFuzzer: LLM-Based Fuzz Driver Generation Enhanced By Code Knowledge Graph
Automatic DNN generation for fuzzing and more
Repositories
28
No description provided.
Accelerate local LLM inference and finetuning (LLaMA, Mistral, ChatGLM, Qwen, DeepSeek, Mixtral, Gemma, Phi, MiniCPM, Qwen-VL, MiniCPM-V, etc.) on Intel XPU (e.g., local PC with iGPU and NPU, discrete GPU such as Arc, Flex and Max); seamlessly integrate with llama.cpp, Ollama, HuggingFace, LangChain, LlamaIndex, vLLM, DeepSpeed, Axolotl, etc.
WhiteFox: White-Box Compiler Fuzzing Empowered by Large Language Models (OOPSLA 2024)
CKGFuzzer: LLM-Based Fuzz Driver Generation Enhanced By Code Knowledge Graph
Automatic DNN generation for fuzzing and more
JPF is an extensible software analysis framework for Java bytecode. jpf-core is the basis for all JPF projects; you always need to install it. It contains the basic VM and model checking infrastructure, and can be used to check for concurrency defects like deadlocks, and unhandled exceptions like NullPointerExceptions and AssertionErrors.
MLIR grammar for tree-sitter
NumPy & SciPy for GPU
No description provided.
No description provided.
Repository that gets synchronized with the wiki on jpf-core
No description provided.
No description provided.
No description provided.
No description provided.
No description provided.
A personal blog built with Spring Boot, integrating a public-facing blog front end and an admin back end.
No description provided.
A blog project built with Vue and Spring Boot.
No description provided.
An enumerator for MLIR, relying on the information given by IRDL.
A pure C++ cross-platform LLM acceleration library with Python bindings; ChatGLM-6B-class models can reach 10,000+ tokens/s on a single GPU. Supports GLM, LLaMA, and MOSS base models, and runs smoothly on mobile devices.
No description provided.
No description provided.
No description provided.
No description provided.
Advanced Fuzzing Library - Slot your Fuzzer together in Rust! Scales across cores and machines. For Windows, Android, MacOS, Linux, no_std, ...
VS Code extension for TinyRAM