Jeremy Dyer
jdye64
Stream it, query it, and ship it! Apache NiFi PMC and NVIDIA GPU wizard.
Languages
Repos
110
Stars
139
Forks
127
Top Language
Python
Top Repositories
Additional convenience processors not found in core Apache NiFi
Apache NiFi Docker Environment
Combination of Dockerized Hortonworks projects and other Hadoop ecosystem components
Device Registry for all components of Apache NiFi
Network Video Recorder for IP based security cameras
Simple Rust bare-metal kernel for the BCM2837 (Raspberry Pi)
Repositories
110
NVIDIA Ingest is a set of microservices for parsing hundreds of thousands of complex, messy unstructured PDFs and other enterprise documents into metadata and text to embed into retrieval systems.
Additional convenience processors not found in core Apache NiFi
Slimmed down nv-ingest
Combination of Dockerized Hortonworks projects and other Hadoop ecosystem components
Distributed PDF parsing pipeline using Ray and NVIDIA NeMo
NeMo: a toolkit for conversational AI
Network Video Recorder for IP based security cameras
Scalable toolkit for data curation
Simple Rust bare-metal kernel for the BCM2837 (Raspberry Pi)
:house_with_garden: Open source home automation that puts local control and privacy first.
Mirror of Apache NiFi MiNiFi C++
Scripts that assist with managing land when you are not physically nearby
Extensible SQL Lexer and Parser for Rust
NIM Agent Blueprint for multimodal PDF extraction
Apache NiFi Docker Environment
PandasAI is the Python library that integrates Gen AI into pandas, making data analysis conversational
:mag: LLM orchestration framework to build customizable, production-ready LLM applications. Connect components (models, vector DBs, file converters) to pipelines or agents that can interact with your data. With advanced retrieval methods, it's best suited for building RAG, question answering, semantic search or conversational agent chatbots.
Call all LLM APIs using the OpenAI format. Use Bedrock, Azure, OpenAI, Cohere, Anthropic, Ollama, Sagemaker, HuggingFace, Replicate (100+ LLMs)
Device Registry for all components of Apache NiFi
Generative AI reference workflows optimized for accelerated infrastructure and microservice architecture.
xLights is a sequencer for lights with USB and E1.31 drivers. In this object-oriented program you can create sequences, build playlists, schedule them, test your hardware, and convert between different sequencers.
A generative AI extension for JupyterLab
Fast multi-threaded, hybrid out-of-core query engine focusing on DataFrame front-ends
NVIDIA® TensorRT™, an SDK for high-performance deep learning inference, includes a deep learning inference optimizer and runtime that delivers low latency and high throughput for inference applications.
The Triton Inference Server provides an optimized cloud and edge inferencing solution.
⚡ Building applications with LLMs through composability ⚡
LlamaIndex (formerly GPT Index) is a data framework for your LLM applications