# SmartyPants

Build and manage AI solutions quickly, without a steep AI learning curve. An open source initiative by Shaharia Lab OÜ.
## What is SmartyPants?
SmartyPants AI is an intelligent, AI-driven platform that seamlessly integrates multiple data sources, embedding models, and LLM providers. It offers powerful semantic search capabilities and an intuitive chat interface, allowing users to easily configure and interact with various AI models. Whether you're building a smart chatbot or need advanced data processing and querying, SmartyPants provides a flexible, user-friendly solution for your AI-powered applications.
## Why SmartyPants?

We named this project SmartyPants as a lighthearted nod to its intelligence: a system able to handle complex queries with ease.
Key Features:
- Multi-source data integration and embedding generation
- Configurable LLM and embedding models
- Semantic search functionality
- Built-in chat interface and chatbot creation tools
- Easy-to-use API for seamless integration
Empower your projects with SmartyPants, where AI meets simplicity!
## Installation

### Prerequisites

- PostgreSQL 13 or higher with the pgvector extension enabled (required for vector search)
## Use as a Docker Image
To use this application with Docker, follow these steps:
### Pulling the Docker Images

For the backend:

```shell
docker pull ghcr.io/shaharia-lab/smarty-pants-backend:$VERSION
```

For the frontend:

```shell
docker pull ghcr.io/shaharia-lab/smarty-pants-frontend:$VERSION
```

Replace `$VERSION` with the desired version tag. All available versions can be found in the repository's GitHub Packages (container registry) listing.
### Running the Backend

To run the backend, set the following environment variables:

- `DB_HOST`: Database host
- `DB_PORT`: Database port
- `DB_USER`: Database user
- `DB_PASS`: Database password
- `DB_NAME`: Database name
Run the backend with this command:

```shell
docker run \
  --name smarty-pants-backend \
  -p 8080:8080 \
  -e DB_HOST=<value> \
  -e DB_PORT=<value> \
  -e DB_USER=<value> \
  -e DB_PASS=<value> \
  -e DB_NAME=<value> \
  -e DB_MIGRATION_PATH=<value> \
  ghcr.io/shaharia-lab/smarty-pants-backend:$VERSION start
```
Replace <value> with the appropriate values for your environment.
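For illustration, the command can be assembled from shell variables so the same values are reusable across environments. Everything below is a sketch with made-up placeholder values (`DB_MIGRATION_PATH` is omitted because its value is deployment-specific); the command is printed rather than executed so it can be reviewed first.

```shell
# Placeholder values -- replace with your own environment's settings.
DB_HOST=host.docker.internal
DB_PORT=5432
DB_USER=app
DB_PASS=secret
DB_NAME=app
VERSION=latest

# Build the docker run invocation from the variables above.
CMD="docker run --name smarty-pants-backend -p 8080:8080 \
-e DB_HOST=$DB_HOST -e DB_PORT=$DB_PORT -e DB_USER=$DB_USER \
-e DB_PASS=$DB_PASS -e DB_NAME=$DB_NAME \
ghcr.io/shaharia-lab/smarty-pants-backend:$VERSION start"

# Print the command instead of running it, so it can be inspected first.
echo "$CMD"
```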
### Running the Frontend

To run the frontend, use this command:

```shell
docker run \
  --name smarty-pants-frontend \
  -e NEXT_PUBLIC_API_BASE_URL="http://localhost:8080" \
  -p 3000:3000 \
  ghcr.io/shaharia-lab/smarty-pants-frontend:$VERSION
```
Make sure to configure any necessary network settings to allow the frontend to communicate with the backend.
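One way to wire the two containers together is Docker Compose, which puts all services on a shared default network. The fragment below is a hypothetical sketch, not configuration shipped with the project: service names, the `latest` tags, and all database values are assumptions. Note that `NEXT_PUBLIC_API_BASE_URL` must be reachable from the user's browser, not just from inside the Compose network.

```yaml
# Hypothetical docker-compose.yml sketch: both services share the default
# Compose network. All values are placeholders.
services:
  backend:
    image: ghcr.io/shaharia-lab/smarty-pants-backend:latest
    command: start
    ports:
      - "8080:8080"
    environment:
      DB_HOST: db.example.internal   # placeholder database host
      DB_PORT: "5432"
      DB_USER: app
      DB_PASS: pass
      DB_NAME: app
  frontend:
    image: ghcr.io/shaharia-lab/smarty-pants-frontend:latest
    ports:
      - "3000:3000"
    environment:
      # Must be a URL the browser can reach (hence localhost, not "backend").
      NEXT_PUBLIC_API_BASE_URL: "http://localhost:8080"
```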
## Environment Variables

| Variable | Required | Default value | Description |
|---|---|---|---|
| `APP_NAME` | No | `smarty-pants-ai` | Name of the application |
| `DB_HOST` | Yes | `"localhost"` | Database host address |
| `DB_PORT` | Yes | `5432` | Database port number |
| `DB_USER` | Yes | `"app"` | Database user name |
| `DB_PASS` | Yes | `"pass"` | Database password |
| `DB_NAME` | Yes | `"app"` | Database name |
| `API_PORT` | No | `8080` | Port number for the API server |
| `API_SERVER_READ_TIMEOUT_IN_SECS` | No | `10` | API server read timeout in seconds |
| `API_SERVER_WRITE_TIMEOUT_IN_SECS` | No | `30` | API server write timeout in seconds |
| `API_SERVER_IDLE_TIMEOUT_IN_SECS` | No | `120` | API server idle timeout in seconds |
| `TRACING_ENABLED` | No | `false` | Enable or disable tracing |
| `OTLP_TRACER_HOST` | No | `"localhost"` | OpenTelemetry Protocol (OTLP) tracer host |
| `OTLP_TRACER_PORT` | No | `4317` | OTLP tracer port |
| `OTEL_METRICS_ENABLED` | No | `false` | Enable or disable OpenTelemetry metrics |
| `OTEL_METRICS_EXPOSED_PORT` | No | `2223` | Port to expose OpenTelemetry metrics |
| `COLLECTOR_WORKER_COUNT` | No | `1` | Number of collector workers |
| `GRACEFUL_SHUTDOWN_TIMEOUT_IN_SECS` | No | `30` | Graceful shutdown timeout in seconds |
| `PROCESSOR_WORKER_COUNT` | No | `1` | Number of processor workers |
| `PROCESSOR_BATCH_SIZE` | No | `2` | Batch size for the processor |
| `PROCESSOR_INTERVAL_IN_SECS` | No | `10` | Processor interval in seconds |
| `PROCESSOR_RETRY_ATTEMPTS` | No | `3` | Number of processor retry attempts |
| `PROCESSOR_RETRY_DELAY_IN_SECS` | No | `5` | Delay between processor retry attempts in seconds |
| `PROCESSOR_SHUTDOWN_TIMEOUT_IN_SECS` | No | `10` | Processor shutdown timeout in seconds |
| `PROCESSOR_REFRESH_INTERVAL_IN_SECS` | No | `60` | Processor refresh interval in seconds |
| `ENABLE_AUTH` | No | `false` | Enable the authentication system for the application |
| `GOOGLE_OAUTH_CLIENT_ID` | No | `""` | Google OAuth client ID |
| `GOOGLE_OAUTH_CLIENT_SECRET` | No | `""` | Google OAuth client secret |
| `GOOGLE_OAUTH_REDIRECT_URL` | No | `""` | Google OAuth redirect URL |
| `SUPER_ADMIN_EMAIL` | Yes | `""` | Email address of the super admin |
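As a quick reference, a minimal setup needs only the variables marked "Yes" in the table above; everything else falls back to its default. The values in this sketch are placeholders.

```shell
# Only the DB_* variables and SUPER_ADMIN_EMAIL are required; all other
# settings fall back to the defaults listed above. Placeholder values.
export DB_HOST=localhost
export DB_PORT=5432
export DB_USER=app
export DB_PASS=pass
export DB_NAME=app
export SUPER_ADMIN_EMAIL=admin@example.com

echo "DB: $DB_USER@$DB_HOST:$DB_PORT/$DB_NAME, admin: $SUPER_ADMIN_EMAIL"
```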
## Development
### Prerequisites

#### Backend

- Go 1.22 or higher
- PostgreSQL 13 or higher with the pgvector extension enabled

#### Frontend

- Node.js 20 or higher
### Running Locally

1. Clone the repository and navigate to the project directory:

   ```shell
   git clone git@github.com:shaharia-lab/smarty-pants.git
   cd smarty-pants
   ```

2. Start a compatible PostgreSQL database with the `pgvector` extension enabled. A `docker-compose.yml` file is included in the root directory to help you set up the database:

   ```shell
   docker-compose -f docker-compose.yml up -d
   ```

3. Create a `.env` file in the root directory and add the required environment variables, then load them:

   ```shell
   export $(grep -v '^#' .env | xargs)
   ```

4. Run the backend:

   ```shell
   go run . start
   ```
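The `export $(grep -v '^#' .env | xargs)` one-liner can be sanity-checked in isolation. A minimal sketch with throwaway values (note that this simple form does not handle values containing spaces or quotes):

```shell
# Create a throwaway env file: comment lines are dropped by grep, the
# remaining KEY=VALUE words are exported as environment variables.
cat > /tmp/demo.env <<'EOF'
# database settings (placeholders)
DB_HOST=localhost
DB_PORT=5432
EOF

export $(grep -v '^#' /tmp/demo.env | xargs)
echo "$DB_HOST:$DB_PORT"   # prints localhost:5432
```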
To run the frontend, follow these steps:

1. Install the dependencies:

   ```shell
   cd frontend
   npm install
   ```

2. Start the frontend:

   ```shell
   npm run dev
   ```
## Observability

To enable tracing, set the `TRACING_ENABLED` environment variable to `true`. To collect metrics as well, set `OTEL_METRICS_ENABLED` to `true`; metrics are exposed on the port configured by `OTEL_METRICS_EXPOSED_PORT`.

You can use any OpenTelemetry Protocol (OTLP) compatible tool to visualize the traces. We recommend Jaeger:
```shell
docker run -d --name jaeger \
  -e COLLECTOR_OTLP_ENABLED=true \
  -p 16686:16686 \
  -p 4317:4317 \
  -p 4318:4318 \
  jaegertracing/all-in-one:latest
```

Once the container is running, the Jaeger UI is available at http://localhost:16686.

## Contributing
Contributions are welcome! Please follow the guidelines outlined in the CONTRIBUTING file.
## License
This project is licensed under the MIT License - see the LICENSE file for details.
