
MetaVAI/Prompt-Tech

A lightweight playground for experimenting with prompt engineering techniques using the Gemini API. Test prompts, compare strategies, and analyze model responses with structured experiment tracking.

Prompt Engineering Playground

This project is for experimenting with prompt engineering techniques for Large Language Models (LLMs) using the Gemini API.

Structure

  • src/: Python source code
    • main.py: Main script to run experiments.
    • prompt_tech/: Core modules for the project.
      • api.py: Handles interaction with the Gemini API.
      • runner.py: Manages the execution of experiments and saving results.
  • prompts/: Prompt templates and examples, stored as text files.
  • data/: Input data for experiments (e.g., CSV files).
  • results/: Experiment output (e.g., JSON files with model responses).
  • tests/: Tests for the experiment code.
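For example, a template stored in prompts/ can be read and its placeholders filled at run time. The helper below is a hypothetical sketch of that pattern; the `load_prompt` name and the `few_shot.txt` file are illustrative, not part of the repository:

```python
from pathlib import Path

def load_prompt(name: str, prompts_dir: str = "prompts", **kwargs) -> str:
    """Read a template like prompts/<name>.txt and fill {placeholder} fields."""
    template = Path(prompts_dir, f"{name}.txt").read_text(encoding="utf-8")
    return template.format(**kwargs)

# Hypothetical usage, assuming prompts/few_shot.txt contains "Classify: {text}":
# load_prompt("few_shot", text="hello")  ->  "Classify: hello"
```

Keeping templates in plain text files (rather than inline strings) makes it easier to diff prompt changes between experiment runs.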

Setup

  1. Install dependencies:

    uv pip install -r requirements.txt
    
  2. Set up your environment variables:

    • Create a .env file in the root of the project.
    • Add your Gemini API key to the .env file:
      GEMINI_API_KEY="YOUR_API_KEY"
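If you prefer not to add a dependency such as python-dotenv, a minimal stdlib-only loader can read the key from the environment with a .env fallback. The `load_api_key` helper below is a hypothetical sketch, not code from the repository:

```python
import os

def load_api_key(env_file: str = ".env") -> str:
    """Return GEMINI_API_KEY from the environment, falling back to a .env file."""
    key = os.environ.get("GEMINI_API_KEY")
    if key:
        return key
    with open(env_file, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line.startswith("GEMINI_API_KEY="):
                # Accept both quoted and unquoted values.
                return line.split("=", 1)[1].strip().strip('"')
    raise RuntimeError("GEMINI_API_KEY not found in environment or .env")
```

Checking the process environment first lets CI systems inject the key without a .env file.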
      

Usage

  1. Add your prompts:

    • You can add new prompts and techniques to the prompts dictionary in src/main.py.
    • For more complex prompts, you can save them as text files in the prompts/ directory and read them in your code.
  2. Run the experiments:

    python src/main.py
    
  3. Analyze the results:

    • Results are saved to results/experiment_results.json.
    • Each entry records the prompt, the model's response, the technique used, and the latency of the call.
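Since each entry records a technique and a latency, a quick summary pass over the results file is straightforward. The sketch below assumes the JSON is a list of records with "technique" and "latency" fields, matching the description above; the `summarize_latency` helper is illustrative, not part of the repository:

```python
import json
from collections import defaultdict

def summarize_latency(path: str = "results/experiment_results.json") -> dict:
    """Return the average latency per prompting technique."""
    with open(path, encoding="utf-8") as f:
        records = json.load(f)
    totals = defaultdict(lambda: [0.0, 0])  # technique -> [latency sum, count]
    for record in records:
        entry = totals[record["technique"]]
        entry[0] += record["latency"]
        entry[1] += 1
    return {technique: s / n for technique, (s, n) in totals.items()}
```

A comparison like this is the main payoff of structured experiment tracking: it turns per-run JSON into a per-technique view.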

Languages

  • Jupyter Notebook: 97.8%
  • Python: 2.2%

Created July 14, 2025
Updated February 25, 2026