From 503b17f012047847b0f99b730b6345e7d086a679 Mon Sep 17 00:00:00 2001
From: Vidhu Mathur <74452705+Kranium2002@users.noreply.github.com>
Date: Mon, 29 Jul 2024 02:06:42 +0530
Subject: [PATCH] Add README

---
 README.md | 112 ++++++++++++------------------------------------------
 1 file changed, 24 insertions(+), 88 deletions(-)

diff --git a/README.md b/README.md
index af38737..fbb09e1 100644
--- a/README.md
+++ b/README.md
@@ -1,103 +1,39 @@
-# OptimAI
+**Title:** Project Name
 
-OptimAI is a powerful Python module designed to optimize your code by analyzing its performance and providing actionable suggestions. It leverages a large language model (LLM) to give you detailed insights and recommendations based on the profiling data collected during the execution of your code. This module supports various kinds of profilers from the [perfwatch package](https://github.com/Khushiyant/perfwatch).
+**Description:**
+This project is a brief description of what the project is about.
 
-## Features
+**Getting Started:**
 
-- Custom decorators to optimize functions with ease.
-- Integration with perfwatch for performance profiling.
-- Capture and analyze stdout, function execution time, network usage, function calls, CPU/GPU usage, etc. using [perfwatch](https://github.com/Khushiyant/perfwatch).
-- Seamless integration with various LLMs for code optimization suggestions.
-- Support for OpenAI, Google Gemini, HuggingFace (offline), Ollama and Anthropic.
-- Optimized prompts for best performance on any LLM using [dspy](https://github.com/stanfordnlp/dspy).
+**Prerequisites:**
 
-## Installation
+* Node.js v16 or higher
+* npm or yarn
 
-You can install OptimAI using pip:
+**Installation:**
 
-```bash
-pip install optimizeai
-```
+1. Clone the repository: `git clone https://github.com/your-username/project-name.git`
+2. Navigate to the project directory: `cd project-name`
+3. Install dependencies using npm: `npm install` (or `yarn install`)
 
-## Setup
+**Usage:**
 
-To use OptimAI, you need to configure it with your preferred LLM provider and API key. Supported LLM providers include Google (Gemini models), OpenAI, Ollama, HuggingFace and Anthropic. For Ollama, you need Ollama installed and the model artifacts already downloaded.
+1. Start the project: `npm start` (or `yarn start`)
+2. Open the project in your favorite code editor
+3. Follow the instructions in the `README.md` or project documentation
+4. Feel free to create issues or pull requests on GitHub
 
-1. **Select the LLM Provider**:
-   - For Google Gemini models: `llm = "google"`
-   - For OpenAI models: `llm = "openai"`
-   - For Hugging Face offline: `llm = "huggingface"`
-   - For Anthropic models: `llm = "anthropic"`
-   - For local Ollama models: `llm = "ollama"`
+**Contributing:**
 
-2. **Choose the Model**:
-   - Example: `model = "gpt-4"`, `model = "gemini-1.5-flash"`, `model = "codegemma"`, or any other model specific to the chosen LLM provider.
+Contributions are welcome! Please read the [contributing guidelines](CONTRIBUTING.md) before submitting a pull request.
 
-3. **Set the API Key**:
-   - Use the corresponding API key for the selected LLM provider. No API key is required for local HuggingFace inference and Ollama.
+**License:**
 
-## Sample Code
+This project is licensed under the [MIT License](LICENSE).
 
-Here's a basic example demonstrating how to use OptimAI to optimize a function:
+**Support:**
 
-```python
-from optimizeai.decorators.optimize import optimize
-from optimizeai.config import Config
-from dotenv import load_dotenv
-import time
-import os
-
-# Load environment variables
-load_dotenv()
-llm = os.getenv("LLM")
-key = os.getenv("API_KEY")
-model = os.getenv("MODEL")
-
-# Configure LLM
-llm_config = Config(llm=llm, model=model, key=key)
-perfwatch_params = ["line", "cpu", "time"]
-
-# Define a test function to be optimized
-@optimize(config=llm_config, profiler_types=perfwatch_params)
-def test():
-    for _ in range(10):
-        time.sleep(0.1)
-        print("Hello World!")
-    pass
-
-if __name__ == "__main__":
-    test()
-```
+If you have any questions or need help, feel free to:
 
-### Setting Environment Variables
-
-You can set the environment variables (`LLM`, `API_KEY`, `MODEL`) in a `.env` file for ease of use:
-
-```
-LLM=google
-API_KEY=your_google_api_key
-MODEL=gemini-1.5-flash
-```
-
-## Upcoming Features
-
-- **Improved Context for Code Optimization**: Enhance the context provided to the LLM for more accurate and relevant optimization recommendations.
-- **Report Generation**: A proper optimization report will be generated.
-- **Support for a Better Config**: Improved config support is coming, through which you can set various LLM parameters.
-
-## Contributing
-
-We welcome contributions to OptimAI! If you have an idea for a new feature or have found a bug, please open an issue on GitHub. If you'd like to contribute code, please fork the repository and submit a pull request.
-
-### Steps to Contribute
-
-1. Fork the repository.
-2. Create a new branch (`git checkout -b feature-branch`).
-3. Make your changes.
-4. Commit your changes (`git commit -m 'Add new feature'`).
-5. Push to the branch (`git push origin feature-branch`).
-6. Open a pull request.
-
-## License
-
-OptimAI is licensed under the MIT License. See the [LICENSE](LICENSE) file for more details.
+* Create an issue on GitHub
+* Send an email to [your-email@address.com](mailto:your-email@address.com)
\ No newline at end of file