# ai-optimization

Here are 19 public repositories matching this topic...

Prompt Booster: A comprehensive tool for optimizing LLM prompts with version control, A/B testing, and template management. Supports multiple AI providers (OpenAI, Gemini, DeepSeek, Qwen, etc.) across web and desktop platforms. Increase your AI prompt effectiveness with professional engineering tools.

  • Updated Jun 21, 2025
  • TypeScript

Winner of the AI Engineer World Fair Hackathon: transforming code, one function at a time, to reduce digital carbon footprints and create a more sustainable digital world.

  • Updated Apr 3, 2025
  • Python

Dive into advanced quantization techniques. Learn to implement and customize linear quantization functions, measure quantization error, and compress model weights using PyTorch for efficient and accessible AI models.

  • Updated May 22, 2024
  • Jupyter Notebook
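As a rough sketch of the linear quantization described above, the following PyTorch snippet quantizes a weight tensor and measures the resulting error; the symmetric per-tensor 8-bit scheme and function names are illustrative assumptions, not the repository's exact code.

```python
import torch

def linear_quantize(w: torch.Tensor, bits: int = 8):
    """Symmetric per-tensor linear quantization of float weights to signed ints."""
    qmax = 2 ** (bits - 1) - 1            # 127 for 8-bit
    scale = w.abs().max() / qmax          # single scale for the whole tensor
    q = torch.clamp(torch.round(w / scale), min=-qmax - 1, max=qmax).to(torch.int8)
    return q, scale

def linear_dequantize(q: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    return q.float() * scale

# Measure the quantization error on a random weight matrix
w = torch.randn(256, 256)
q, scale = linear_quantize(w)
error = (w - linear_dequantize(q, scale)).abs().mean()
print(f"mean absolute quantization error: {error:.6f}")
```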

Optimize Google Chrome with installation tweaks, registry adjustments, flags, debloating, file compression, and AI optimizations. Reduce memory and CPU usage for faster performance and improved search results.

  • Updated Mar 21, 2025
  • Batchfile

Breakthrough polymer-enhanced fusion framework achieving 8.32× WEST tokamak performance with LQG physics integration. Grid parity achieved at $0.03–$0.05/kWh. Complete simulation suite for HTS materials, AI coil optimization, liquid metal divertors, and economic analysis.

  • Updated Jun 21, 2025
  • Python

Agentic Workflow Evaluation: Text Summarization Agent. This project implements an AI agent evaluation workflow for a text summarization model built with the OpenAI API and the Transformers library. It follows an iterative approach: generate summaries, analyze metrics, adjust parameters, and retest to refine the agent for accuracy, readability, and performance.

  • Updated Feb 23, 2025
  • Python
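A minimal sketch of the generate, score, adjust, retest loop described above, assuming the openai Python client; the model name, the word-overlap scoring stand-in, and the temperature sweep are illustrative assumptions rather than the project's actual code.

```python
from openai import OpenAI  # assumes the openai>=1.0 Python client

client = OpenAI()

def summarize(text: str, temperature: float) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": f"Summarize the following text:\n\n{text}"}],
        temperature=temperature,
        max_tokens=128,
    )
    return resp.choices[0].message.content

def score(summary: str, reference: str) -> float:
    # Crude word-overlap proxy standing in for a real metric such as ROUGE
    s, r = set(summary.lower().split()), set(reference.lower().split())
    return len(s & r) / max(len(r), 1)

document = "<long article text>"
reference = "<gold reference summary>"

# Iterate: generate a summary, measure it, adjust a parameter, retest
best_temperature, best_score = None, -1.0
for temperature in (0.0, 0.3, 0.7):
    s = score(summarize(document, temperature), reference)
    if s > best_score:
        best_temperature, best_score = temperature, s
print(f"best temperature: {best_temperature} (overlap score {best_score:.3f})")
```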

This repository explores OpenAI’s o1 model, a cutting-edge AI designed for abstract reasoning, coding, and vision-based tasks. It provides insights into o1’s strengths, advanced prompting techniques, task delegation, and real-world applications, enabling developers to build intelligent, high-performance AI-driven solutions.

  • Updated Feb 5, 2025
  • Jupyter Notebook

The course teaches how to fine-tune LLMs with Group Relative Policy Optimization (GRPO), a reinforcement learning method that improves model reasoning with minimal data. It covers reinforcement fine-tuning (RFT) concepts, reward design, LLM-as-a-judge evaluation, and deploying training jobs on the Predibase platform.

  • Updated Jun 13, 2025
  • Jupyter Notebook
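To make "reward design" concrete, here is a minimal sketch of the kind of programmable reward function a GRPO-style RFT run optimizes against; the `<answer>` tag format and partial-credit weights are assumptions for illustration, not the course's or Predibase's exact interface.

```python
import re

def reward(prompt: str, completion: str, gold_answer: str) -> float:
    """Score one sampled completion: format credit plus correctness credit."""
    score = 0.0
    match = re.search(r"<answer>(.*?)</answer>", completion, re.DOTALL)
    if match:
        score += 0.2                                  # partial credit for following the format
        if match.group(1).strip() == gold_answer.strip():
            score += 1.0                              # full credit for the right answer
    return score

# GRPO samples a group of completions per prompt and favors the higher-reward ones
print(reward("What is 6 * 7?", "6 * 7 = 42, so <answer>42</answer>", "42"))  # 1.2
print(reward("What is 6 * 7?", "The answer is 42.", "42"))                    # 0.0
```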
