Alpha Evolve is an evolutionary algorithm system for evolving and optimizing prompts with genetic algorithms. It combines evaluator code, an evolution loop, and prompt templates to automatically improve prompt quality.
- Evaluator Module: Comprehensive fitness evaluation with multiple criteria (length, clarity, specificity, completeness)
- Evolution Loop: Full genetic algorithm implementation with selection, crossover, and mutation
- Prompt Templates: Template system for structured prompt generation and evolution
- Integrated System: Complete Alpha Evolve system combining all components
The evaluator scores prompt fitness as a weighted combination of multiple criteria (see the sketch after this list):
- Length optimization
- Clarity scoring
- Specificity measurement
- Completeness assessment
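To make the weighting concrete, here is a minimal sketch of a weighted sum over per-criterion scores, using the default weights listed under Configuration below. The function and score values are hypothetical, not the actual Evaluator internals.

```python
# Illustrative sketch only: `combined_fitness` and these score values
# are hypothetical; the shipped Evaluator may combine criteria differently.
def combined_fitness(scores: dict, weights: dict) -> float:
    """Weighted sum of per-criterion scores, each in [0, 1]."""
    return sum(weights[name] * scores.get(name, 0.0) for name in weights)

weights = {'length': 0.1, 'clarity': 0.3, 'specificity': 0.3, 'completeness': 0.3}
scores = {'length': 0.9, 'clarity': 0.8, 'specificity': 0.7, 'completeness': 0.85}
print(combined_fitness(scores, weights))  # ≈ 0.795
```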
The evolution loop is a full genetic algorithm implementation (illustrated in the sketch after this list):
- Population initialization
- Fitness evaluation
- Selection (tournament selection)
- Crossover (single-point)
- Mutation (swap, insert, delete, replace)
- Elitism
- Convergence detection
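For intuition, the sketch below implements the operator families named above in plain Python, treating a prompt as a list of words. It is a minimal illustration under that assumption; the shipped evolution loop may implement these operators differently.

```python
import random

def tournament_select(population, fitnesses, k=3):
    """Tournament selection: return the fittest of k random contenders."""
    contenders = random.sample(range(len(population)), k)
    return population[max(contenders, key=lambda i: fitnesses[i])]

def single_point_crossover(a, b):
    """Single-point crossover: swap tails of two word lists at a random cut."""
    cut = random.randint(1, min(len(a), len(b)) - 1)
    return a[:cut] + b[cut:], b[:cut] + a[cut:]

def swap_mutation(words, rate=0.1):
    """Swap mutation: with probability `rate`, exchange two word positions."""
    words = list(words)
    if random.random() < rate and len(words) > 1:
        i, j = random.sample(range(len(words)), 2)
        words[i], words[j] = words[j], words[i]
    return words

def apply_elitism(population, fitnesses, elite_size):
    """Elitism: carry the top `elite_size` individuals over unchanged."""
    ranked = sorted(range(len(population)), key=lambda i: fitnesses[i], reverse=True)
    return [population[i] for i in ranked[:elite_size]]
```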
The template system manages structured prompt templates (a fill-and-extract sketch follows this list):
- Template registration and management
- Variable extraction and filling
- Template evolution operations
- Predefined templates for common use cases
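As an illustration of variable extraction and filling, the sketch below assumes `{placeholder}` syntax, which matches the `template_variables` usage shown in the quick start; the real template manager may use a different syntax.

```python
import re

# Hypothetical template with {placeholder} variables.
template = "Write a {language} function that {task}. It should {requirements}."

def extract_variables(template: str) -> list:
    """Return the placeholder names found in a template."""
    return re.findall(r"\{(\w+)\}", template)

def fill_template(template: str, variables: dict) -> str:
    """Substitute each placeholder with its value."""
    return template.format(**variables)

print(extract_variables(template))   # ['language', 'task', 'requirements']
print(fill_template(template, {'language': 'Python', 'task': 'sorts a list',
                               'requirements': 'be efficient'}))
```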
The main AlphaEvolve module integrates all components:
- Unified API for prompt evolution
- Template-based prompt generation
- Result saving/loading
- Custom evaluator support
```bash
# Clone or download the repository
cd hackathon-science
# No external dependencies required!
# Uses only the Python standard library (Python 3.7+)
```

```python
from alpha_evolve import AlphaEvolve
# Initialize Alpha Evolve
alpha = AlphaEvolve(
    population_size=30,
    mutation_rate=0.15,
    crossover_rate=0.7,
    elite_size=3,
    max_generations=20
)
# Evolve prompts from seed
seed_prompts = [
    "Write a Python function to calculate fibonacci numbers",
    "Create a function that computes fibonacci sequence"
]
best = alpha.evolve_prompts(seed_prompts=seed_prompts, verbose=True)
print(f"Best prompt: {best.genome}")
print(f"Fitness: {best.fitness}")# Evolve using a template
best = alpha.evolve_prompts(
    template_name='code_generation',
    template_variables={
        'language': 'Python',
        'task': 'sorts a list',
        'requirements': 'be efficient',
        'output_format': 'the sorted list'
    }
)
```

```python
# Custom evaluation function: takes the prompt and a context dict,
# returns a fitness score in [0.0, 1.0]
def my_evaluator(prompt: str, context: dict) -> float:
    # Your custom evaluation logic
    score = 0.0
    if 'Python' in prompt:
        score += 0.3
    if 'function' in prompt:
        score += 0.3
    if len(prompt) > 50:
        score += 0.4
    return min(score, 1.0)
alpha.set_custom_evaluator(my_evaluator)
```

```python
# Evaluate a single prompt
evaluation = alpha.evaluate_prompt("Your prompt here")
print(evaluation)
# Output: {
# 'fitness': 0.85,
# 'length_score': 0.9,
# 'clarity_score': 0.8,
# ...
# }
```

Run the example script:

```bash
python alpha_evolve.py
```

This demonstrates:
- Evolving prompts from seed
- Using templates
- Evaluating prompts
- Getting top results
AlphaEvolve methods:

- `evolve_prompts(seed_prompts, template_name, template_variables, context, verbose)` - Evolve prompts
- `evaluate_prompt(prompt, context)` - Evaluate a single prompt
- `register_template(name, template, template_type, description, variables)` - Register a new template
- `get_best_prompts(n)` - Get the top N evolved prompts
- `get_evolution_history()` - Get evolution statistics history
- `set_custom_evaluator(evaluation_function)` - Set a custom evaluation function
- `save_results(filepath, best_individual)` - Save results to JSON
- `load_results(filepath)` - Load results from JSON
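A short usage sketch of a few of these methods; the method names and parameters come from the reference above, while the return shapes noted in the comments are assumptions.

```python
top = alpha.get_best_prompts(5)           # assumed: top 5 evolved prompts
history = alpha.get_evolution_history()   # assumed: per-generation statistics
alpha.save_results('results.json', best)  # persist the run to JSON
loaded = alpha.load_results('results.json')
```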
Evaluator methods:

- `evaluate(individual, context)` - Evaluate fitness
- `batch_evaluate(individuals, context)` - Evaluate multiple prompts
- `get_evaluation_details(individual, context)` - Get a detailed score breakdown
Evolution loop methods:

- `initialize_population(seed_prompts)` - Initialize the population
- `evolve(context, verbose)` - Run the evolution loop
- `get_best_individuals(n)` - Get the top individuals
- `get_history()` - Get the evolution history
Template manager methods:

- `register_template(name, template, template_type, description, variables)` - Register a template
- `get_template(name)` - Get a template by name
- `evolve_template(name, operations)` - Evolve a template
- `population_size`: Number of individuals per generation (default: 50)
- `mutation_rate`: Probability of mutation (default: 0.1)
- `crossover_rate`: Probability of crossover (default: 0.7)
- `elite_size`: Number of top individuals to preserve (default: 5)
- `max_generations`: Maximum generations to run (default: 100)
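For example, assuming every constructor parameter is an optional keyword argument (as the defaults above suggest), you can override just the ones you need:

```python
# population_size and mutation_rate are overridden;
# the other parameters keep the defaults listed above.
alpha = AlphaEvolve(population_size=100, mutation_rate=0.2)
```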
Default criteria weights:
- Length: 0.1
- Clarity: 0.3
- Specificity: 0.3
- Completeness: 0.3
Customize by creating an Evaluator with custom criteria:

```python
from evaluator import Evaluator

custom_evaluator = Evaluator({
    'length': 0.2,
    'clarity': 0.4,
    'specificity': 0.2,
    'completeness': 0.2
})
```

Predefined templates:

- `code_generation`: For code generation tasks
- `problem_solving`: For analytical problem-solving
- `creative_writing`: For creative writing tasks
- `data_analysis`: For data analysis tasks
MIT License - feel free to use and modify!
Contributions welcome! Areas for improvement:
- Additional mutation operators
- More sophisticated crossover strategies
- Advanced evaluation metrics
- Visualization tools
- Parallel evolution support