Transform AI-generated text into authentic, human-like content that bypasses AI detection systems.
AI Humanizer is a powerful tool that transforms AI-generated text into natural, human-sounding content. Built on research findings about what bypasses AI detectors in 2025, it focuses on:
- Perplexity variation - Unpredictable word choices
- Burstiness - Mixing short and long sentences dramatically
- Human quirks - Contractions, filler words, incomplete thoughts
- Emotional authenticity - Personal voice and natural speech patterns
- High Success Rate - Designed to achieve <10% AI detection scores
- Batch Processing - Process entire CSV files efficiently
- Resume Support - Continue from where you left off if interrupted
- Testing Scripts - Verify your setup before full runs
- Progress Tracking - Real-time progress bars with tqdm
- Multiple Prompt Versions - Optimized prompts for different use cases
- Python 3.8+ installed on your system
- Ollama installed and running (Download here)
- Llama3 model pulled in Ollama
```bash
# Install Ollama, then run:
ollama pull llama3:8b
```

- Clone the repository:

  ```bash
  git clone https://github.com/Mohit1053/Humanizer.git
  cd Humanizer
  ```

- Install dependencies:

  ```bash
  pip install -r requirements.txt
  ```

- Verify your setup:

  ```bash
  python test_humanizer.py
  ```
- Quick Test (5 samples)

  ```bash
  python test_humanizer.py
  ```

  Test the humanizer on 5 sample rows to verify everything works.

- Generate Test Samples

  ```bash
  python generate_test_samples.py
  ```

  Creates formatted samples ready for AI detection testing.

- Full CSV Processing

  ```bash
  python humanize_v2.py
  ```

  Process your entire CSV file with humanized text (a minimal sketch of the batch loop follows this list).
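Under the hood, the full-CSV run is just a resumable loop over the rows. The sketch below is a minimal illustration of that pattern, not the exact script: it assumes pandas and tqdm are installed, that your input CSV has a pledge column, and the humanize() stub stands in for the real model call.

```python
# Minimal, illustrative batch loop (not the exact humanize_v2.py code).
# Assumes pandas and tqdm are installed and the CSV has a "pledge" column.
import random
import time

import pandas as pd
from tqdm import tqdm

INPUT_CSV = "your_input_file.csv"
OUTPUT_CSV = "your_output_file.csv"
START_ROW = 0                     # resume point: rows before this are skipped
BATCH_SIZE = 50                   # save progress every 50 rows
DELAY_MIN, DELAY_MAX = 0.5, 1.0   # rate limiting between model calls (seconds)


def humanize(text: str) -> str:
    # Placeholder: the real scripts send the text to the local Llama3 model.
    return text


df = pd.read_csv(INPUT_CSV)
humanized = []
for i, text in enumerate(tqdm(df["pledge"], desc="Humanizing")):
    if i < START_ROW:
        humanized.append("")      # row already handled in an earlier, interrupted run
        continue
    humanized.append(humanize(str(text)))
    time.sleep(random.uniform(DELAY_MIN, DELAY_MAX))
    if (i + 1) % BATCH_SIZE == 0:  # periodic save so an interrupted run can resume
        df.iloc[: i + 1].assign(humanized=humanized).to_csv(OUTPUT_CSV, index=False)

df["humanized"] = humanized
df.to_csv(OUTPUT_CSV, index=False)
```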
Edit the configuration section in any script:
```python
# Input/Output
INPUT_CSV = "your_input_file.csv"
OUTPUT_CSV = "your_output_file.csv"

# Model Settings
MODEL = 'llama3:8b'
BATCH_SIZE = 50

# Processing
START_ROW = 0    # Resume from a specific row
DELAY_MIN = 0.5  # Rate limiting
DELAY_MAX = 1.0
```

Project structure:

```
Humanizer/
├── humanize_v2.py            # Main humanizer script (optimized)
├── test_humanizer.py         # Quick test script (5 samples)
├── generate_test_samples.py  # Generate formatted test samples
├── generate_samples.py       # Alternative sample generator
├── quick_test.py             # Quick testing utilities
├── single_test.py            # Test single text transformation
├── optimize_prompts.py       # Prompt optimization experiments
├── humanize_csv.py           # Legacy version
├── final_pledges_merged.csv  # Sample input data
├── test_results.csv          # Test output results
├── TEST_SAMPLES_READY.md     # Pre-generated test samples
└── README.md                 # This file
```
After humanizing your text, test it with AI detectors such as GPTZero or ZeroGPT.
See TEST_SAMPLES_READY.md for pre-generated samples ready for testing.
The humanizer uses carefully crafted prompts (sketched after the example below) that instruct the LLM to:
- Preserve exact emotional meaning - Keep the core message intact
- Add natural speech patterns - "honestly", "like", "you know", "I mean"
- Use heavy contractions - I'm, can't, won't, it's, that's, don't
- Mix sentence lengths - Very short (3-5 words) to long rambling ones
- Sound conversational - Like venting to a close friend
- Include self-doubt - Natural human uncertainty
Original (AI-generated):
Surgery prep is torture, thinking through every step as if it's my final test. Each incision, each stitch might mean life or death.
Humanized:
Honestly, prep for surgery is just brutal, you know? Like, I'm literally thinking through every single step like it's my final exam or something. Each incision, each stitch - I mean, what if I mess up and someone dies on the table?
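To make the prompting concrete, here is a minimal sketch of how a prompt built on the principles above could be sent to the model. It assumes the official ollama Python client; the actual scripts and prompt wording in this repo may differ.

```python
# Illustrative only: one possible prompt and a single call to a local Llama3 model
# via the `ollama` Python client. The repo's actual prompts may differ.
import ollama

PROMPT_TEMPLATE = """Rewrite the text below so it sounds like a real person venting to a close friend.
Keep the exact emotional meaning. Use heavy contractions and natural filler words
("honestly", "like", "you know", "I mean"). Mix very short sentences with long,
rambling ones, and let a little self-doubt show through.
Return only the rewritten text.

Text: {text}"""


def humanize(text: str, model: str = "llama3:8b") -> str:
    response = ollama.chat(
        model=model,
        messages=[{"role": "user", "content": PROMPT_TEMPLATE.format(text=text)}],
    )
    return response["message"]["content"].strip()


if __name__ == "__main__":
    print(humanize("Surgery prep is torture, thinking through every step as if it's my final test."))
```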
- Processing Speed: ~50 rows per minute (with rate limiting)
- AI Detection Score: Typically 0-10% on GPTZero and ZeroGPT
- Batch Efficiency: Resume capability prevents data loss
- Model Used: Llama3 8B (free, runs locally)
| Script | Purpose | Use When |
|---|---|---|
| `test_humanizer.py` | Quick 5-row test | First-time setup verification |
| `humanize_v2.py` | Main production script | Processing full datasets |
| `generate_test_samples.py` | Create test samples | Need samples for AI detector testing |
| `single_test.py` | Test single text | Experimenting with individual texts |
| `optimize_prompts.py` | Prompt experiments | Developing new prompt versions |
If Ollama isn't responding:

```bash
# Check if Ollama is running
ollama list

# Start Ollama service if needed
ollama serve
```

If the model is missing:

```bash
# Pull the required model
ollama pull llama3:8b

# Verify installation
ollama list
```

If CSV processing fails:

- Check that the CSV file path is correct
- Ensure the pledge column name matches your CSV
- Try testing with `test_humanizer.py` first (a quick column check is sketched below)
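If you're not sure what the text column is called, a quick check of the header (assuming pandas is installed) looks like this:

```python
# Print the column names of your input CSV so you can confirm the pledge column.
# Assumes pandas is installed; replace the filename with your own input file.
import pandas as pd

print(pd.read_csv("your_input_file.csv").columns.tolist())
```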
Contributions are welcome! Feel free to:
- Report bugs
- Suggest new features
- Submit pull requests
- Share prompt improvements
Please see our Contributing Guidelines for details on:
- How to fork and set up the project
- Code style guidelines
- Commit message format
- Pull request process
See CHANGELOG.md for a detailed history of changes.
This project is licensed under the MIT License - see the LICENSE file for details.
This tool is for educational and research purposes. Always ensure your use case complies with relevant terms of service and ethical guidelines.
- Built with Ollama for local LLM inference
- Uses Meta's Llama3 model
- Research-based approach to bypassing AI detection
For questions or feedback, please open an issue on GitHub.
Made with ❤️ by Mohit1053