Multi-agent system for converting natural language event descriptions into valid HED (Hierarchical Event Descriptors) annotations. Part of the Annotation Garden Initiative.
```bash
pip install hedit
```

```bash
# Initialize with your OpenRouter API key (get one at https://openrouter.ai)
hedit init --api-key sk-or-v1-xxx
# Generate HED annotation from natural language
hedit annotate "participant pressed the left button with their index finger"
# Annotate from an image
hedit annotate-image stimulus.png
# Validate an existing HED string
hedit validate "Sensory-event, Visual-presentation"
# Check API health
hedit health
```

| Command | Description |
|---|---|
| `hedit init` | Configure API key and preferences |
| `hedit annotate "text"` | Convert natural language to HED |
| `hedit annotate-image <file>` | Generate HED from an image |
| `hedit validate "HED-string"` | Validate a HED annotation |
| `hedit health` | Check API status |
| `hedit config show` | Display current configuration |
```bash
# Use JSON output for scripting
hedit annotate "red circle appears" -o json
# Specify HED schema version
hedit annotate "button press" --schema 8.3.0
# Use a different API endpoint
hedit annotate "stimulus" --api-url https://api.annotation.garden/hedit-devConfig files are stored in ~/.config/hedit/:
Config files are stored in `~/.config/hedit/`:

- `config.yaml` - Settings (API URL, model, schema version)
- `credentials.yaml` - API key (stored securely)
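If you need to inspect these settings from a script, a small sketch, assuming the file is standard YAML and PyYAML is installed (hedit itself may manage these files differently):

```python
from pathlib import Path

import yaml  # PyYAML

# Path from the docs above; the key names are whatever hedit init wrote.
config_path = Path.home() / ".config" / "hedit" / "config.yaml"
settings = yaml.safe_load(config_path.read_text())
print(settings)  # e.g. API URL, model, schema version
```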
- Natural Language to HED: Describe events in plain English, get valid HED annotations
- Image Annotation: Annotate visual stimuli directly from image files
- Multi-Stage Validation: AI agents generate, validate, evaluate, and refine annotations
- Bring Your Own Key: Uses OpenRouter API; you control your LLM costs and model choice
- JSON Output: Easy integration with scripts and pipelines
- HED Schema Support: Works with official HED schemas (8.x)
HEDit uses a multi-agent architecture powered by LangGraph:
- Annotation Agent: Generates initial HED tags from your description
- Validation Agent: Checks HED syntax and tag validity
- Evaluation Agent: Assesses faithfulness and suggests improvements
- Assessment Agent: Identifies missing elements for completeness
The agents work in feedback loops, automatically refining the annotation until it passes all validation checks.
- HED Standard - Learn about HED annotations
- OpenRouter - Get an API key for LLM access
- GitHub Issues - Report bugs or request features
For running your own HEDit API server, there are two deployment options:

- Docker: Deploy with Docker for production use with GPU acceleration. See DEPLOYMENT.md and deploy/README.md.
- Local: Run the API server locally for development. See Local Development Setup below.
```mermaid
flowchart LR
Start([Input]) --> Schema[(JSON Schema<br/>Vocabulary)]
Schema --> Ann[Annotation Agent]
Ann --> Val{Validation}
Val -->|Invalid<br/>retry| ValErr[Syntax Errors]
ValErr --> Ann
Val -->|Max attempts| Fail[Failed]
Val -->|Valid| Eval{Evaluation}
Eval -->|Not faithful| EvalErr[Tag Suggestions<br/>Extension Warnings]
EvalErr --> Ann
Eval -->|Faithful| Assess{Assessment}
Assess -->|Incomplete| AssErr[Missing Elements]
AssErr -.Optional.-> Ann
Assess -->|Complete| Success[Final Annotation]
AssErr -->|Report| Success
Success --> End([Output])
Fail --> End
style Start fill:#e1f5ff
style End fill:#e1f5ff
style Ann fill:#fff4e1
style Val fill:#ffe1e1
style Eval fill:#e1ffe1
style Assess fill:#f0e1ff
style Success fill:#e1ffe1
style Fail fill:#ffe1e1
style Schema fill:#e8e8e8
```
Legend:

- → Solid arrows: Automatic loops
- ⇢ Dotted arrows: Optional refinement
- 🔵 Input/Output | 🟡 Annotation | 🔴 Validation | 🟢 Evaluation | 🟣 Assessment
- Annotation Loop (Automatic):
  - Generates HED annotation using short-form tags
  - Uses complete HED syntax rules (parentheses, curly braces, #, /)
  - Considers extensionAllowed tags for extensions
  - Maximum validation attempts: 5 (configurable)
- Validation Loop (Automatic):
  - Checks syntax and tag validity
  - Provides specific error codes and messages
  - Loops back to the annotation agent if errors are found
  - Stops if max attempts are reached
- Evaluation Loop (Automatic):
  - Assesses faithfulness to the original description
  - Validates tags against the JSON schema vocabulary
  - Suggests closest matches for invalid tags
  - Warns about non-portable tag extensions
  - Loops back if not faithful
- Assessment Loop (Optional):
  - Final completeness check
  - Identifies missing dimensions
  - Can trigger optional refinement or report only
  - Configurable behavior
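As an illustrative, non-authoritative sketch of this control flow (plain Python with invented stand-ins for the agents; the real pipeline is a LangGraph graph):

```python
# Illustrative control flow only; all four agent callables are stand-ins.
MAX_ATTEMPTS = 5  # "Maximum validation attempts: 5 (configurable)"

def annotate_with_feedback(description, annotate, validate, evaluate, assess):
    """Run the generate -> validate -> evaluate -> assess feedback loops."""
    feedback = None
    for _ in range(MAX_ATTEMPTS):
        annotation = annotate(description, feedback)  # Annotation Agent
        errors = validate(annotation)                 # Validation Agent
        if errors:                                    # invalid: retry with errors
            feedback = errors
            continue
        critique = evaluate(description, annotation)  # Evaluation Agent
        if critique:                                  # not faithful: retry
            feedback = critique
            continue
        report = assess(description, annotation)      # Assessment Agent
        # Missing elements may trigger optional refinement or just be reported.
        return annotation, report
    raise RuntimeError("Failed: maximum validation attempts reached")
```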
Docker deployment requires:

- Docker with NVIDIA Container Toolkit
- Docker Compose
- All other dependencies (Python, Node.js, HED schemas, validator) are included in the image!

Local development requires:

- Python 3.11+
- CUDA-capable GPU (for LLM serving)
- Node.js 18+ (for HED JavaScript validator)
- Conda (recommended)
```bash
# Clone repository
cd /path/to/hedit
# Build and start (auto-pulls model and includes all HED resources)
docker-compose up -d
# Monitor first start (~10-20 min for model download)
docker-compose logs -f
# Verify
curl http://localhost:38427/health
# Open frontend
open frontend/index.html
```

- Create conda environment:

```bash
source ~/miniconda3/etc/profile.d/conda.sh
conda create -n hedit python=3.11 -y
conda activate hedit
```

- Install dependencies:

```bash
pip install -e ".[dev]"
```

- Clone HED resources (if not using Docker):

```bash
# NOTE: Using forked hed-schemas with fix for JSON inheritance attributes
# TODO: Revert to hed-standard/hed-schemas once upstream fix is merged
git clone -b fix/json-inheritance-attributes https://github.com/neuromechanist/hed-schemas.git ../hed-schemas
git clone https://github.com/hed-standard/hed-javascript.git ../hed-javascript
cd ../hed-javascript && npm install && npm run build && cd -
```

- Run the API server:

```bash
uvicorn src.api.main:app --reload --host 0.0.0.0 --port 38427
```

API endpoints:

- `POST /annotate`: Generate HED annotation from natural language
- `POST /validate`: Validate a HED annotation
- `GET /health`: Health check
- API URL: `http://localhost:38427`
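As a quick sketch of calling the API directly (the endpoint paths and port are from above, but the JSON request and response field names are assumptions; check the FastAPI docs, typically served at `/docs`, for the real schema):

```python
import requests

BASE_URL = "http://localhost:38427"  # local dev server from above

# Health check (GET /health).
print(requests.get(f"{BASE_URL}/health", timeout=10).json())

# Generate an annotation (POST /annotate). The "description" field name is an
# assumption about the request schema, not a documented contract.
resp = requests.post(
    f"{BASE_URL}/annotate",
    json={"description": "participant pressed the left button"},
    timeout=120,
)
resp.raise_for_status()
print(resp.json())
```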
Run tests:

```bash
pytest
```

Lint and format:

```bash
ruff check src/ tests/
ruff format src/ tests/
```

Coverage report:

```bash
pytest --cov=src --cov-report=html
```

Project layout:

```
hedit/
├── src/
│   ├── agents/        # LangGraph agent implementations
│   ├── validation/    # HED validation integration
│   ├── utils/         # Helper utilities
│   └── api/           # FastAPI backend
├── frontend/          # Web UI
├── tests/             # Test suite
├── docs/              # Documentation
└── .context/
    └── plan.md        # Project roadmap
```
License: MIT