MateoLoz/Agent

AI Agent with Edge Gateway

This project implements a production-ready AI agent backend built with Node.js + TypeScript, exposed via a webhook and fronted by a Cloudflare Worker acting as an edge gateway.

The focus of the project is clean architecture, testability, and real-world DevOps practices, rather than framework-heavy abstractions.


🧠 Architecture Overview

Client
  ↓
Cloudflare Worker (Edge Gateway)
  ↓
Node.js Backend (Webhook)
  ↓
AI Agent (LLM)

  • Backend: Handles business logic and AI agent execution
  • Agent: Encapsulates LLM interaction and output validation
  • Worker: Lightweight edge gateway (routing, validation, forwarding)
  • CI: Automated tests on every Pull Request
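The Worker's three responsibilities (routing, validation, forwarding) can be sketched in a few lines. This is an illustration only: the `/webhook` path and the `BACKEND_URL` binding are assumed names, not the project's actual ones.

```typescript
// Hedged sketch of the edge gateway: route, validate, forward.
// The /webhook path and BACKEND_URL binding are assumptions.
interface Env {
  BACKEND_URL: string;
}

const worker = {
  async fetch(request: Request, env: Env): Promise<Response> {
    const url = new URL(request.url);

    // Routing: only the webhook route is handled at the edge.
    if (request.method !== "POST" || url.pathname !== "/webhook") {
      return new Response("Not found", { status: 404 });
    }

    // Validation: reject non-JSON payloads before they reach the backend.
    const contentType = request.headers.get("content-type") ?? "";
    if (!contentType.includes("application/json")) {
      return new Response("Unsupported media type", { status: 415 });
    }

    // Forwarding: pass the body through to the Node.js backend webhook.
    return fetch(`${env.BACKEND_URL}/webhook`, {
      method: "POST",
      headers: { "content-type": "application/json" },
      body: await request.text(),
    });
  },
};

export default worker;
```

Keeping the Worker this thin means all business logic stays in the backend, which is what makes the edge layer cheap to test and replace.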

🚀 Tech Stack

  • Node.js 20 (LTS)
  • TypeScript
  • Express
  • Vitest (unit testing)
  • Supertest (API testing)
  • Cloudflare Workers (Wrangler)
  • Docker (multi-stage build)
  • Docker Compose (local orchestration)
  • GitHub Actions (CI)

🤖 AI Agent Design

The AI agent:

  • Uses a system prompt to control behavior
  • Produces structured JSON output
  • Validates responses before returning them
  • Fails fast on invalid or empty model responses

The agent logic is framework-agnostic and fully testable.

Design decision: The OpenAI Agents SDK was intentionally not used to keep the agent portable, testable, and vendor-neutral.
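The validate-then-return flow can be sketched as follows. `chatCompletion`, `AgentReply`, and the prompt text are hypothetical stand-ins for the project's own wrapper and schema, used here only to show the fail-fast shape:

```typescript
// Framework-agnostic agent sketch. `chatCompletion` stands in for the
// project's OpenAI client wrapper and is injected, which keeps the agent
// fully testable without external calls.
interface AgentReply {
  answer: string;
}

type ChatFn = (system: string, user: string) => Promise<string | null>;

const SYSTEM_PROMPT =
  'You are a helpful agent. Reply ONLY with JSON: {"answer": string}';

async function runAgent(input: string, chatCompletion: ChatFn): Promise<AgentReply> {
  const raw = await chatCompletion(SYSTEM_PROMPT, input);

  // Fail fast on empty model output.
  if (!raw || raw.trim() === "") {
    throw new Error("Empty model response");
  }

  // Validate the structured JSON output before returning it.
  let parsed: unknown;
  try {
    parsed = JSON.parse(raw);
  } catch {
    throw new Error("Model did not return valid JSON");
  }
  if (
    typeof parsed !== "object" ||
    parsed === null ||
    typeof (parsed as { answer?: unknown }).answer !== "string"
  ) {
    throw new Error("Model response failed schema validation");
  }
  return parsed as AgentReply;
}
```

Because the LLM call is injected rather than imported, unit tests can pass in a stub function and exercise every validation branch deterministically.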


🧪 Testing Strategy

Unit Tests

  • AI agent logic tested with Vitest
  • OpenAI SDK is fully mocked (no external calls)

API Tests

  • /ping endpoint tested using Supertest
  • Express app tested in-memory (no open ports)

CI

  • Tests run automatically on every Pull Request using GitHub Actions
  • No real API keys or external services are required in CI
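A workflow along these lines is enough; the file name, action versions, and Node setup below are illustrative assumptions, not the repository's exact file:

```yaml
# .github/workflows/ci.yml — illustrative sketch
name: CI
on:
  pull_request:

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: npm test   # LLM calls are mocked, so no API keys are needed
```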

🐳 Docker Setup

The backend is packaged using a multi-stage Docker build for minimal image size and fast startup.

Build & Run (Backend only)

docker build -t ai-backend .
docker run -p 3000:3000 --env-file .env ai-backend

Health Check

curl http://localhost:3000/ping

Expected response:

{ "message": "Pong!", "server_message": "Server running smoothly" }
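The actual route lives in the Express app (src/server.ts). As a dependency-free sketch of the same contract, here is the endpoint expressed with Node's built-in http module; the handler name is hypothetical:

```typescript
// Dependency-free sketch of the /ping health check. The real project uses
// Express; the response shape matches the expected response above.
import { createServer, type IncomingMessage, type ServerResponse } from "node:http";

function handlePing(req: IncomingMessage, res: ServerResponse): void {
  if (req.method === "GET" && req.url === "/ping") {
    res.writeHead(200, { "content-type": "application/json" });
    res.end(
      JSON.stringify({ message: "Pong!", server_message: "Server running smoothly" })
    );
    return;
  }
  res.writeHead(404);
  res.end();
}

export const server = createServer(handlePing);
// server.listen(3000);  // the real backend binds port 3000
```

Because the handler is a plain function on an unstarted server, tests can exercise it in-memory or on an ephemeral port, with no fixed port open.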

🔧 Local Development (Full Stack)

For local development, the backend and the Cloudflare Worker can be orchestrated together using Docker Compose.

docker-compose up --build

  • Backend: http://localhost:3000
  • Worker (local): http://localhost:8787

Note: In production, the Worker runs on Cloudflare Edge and is not containerized.
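A compose file for this setup would look roughly like the following; the service names, the wrangler dev command, and the volume layout are assumptions for illustration, not the repo's exact file:

```yaml
# docker-compose.yml — illustrative sketch of the local orchestration
services:
  backend:
    build: .
    ports:
      - "3000:3000"
    env_file: .env
  worker:
    image: node:20
    working_dir: /app
    volumes:
      - ./worker:/app
    command: npx wrangler dev --ip 0.0.0.0 --port 8787
    ports:
      - "8787:8787"
    depends_on:
      - backend
```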


📁 Project Structure

.
├─ src/
│  ├─ agents/        # AI agent logic
│  ├─ services/      # OpenAI client wrapper
│  ├─ prompts/       # System prompts
│  └─ server.ts      # Express app
├─ worker/           # Cloudflare Worker
├─ tests/            # API & integration tests
├─ Dockerfile
├─ docker-compose.yml
├─ .dockerignore
└─ .github/workflows # CI

🧩 Design Principles

  • Separation of concerns (agent, API, edge)
  • Testability first (mocked LLM, deterministic tests)
  • No vendor lock-in
  • Production-ready DevOps practices
  • Fail fast, validate everything

📌 Notes

  • No real API keys are used in CI
  • LLM calls are mocked during tests
  • The Worker is intentionally lightweight

🧠 Author

Built as a real-world example of an AI Engineer / Backend Engineer project focused on architecture, testing, and deployment best practices.
