# Agentick v1.0 — Initial Release

A universal benchmark for evaluating AI agents across 37 procedurally generated gridworld tasks spanning six capability categories: navigation, planning, reasoning, memory, generalization, and multi-agent coordination.

## Highlights

- **37 tasks** with 4 difficulty levels each (easy/medium/hard/expert), fully procedural generation with deterministic seeding
- **Gymnasium API** — drop-in compatible with any RL, LLM, VLM, or hybrid agent
- **Multi-modal observations** — ASCII, natural language, isometric pixel, flat 2D grid, and state dict
- **Oracle agents for all tasks** — hand-coded optimal policies for trajectory generation, solvability verification, and score upper bounds
- **Agent harness** for LLM/VLM evaluation with OpenAI, Gemini, and HuggingFace backends
- **Reproducible evaluation** via deterministic train/eval seed splits (2000 train, 25 eval per task-difficulty)
- **Data collection pipeline** with HuggingFace Datasets export for SFT/behavior cloning
- **Interactive webapp** — play tasks yourself in the browser across all observation modes

## Quick Start

```shell
git clone https://github.com/roger-creus/agentick.git && cd agentick
uv sync --extra all
uv run agentick webapp      # Play in browser
uv run agentick list-tasks  # See all 37 tasks
```

## Links

- Blog
- Documentation
- Leaderboard
- Task Gallery
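To illustrate what "Gymnasium API with deterministic seeding" means in practice, here is a minimal self-contained sketch of a toy gridworld following the Gymnasium `reset(seed=...)`/`step(action)` calling convention. All class and field names here are illustrative assumptions, not Agentick's actual API; the point is only the interface shape and the seed-determinism property.

```python
import random

class ToyGridEnv:
    """Toy Gymnasium-style gridworld (illustrative only, not Agentick's API).

    reset(seed=...) reseeds the layout RNG, so the same seed always
    produces the same episode -- the property that makes fixed
    train/eval seed splits reproducible.
    """

    def __init__(self, size: int = 5):
        self.size = size
        self._rng = random.Random()

    def reset(self, seed=None):
        if seed is not None:
            self._rng.seed(seed)
        self.agent = (0, 0)
        # Goal placement is the only procedurally generated element here.
        self.goal = (self._rng.randrange(self.size),
                     self._rng.randrange(self.size))
        return self._obs(), {}  # (observation, info), as in Gymnasium

    def step(self, action: int):
        # Actions: 0=up, 1=down, 2=left, 3=right; moves clip at the walls.
        dr, dc = [(-1, 0), (1, 0), (0, -1), (0, 1)][action]
        r, c = self.agent
        self.agent = (min(max(r + dr, 0), self.size - 1),
                      min(max(c + dc, 0), self.size - 1))
        terminated = self.agent == self.goal
        reward = 1.0 if terminated else 0.0
        # Gymnasium 5-tuple: obs, reward, terminated, truncated, info
        return self._obs(), reward, terminated, False, {}

    def _obs(self):
        # A "state dict" style observation, one of several modes a
        # multi-modal benchmark might expose.
        return {"agent": self.agent, "goal": self.goal}

# Same seed -> identical layout, which is what a deterministic
# train/eval seed split relies on.
env = ToyGridEnv()
obs_a, _ = env.reset(seed=42)
obs_b, _ = env.reset(seed=42)
assert obs_a == obs_b
```

Any agent that speaks this interface (an RL policy, or an LLM harness that renders the observation as text and parses an action back) can be evaluated on such an environment without adapter code, which is what "drop-in compatible" buys you.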