After building dozens of AI agents, I realized most frameworks were overengineered. What if building agents could be as simple as chaining functions? That’s why I built AgentU - a minimalist Python framework where >> chains steps and & runs them in parallel.
The Problem with Existing Frameworks
Most AI agent frameworks make you learn complex concepts: orchestrators, tasks, execution modes, message passing. But at the end of the day, you’re just trying to:
- Call some tools
- Chain them together
- Maybe run some in parallel
Why should that require hundreds of lines of boilerplate?
AgentU: Radical Simplicity
from agentu import Agent
# Create an agent
agent = Agent("assistant").with_tools([search_products, check_inventory])
# Direct execution - call a specific tool
result = await agent.call("search_products", {"query": "laptop"})
# Natural language - let the LLM figure it out
result = await agent.infer("Find me laptops under $1500 and check stock")
That’s it. No orchestrators. No complex setup. Just agents and tools.
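The tools themselves are nothing special - just plain Python functions with type hints and docstrings, like the ones later in this post. A minimal sketch of the two tools used above (the return values are made up for illustration):

def search_products(query: str) -> list[dict]:
    """Search the product catalog."""
    # Stub data for illustration only
    return [{"name": "UltraBook 14", "price": 1299}]

def check_inventory(product_name: str) -> int:
    """Return units in stock for a product."""
    return 12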
Workflows in 3 Lines
Here’s where AgentU gets powerful:
# Sequential: researcher → analyst → writer
workflow = researcher("Find AI trends") >> analyst("Analyze") >> writer("Summarize")
# Parallel: run 3 searches concurrently
workflow = search("AI") & search("ML") & search("Crypto")
# Combined: parallel then merge
workflow = (search("AI") & search("ML") & search("Crypto")) >> analyst("Compare")
result = await workflow.run()
>> chains steps sequentially. & runs them in parallel. That’s the entire workflow API.
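For context, researcher, analyst, writer, and search above are ordinary agents - calling one with a prompt creates a workflow step (that's Agent.__call__). A hedged sketch of the setup, with a hypothetical search_web tool:

from agentu import Agent

def search_web(query: str) -> str:
    """Hypothetical search tool (stub for illustration)."""
    return f"results for {query}"

researcher = Agent("researcher").with_tools([search_web])
analyst = Agent("analyst")
writer = Agent("writer")
search = Agent("search").with_tools([search_web])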
Real-World Example: Automated Code Review
Let me show you a real use case - automated PR reviews that run code analysis and tests in parallel:
import asyncio
from agentu import Agent

# github_api stands in for your GitHub client (e.g. a PyGithub wrapper);
# it's a placeholder, not part of AgentU.
def get_pr_diff(pr_number: int) -> str:
    """Fetch PR changes from GitHub."""
    return github_api.get_diff(pr_number)

def run_tests(branch: str) -> dict:
    """Run test suite."""
    return {"passed": 47, "failed": 2, "coverage": 94.2}

def post_comment(pr_number: int, comment: str) -> bool:
    """Post review comment to GitHub."""
    return github_api.create_comment(pr_number, comment)

async def main():
    # Set up agents with specific tools
    reviewer = Agent("reviewer", model="gpt-4").with_tools([get_pr_diff])
    tester = Agent("tester").with_tools([run_tests])
    commenter = Agent("commenter").with_tools([post_comment])

    # Parallel: review code + run tests (saves time!)
    workflow = reviewer("Review PR #247") & tester("Run tests on PR #247")
    code_review, test_results = await workflow.run()

    # Natural language: synthesize findings
    summary = await commenter.infer(
        f"Create a review comment for PR #247. "
        f"Code review: {code_review}. Tests: {test_results}. "
        f"Be constructive and specific."
    )

    # Post to GitHub
    await commenter.call("post_comment", {"pr_number": 247, "comment": summary})
    print("✓ Review posted to PR #247")

asyncio.run(main())
What this does:
- Reviews code and runs tests in parallel (cuts time in half)
- Uses infer() to write human-quality review comments via the LLM
- Posts directly to GitHub
- Zero manual work - runs on every PR
Memory: Simple and Persistent
# Store memories with importance weighting
agent.remember("Customer prefers email communication", importance=0.9)
agent.remember("Uses dark mode", importance=0.5)
# Search memories semantically
memories = agent.recall(query="communication preferences")
# Returns: ["Customer prefers email communication"]
Stored in SQLite. Searchable. Persistent across sessions. No vector database complexity unless you need it.
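One natural pattern is to recall memories and fold them into a prompt yourself. A sketch - passing recalled memories into infer() manually is an assumption here, not a documented built-in mechanism:

# Sketch: context-aware reply built from recalled memories
memories = agent.recall(query="communication preferences")
reply = await agent.infer(
    f"Known customer context: {memories}. "
    f"Draft a follow-up message for this customer."
)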
Lambda Control for Precise Data Flow
Need fine-grained control over how data flows between steps?
workflow = (
researcher("Find top AI companies")
>> analyst(lambda prev: f"Extract names from: {prev['result']}")
>> writer(lambda prev: f"Write report about: {prev['companies']}")
)
Lambdas let you transform output from one step before passing to the next.
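The same trick composes with parallel branches. A sketch, assuming the merged branch outputs arrive as prev:

# Hedged sketch: reshape parallel results before the merge step
workflow = (
    (search("AI") & search("ML"))
    >> analyst(lambda prev: f"Compare these findings: {prev}")
)
result = await workflow.run()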
LLM Support: Auto-Detection + Flexibility
AgentU auto-detects available models from Ollama:
# Auto-detect (uses first available Ollama model)
Agent("assistant")
# Or specify explicitly
Agent("assistant", model="qwen3")
# Works with OpenAI
Agent("assistant", model="gpt-4", api_key="sk-...")
# Works with vLLM, LM Studio, anything OpenAI-compatible
Agent("assistant", model="mistral", api_base="http://localhost:8000/v1")
Version 1.0.1 added automatic Ollama model detection via the /api/tags endpoint, so you don’t have to hardcode model names.
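If you're curious what that detection amounts to, Ollama's /api/tags simply lists installed models - roughly equivalent to:

import requests

# Ollama's /api/tags returns {"models": [{"name": ...}, ...]}
resp = requests.get("http://localhost:11434/api/tags")
models = [m["name"] for m in resp.json()["models"]]
print(models)  # e.g. ["qwen3:latest", "llama3:latest"]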
MCP Integration
Connect to Model Context Protocol servers for extended capabilities:
agent.with_mcp(["http://localhost:3000"])
agent.with_mcp([{"url": "https://api.com/mcp", "headers": {"Auth": "token"}}])
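Once connected, the expectation is that MCP-provided tools behave like local ones - a sketch under that assumption:

# Assumption: tools exposed by the MCP server become available to infer()
agent = Agent("assistant").with_tools([search_products])
agent.with_mcp(["http://localhost:3000"])
result = await agent.infer("Find laptops in stock using any available tools")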
REST API in One Line
Want to expose your agent as an API?
from agentu import serve
serve(agent, port=8000)
Auto-generates Swagger docs at /docs. Includes endpoints for:
- /execute - direct tool execution
- /process - natural language inference
- /tools - list available tools
- /memory/remember - store memories
- /memory/recall - search memories
curl -X POST localhost:8000/execute \
  -H 'Content-Type: application/json' \
  -d '{"tool_name": "search_products", "params": {"query": "laptop"}}'
Design Philosophy
AgentU was built on three principles:
- Minimalism: If you can do it in one line, you should only need one line
- Composability: Agents are functions. Functions compose. So should agents.
- No Magic: Every operator (>>, &) has clear, predictable behavior
The 1.0.0 release removed the entire Orchestrator system and replaced it with simple operators. 400+ lines of code deleted. The API got cleaner and more powerful.
What Changed in 1.0
Version 1.0 was a major simplification:
Removed:
- Orchestrator, ExecutionMode, Task, Message classes
- add_tool(), add_tools(), add_agent(), add_agents() methods
- execute_tool() and process_input() methods
Added:
- Workflow operators (>> sequential, & parallel)
- Agent.__call__() to create steps
- Lambda support for data flow control
- Auto-detection of Ollama models (v1.0.1)
The result? More power, less code.
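To make that concrete, here's a hedged before/after sketch - the pre-1.0 call shapes are reconstructed from the removed class names above, so treat them as approximate:

# Before 1.0 (approximate, reconstructed from the removed classes):
# orchestrator = Orchestrator()
# orchestrator.add_agents([researcher, analyst])
# result = orchestrator.process_input("Find and analyze AI trends")

# After 1.0 - the operators replace the orchestrator entirely:
workflow = researcher("Find AI trends") >> analyst("Analyze")
result = await workflow.run()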
Getting Started
pip install agentu
Check out the examples:
git clone https://github.com/hemanth/agentu
cd agentu
python examples/basic.py # Simple agent
python examples/workflow.py # Workflows
python examples/memory.py # Memory system
python examples/api.py # REST API
When to Use AgentU
AgentU is perfect for:
- Automation workflows (CI/CD, code review, data pipelines)
- Research agents (gather → analyze → summarize)
- Customer support (routing, memory, context)
- Prototyping (get from idea to working agent in minutes)
It’s not a full-fledged production platform like LangChain. If you need a sprawling ecosystem of integrations, look elsewhere. But if you want to build agents fast with clean code, AgentU is for you.
Conclusion
Building AI agents shouldn’t require learning a new framework every month. It should feel like writing Python.
AgentU gives you:
- 3-line workflows with >> and &
- Direct execution with call()
- Natural language routing with infer()
- Persistent memory with remember()/recall()
- REST API with serve()
That’s the entire API. No orchestrators. No message buses. Just agents doing what agents do best.
Try it: github.com/hemanth/agentu
The sleekest way to build AI agents is here.
About Hemanth HM
Hemanth HM is a Sr. Staff Engineer at PayPal, Google Developer Expert, TC39 delegate, FOSS advocate, and community leader with a passion for programming, AI, and open-source contributions.