# Agent Framework Thoughts
- According to the Anthropic blog post [here](https://www.anthropic.com/research/building-effective-agents), it's better to:
- Start simple, take time to analyse if agents are really needed.
- Understand the difference between
	- (i) **Workflow**: augmented LLMs (with retrieval, tools and memory) organized in a pipeline with a semi-rigid structure and loops to achieve a set of tasks. The execution can be planned and the behaviour is predictable.
	- (ii) **Agent**: an open-ended augmented LLM that decides for itself the structure of execution it needs to achieve its task (a minimal sketch of both follows the Hugging Face table below).
- Same mindset in the Hugging Face [blog post](https://huggingface.co/blog/smolagents) about SmolAgents
![[publish/assets/Pasted image 20250109103415.png]]
They produced a great table to understand the "agent spectrum": how much of an agent your current LLM orchestration is.
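To make the workflow/agent distinction concrete, here is a minimal sketch in plain Python, with no framework. `call_llm` and the two tools are hypothetical placeholders: the point is only that the workflow fixes the execution structure in code, while the agent loops and lets the model choose the next step.

```python
import json

def call_llm(prompt: str) -> str:
    """Hypothetical helper wrapping your LLM provider's API (see Bedrock sketch below)."""
    raise NotImplementedError

# --- Workflow: the structure of execution is fixed in code ---
def ticket_reply_workflow(ticket: str) -> str:
    category = call_llm(f"Classify this support ticket:\n{ticket}")
    draft = call_llm(f"Draft a reply for a '{category}' ticket:\n{ticket}")
    return call_llm(f"Polish the tone of this reply:\n{draft}")

# --- Agent: the LLM decides which tool to call next, in a loop ---
TOOLS = {
    "search_docs": lambda query: "...retrieved passages...",     # placeholder
    "create_ticket": lambda summary: "ticket #1234 created",     # placeholder
}

def run_agent(task: str, max_steps: int = 5) -> str:
    history = f"Task: {task}"
    for _ in range(max_steps):
        decision = call_llm(
            f"{history}\n"
            f"Available tools: {list(TOOLS)}.\n"
            'Reply with JSON: {"tool": "<name>", "input": "<text>"} or {"final": "<answer>"}.'
        )
        parsed = json.loads(decision)
        if "final" in parsed:          # the model, not the code, decides when to stop
            return parsed["final"]
        result = TOOLS[parsed["tool"]](parsed["input"])
        history += f"\n{parsed['tool']} -> {result}"
    return "Stopped: step budget exhausted."
```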
There are a lot of agent frameworks out there. Anthropic suggests not using any of them and instead going low level and implementing your LLM calls directly (a minimal Bedrock sketch of that follows the list). Here is a list of some frameworks I've seen and what I think about them:
- **OpenAI Swarm**: too research-oriented and tied to OpenAI
- **PydanticAI**: a lot of good feedback, but not yet compatible with AWS Bedrock
- **Smolagents**: great, but dependent on E2B for CodeAgent, which is an absolute no-go (a paid sandboxed code-execution environment for agents). Can't use it until they allow local solutions
- **LangChain/LangGraph**: known to be a production nightmare, stay very far away.
- **Haystack**: what I use so far for RAG pipelines, but it might be too over-engineered for easily creating and modifying agents... They do provide a Cookbook, however, to replicate the OpenAI Swarm ideas.
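Following that low-level advice, a single provider call doesn't need a framework at all. Below is a minimal sketch against the AWS Bedrock `converse` API via boto3; the region and model ID are example values to adjust, and this could serve as the `call_llm` placeholder from the earlier sketch.

```python
import boto3

# Bedrock Runtime client; region and model ID are example values, adjust to your account.
bedrock = boto3.client("bedrock-runtime", region_name="eu-west-1")
MODEL_ID = "anthropic.claude-3-5-sonnet-20240620-v1:0"

def call_llm(prompt: str, system: str | None = None) -> str:
    """Single LLM call via the Bedrock converse API; no framework involved."""
    kwargs = {
        "modelId": MODEL_ID,
        "messages": [{"role": "user", "content": [{"text": prompt}]}],
        "inferenceConfig": {"maxTokens": 1024, "temperature": 0.2},
    }
    if system:
        kwargs["system"] = [{"text": system}]
    response = bedrock.converse(**kwargs)
    return response["output"]["message"]["content"][0]["text"]

if __name__ == "__main__":
    print(call_llm("In one sentence, when is an agent overkill?"))
```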