# Pocket Flow - LLM Framework in 100 Lines

A 100-line minimalist LLM framework for (Multi-)Agents, Prompt Chaining, RAG, etc.
- Install via `pip install pocketflow`, or just copy the source code (it's only 100 lines).
- If the 100 lines feel terse and you'd prefer a friendlier intro, check this out.
- Documentation: https://the-pocket.github.io/PocketFlow/
## Why Pocket Flow?
Pocket Flow is designed to be the framework used by LLMs. In the future, LLM projects will be self-programmed by LLMs themselves: users specify requirements, and LLMs design, build, and maintain them. To build LLM projects with LLM assistants (ChatGPT, Claude, Cursor.ai, etc.):
### Use Claude to build LLM apps
- Set project custom instructions. For example (a sketch of the shared-store design that step 3 asks for appears after this list):

  > 1. Check "tool.md" and "llm.md" for the required functions.
  > 2. Design the high-level (batch) flow and nodes in artifact using mermaid.
  > 3. Design the shared memory structure: define its fields, data structures, and how they will be updated. Think out loud about the above first and ask users if your design makes sense.
  > 4. Finally, implement. Start with simple, minimalistic code without, for example, typing. Write the code in artifact.

- Ask it to build LLM apps (Sonnet 3.5 strongly recommended)! For example:

  > Help me build a chatbot based on a directory of PDFs.
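
For the "chatbot based on a directory of PDFs" request above, the shared memory design that step 3 asks for could look like the sketch below. The field names are purely illustrative assumptions, not part of PocketFlow itself:

```python
# Hypothetical shared-store layout for a PDF chatbot (illustrative field names only).
shared = {
    "pdf_dir": "docs/pdfs",   # input: directory of PDFs to index
    "chunks": [],             # filled by an indexing node: extracted text chunks
    "index": None,            # filled by an embedding node: vector index over chunks
    "question": "",           # set each user turn
    "retrieved": [],          # filled by a retrieval node: top-k relevant chunks
    "answer": "",             # filled by the answer node
    "history": [],            # appended each turn: (question, answer) pairs
}
```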
### Use ChatGPT to build LLM apps
- Try the GPT assistant. However, it uses older models, which are good for explaining but not as good at coding.
- For stronger coding capabilities, consider sending the docs to more advanced models like O1:
  - Paste the docs link (https://github.com/the-pocket/PocketFlow/tree/main/docs) into Gitingest.
  - Then paste the generated contents into your O1 prompt and ask it to build LLM apps.
## How does it work?
The 100 lines capture what we see as the core abstraction of LLM frameworks: a Graph that breaks a task down into multiple (LLM) steps, with branching and recursion for agent-like decision-making, and a Shared Store that lets graph nodes communicate.
From there, it's easy to implement popular design patterns like (Multi-)Agents, Prompt Chaining, RAG, etc.
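
As a rough illustration, here is a minimal two-node flow. This is a sketch that assumes PocketFlow's documented Node/Flow API (prep/exec/post hooks, the `>>` chaining operator, and `Flow.run` over a shared dict); `call_llm` is a hypothetical helper you would supply yourself:

```python
from pocketflow import Node, Flow

def call_llm(prompt):
    """Hypothetical helper -- wire this to your LLM provider of choice."""
    raise NotImplementedError

class Summarize(Node):
    def prep(self, shared):
        # Read the input text from the Shared Store.
        return shared["text"]

    def exec(self, text):
        # One LLM step.
        return call_llm(f"Summarize in one sentence: {text}")

    def post(self, shared, prep_res, exec_res):
        # Write the result back to the Shared Store; the returned string is the
        # "action" used to pick the next edge in the Graph.
        shared["summary"] = exec_res
        return "default"

class Critique(Node):
    def prep(self, shared):
        return shared["summary"]

    def exec(self, summary):
        return call_llm(f"Point out weaknesses in this summary: {summary}")

    def post(self, shared, prep_res, exec_res):
        shared["critique"] = exec_res

summarize, critique = Summarize(), Critique()
summarize >> critique            # Graph edge: follow the "default" action.
flow = Flow(start=summarize)

shared = {"text": "Pocket Flow is a 100-line minimalist LLM framework."}
flow.run(shared)                 # Afterwards, shared holds "summary" and "critique".
```

Branching follows the same idea: `post` returns a different action string, and the graph routes to whichever node is attached to that action.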
- To learn more, check out the documentation.
- For a deeper dive into the design choices, check out the essay.