<h1 align="center">Mini LLM Flow</h1>
![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)
[![Docs](https://img.shields.io/badge/docs-latest-blue)](https://zachary62.github.io/miniLLMFlow/)

A [100-line](minillmflow/__init__.py) minimalist LLM framework for agents, task decomposition, RAG, etc.

- Install via `pip install minillmflow`, or just copy the [source](minillmflow/__init__.py) (only 100 lines)
- **Pro tip:** Build LLM apps with LLM assistants (ChatGPT, Claude, etc.) via [this prompt](assets/prompt)

Documentation: https://zachary62.github.io/miniLLMFlow/
## Why Mini LLM Flow?
Mini LLM Flow is designed to **be the framework used by LLM assistants**. In the future, LLM app development will be heavily **LLM-assisted**: users specify requirements, and LLM assistants design, build, and maintain the apps themselves. Current LLM assistants:

1. **👍 Shine at Low-level Implementation**

   LLMs excel at APIs, tools, chunking, prompting, etc. These don't belong in a general-purpose framework; they're too specialized to maintain and optimize.

2. **👎 Struggle with High-level Paradigms**

   Paradigms like MapReduce, task decomposition, and agents are powerful. However, designing these elegantly remains challenging for LLMs.

The ideal framework for LLM assistants should:

1. Remove specialized low-level implementations.
2. Keep high-level paradigms to program against.

Hence, I built this minimal (100-line) framework so LLMs can focus on what matters.

Mini LLM Flow is also a great learning resource, as many frameworks abstract too much away.
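To make the idea concrete, here is a rough sketch of what such a "paradigms only" framework can look like in plain Python: nodes hold the work, a flow chains them by the action each node returns. The names below (`Node`, `Flow`, `then`) are illustrative assumptions for this sketch, not minillmflow's actual API; see the [source](minillmflow/__init__.py) for the real thing.

```python
class Node:
    """One step in a flow: do some work, then name the next step."""
    def __init__(self, fn):
        self.fn = fn          # the work for this step (mutates shared state)
        self.successors = {}  # action name -> next Node

    def then(self, node, action="default"):
        self.successors[action] = node
        return node

    def run(self, shared):
        action = self.fn(shared)  # may return an action name, or None
        return self.successors.get(action or "default")


class Flow:
    """Runs nodes in sequence, following each node's returned action."""
    def __init__(self, start):
        self.start = start

    def run(self, shared):
        node = self.start
        while node:
            node = node.run(shared)
        return shared


# Usage: a two-step pipeline (summarize, then QA) with stubbed LLM calls.
summarize = Node(lambda s: s.update(summary=f"summary of {s['doc']}"))
qa = Node(lambda s: s.update(answer=f"answer using {s['summary']}"))
summarize.then(qa)

result = Flow(summarize).run({"doc": "essay.txt"})
```

Low-level concerns (which LLM API, how to chunk, what prompt) live inside each node's function, written per-app; the framework only supplies the high-level wiring.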
<div align="center">
<img src="/assets/minillmflow.jpg" width="400"/>
</div>
## Example LLM apps
- Beginner Tutorial: [Text summarization for Paul Graham Essay + QA agent](https://colab.research.google.com/github/zachary62/miniLLMFlow/blob/main/cookbook/demo.ipynb)
- Have questions about this tutorial? Ask LLM assistants through [this prompt](https://chatgpt.com/share/676f16d2-7064-8000-b9d7-f6874346a6b5)