Mini LLM Flow

License: MIT

A 100-line minimalist LLM framework for agents, task decomposition, RAG, etc.

  • Install via `pip install minillmflow`, or just copy the source (only 100 lines)

  • Pro tip: Build LLM apps with LLM assistants (ChatGPT, Claude, etc.) via this prompt

Why Mini LLM Flow?

In the future, LLM apps will be developed by LLMs: users specify requirements, and LLMs design, build, and maintain the apps on their own. Current LLMs:

  1. 👍 Shine at Low-level Implementation: With proper docs, LLMs can handle APIs, tools, chunking, prompt wrapping, etc. These details are hard for a general-purpose framework to maintain and optimize.

  2. 👎 Struggle with High-level Paradigms: Paradigms like MapReduce, task decomposition, and agents are powerful for development. However, designing these elegantly remains challenging for LLMs.

To enable LLMs to develop LLM apps, a framework should (1) remove specialized low-level implementations, and (2) keep high-level paradigms to program against. Hence, I built this framework that lets LLMs focus on what matters. It turns out 100 lines is all you need.
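
To make that concrete, here is a minimal sketch of what such a high-level paradigm can look like: a tiny node/flow graph where each node does one step and hands off to a successor. The names here (`BaseNode`, `Flow`, `call_llm`, `Outline`, `Draft`) are illustrative assumptions for this sketch only, not the library's actual API; the real abstraction lives in the 100-line source.

```python
# Illustrative sketch only -- not the minillmflow API.

def call_llm(prompt):
    """Stub for the low-level LLM call that the app (or an LLM assistant) supplies."""
    return f"<LLM response to: {prompt!r}>"

class BaseNode:
    """One step in a flow: run() does the work and returns the name of the next action."""
    def __init__(self):
        self.successors = {}            # action name -> next node

    def then(self, node, action="default"):
        self.successors[action] = node  # wire this node to a successor
        return node

    def run(self, shared):
        raise NotImplementedError       # subclasses read/write `shared` and may return an action

class Flow:
    """Walks the node graph from a start node until no successor matches the action."""
    def __init__(self, start):
        self.start = start

    def run(self, shared):
        node = self.start
        while node is not None:
            action = node.run(shared) or "default"
            node = node.successors.get(action)

class Outline(BaseNode):
    def run(self, shared):
        shared["outline"] = call_llm(f"Outline an essay on {shared['topic']}")

class Draft(BaseNode):
    def run(self, shared):
        shared["draft"] = call_llm(f"Write the essay: {shared['outline']}")

outline, draft = Outline(), Draft()
outline.then(draft)                     # outline -> draft
shared = {"topic": "why 100 lines is enough"}
Flow(outline).run(shared)
print(shared["draft"])
```

With a graph like this, the low-level pieces (API calls, chunking, prompt wrapping) stay in the nodes an LLM can write, while the framework only supplies the high-level wiring.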

Example LLM apps