Mini LLM Flow


A 100-line minimalist LLM framework for agents, task decomposition, RAG, etc.

  • Install via pip install minillmflow, or just copy the source code (only 100 lines); a minimal usage sketch is shown below

  • Pro tip: Build LLM apps with LLM assistants (ChatGPT, Claude, etc.) via this prompt

Documentation: https://zachary62.github.io/miniLLMFlow/
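To give a feel for the API, here is a minimal, hedged sketch of a one-node flow. It assumes minillmflow exports Node and Flow with the prep/exec/post hooks and the Flow(start=...) / flow.run(shared) calls described in the documentation; call_llm is a hypothetical helper you supply yourself, since the framework deliberately ships no LLM wrapper.

```python
# Minimal sketch (assumed API: Node/Flow with prep/exec/post hooks, per the docs).
from minillmflow import Node, Flow

def call_llm(prompt):
    # Hypothetical user-supplied utility; replace with a real LLM API call.
    return f"(LLM response to: {prompt[:40]}...)"

class Summarize(Node):
    def prep(self, shared):
        return shared["text"]                        # read input from the shared store

    def exec(self, text):
        return call_llm(f"Summarize in one sentence:\n{text}")

    def post(self, shared, prep_res, exec_res):
        shared["summary"] = exec_res                 # write the result back

flow = Flow(start=Summarize())                       # assumed constructor: Flow(start=...)
shared = {"text": "Some long document ..."}
flow.run(shared)
print(shared["summary"])
```

In this sketch, the shared dict carries all state between steps, so each node stays small and easy for an LLM assistant to write and review.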

Why Mini LLM Flow?

Mini LLM Flow is designed to be the framework used by LLM assistants. In the future, LLM app development will be heavily LLM-assisted: users specify requirements, and LLM assistants design, build, and maintain the apps themselves. Today's LLM assistants:

  1. 👍 Shine at Low-level Implementation
    LLMs excel at APIs, tools, chunking, prompting, etc. These don't belong in a general-purpose framework; they're too specialized to maintain and optimize.

  2. 👎 Struggle with High-level Paradigms
    Paradigms like MapReduce, task decomposition, and agents are powerful. However, designing these elegantly remains challenging for LLMs.

The ideal framework for LLM assistants should:
(1) Remove specialized low-level implementations.
(2) Keep high-level paradigms to program against.
Hence, I built this minimal (100-line) framework so LLMs can focus on what matters.
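As a concrete illustration of "high-level paradigms to program against", below is a hedged sketch of an agent-style loop expressed as a small node graph. The action-based transition syntax (node - "action" >> other) and the default run() behavior are assumed from the documentation; the node names and the decide/search/answer logic are purely illustrative, and call_llm is again a hypothetical helper you would provide.

```python
# Hedged sketch: an agent loop as a node graph. The routing syntax is assumed
# from the docs; call_llm is a hypothetical user-supplied LLM wrapper.
from minillmflow import Node, Flow

def call_llm(prompt):
    return f"(LLM response to: {prompt[:40]}...)"   # placeholder; swap in a real API call

class Decide(Node):
    def exec(self, prep_res):
        return call_llm("Should I search the web or answer now? Reply 'search' or 'answer'.")

    def post(self, shared, prep_res, exec_res):
        # The string returned from post selects the outgoing transition.
        if "search" in exec_res and not shared.get("notes"):
            return "search"
        return "answer"

class Search(Node):
    def exec(self, prep_res):
        return call_llm("(pretend a web search happens here)")

    def post(self, shared, prep_res, exec_res):
        shared.setdefault("notes", []).append(exec_res)
        return "decide"                              # loop back and decide again

class Answer(Node):
    def exec(self, prep_res):
        return call_llm("Answer the question using the gathered notes.")

decide, search, answer = Decide(), Search(), Answer()
decide - "search" >> search                          # conditional transitions by action name
decide - "answer" >> answer
search - "decide" >> decide                          # the cycle is what makes it an agent

agent = Flow(start=decide)
# agent.run(shared) would then drive the loop, assuming the same run() API as above.
```

The point is not this particular agent: it is that the branching and looping live in a tiny, explicit graph, which is exactly the high-level structure the framework keeps, while the low-level calls are left to you (or your LLM assistant).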

Mini LLM Flow is also a great learning resource, as many frameworks abstract too much away.

Example LLM apps