update docs
parent 229c644b46
commit 060c22daf1

@ -0,0 +1,40 @@

---
layout: default
title: "Task Decomposition"
parent: "Paradigm"
nav_order: 2
---

# Task Decomposition

Many real-world tasks are too complex for one LLM call. The solution is to decompose them into multiple calls as a [Flow](./flow.md) of Nodes.

### Example: Article Writing

```python
class GenerateOutline(Node):
    def prep(self, shared):
        return shared["topic"]

    def exec(self, topic):
        prompt = f"Create a detailed outline for an article about {topic}"
        return call_llm(prompt)

    def post(self, shared, prep_res, exec_res):
        shared["outline"] = exec_res

class WriteSection(Node):
    def prep(self, shared):
        return shared["outline"]

    def exec(self, section):
        prompt = f"Write content for this section: {section}"
        return call_llm(prompt)

    def post(self, shared, prep_res, exec_res):
        shared["draft"] = exec_res

class ReviewAndRefine(Node):
    def prep(self, shared):
        return shared["draft"]

    def exec(self, draft):
        prompt = f"Review and improve this draft: {draft}"
        return call_llm(prompt)

    def post(self, shared, prep_res, exec_res):
        shared["final_article"] = exec_res

# Connect nodes
outline = GenerateOutline()
write = WriteSection()
review = ReviewAndRefine()

outline >> write >> review

# Create flow
writing_flow = Flow(start=outline)
writing_flow.run({"topic": "AI Safety"})
```
@ -41,19 +41,19 @@ We model the LLM workflow as a **Nested Directed Graph**:

- [LLM Wrapper](./llm.md)
- [Tool](./tool.md)
- Chunking

> We do not provide built-in implementations.
> Example implementations are provided as reference.
{: .warning }

## High-Level Paradigm

- [Structured Output](./structure.md)
- [Task Decomposition](./decomp.md)
- [Map Reduce](./mapreduce.md)
- [RAG](./rag.md)
- Chat Memory
- Agent
- Multi-Agent

@ -62,3 +62,4 @@ We model the LLM workflow as a **Nested Directed Graph**:

## Example Projects

- [Summarization + QA agent for Paul Graham Essay](./essay.md)
- More coming soon...

@ -53,7 +53,7 @@ def call_llm(prompt):
    pass
```

> ⚠️ Caching conflicts with Node retries, as retries yield the same result.
{: .warning }
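
A minimal sketch of the conflict, assuming a simple in-memory cache is layered over the wrapper (the `cached_call_llm` name and the `lru_cache` policy are illustrative, not part of the framework):

```python
from functools import lru_cache

@lru_cache(maxsize=1000)
def cached_call_llm(prompt):
    # Identical prompts return the cached response instead of a fresh call.
    return call_llm(prompt)

# A Node retry re-sends the same prompt, so it hits the cache and gets
# back the same (possibly bad) response the retry was meant to replace.
```
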
@ -0,0 +1,45 @@

---
layout: default
title: "Map Reduce"
parent: "Paradigm"
nav_order: 3
---

# Map Reduce

Process large inputs by splitting them into chunks using [BatchNode](./batch.md), then combining results.

### Example: Document Summarization

```python
class MapSummaries(BatchNode):
    def prep(self, shared):
        # Split the input text into 10,000-character chunks.
        text = shared["text"]
        return [text[i:i+10000] for i in range(0, len(text), 10000)]

    def exec(self, chunk):
        return call_llm(f"Summarize this chunk: {chunk}")

    def post(self, shared, prep_res, exec_res_list):
        shared["summaries"] = exec_res_list

class ReduceSummaries(Node):
    def prep(self, shared):
        return shared["summaries"]

    def exec(self, summaries):
        return call_llm(f"Combine these summaries: {summaries}")

    def post(self, shared, prep_res, exec_res):
        shared["final_summary"] = exec_res

# Connect nodes
map_node = MapSummaries()
reduce_node = ReduceSummaries()

map_node >> reduce_node

# Create flow
summarize_flow = Flow(start=map_node)
summarize_flow.run(shared)  # shared must already hold the input, e.g. {"text": long_document}
```
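
Here `prep` returns the list of chunks, `BatchNode` runs `exec` once per chunk, and `post` collects the results as `exec_res_list` for the reduce node to combine in one call. The trailing comment on `run` marks the assumed shape of `shared`.
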
@ -0,0 +1,47 @@

---
layout: default
title: "RAG"
parent: "Paradigm"
nav_order: 4
---

# RAG (Retrieval Augmented Generation)

For certain LLM tasks like answering questions, providing context is essential.
Use [vector search](./tool.md) to find relevant context for LLM responses.

## Example: Question Answering

```python
class PrepareEmbeddings(Node):
    def prep(self, shared):
        # Embed each text and build the search index once, up front.
        texts = shared["texts"]
        embeddings = [get_embedding(text) for text in texts]
        shared["search_index"] = create_index(embeddings)

class AnswerQuestion(Node):
    def prep(self, shared):
        question = input("Enter question: ")
        query_embedding = get_embedding(question)
        # Retrieve the most relevant text for the question.
        indices, _ = search_index(shared["search_index"], query_embedding, top_k=1)
        relevant_text = shared["texts"][indices[0][0]]
        return question, relevant_text

    def exec(self, inputs):
        question, context = inputs
        prompt = f"Question: {question}\nContext: {context}\nAnswer: "
        return call_llm(prompt)

    def post(self, shared, prep_res, exec_res):
        print(f"Answer: {exec_res}")

# Connect nodes
prep = PrepareEmbeddings()
qa = AnswerQuestion()

prep >> qa

# Create flow
qa_flow = Flow(start=prep)
qa_flow.run(shared)  # shared must already hold the corpus, e.g. {"texts": [...]}
```
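
The `get_embedding`, `create_index`, and `search_index` helpers are assumed to come from the [vector search tools](./tool.md) referenced above.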