add workflow tutorial

zachary62 2025-03-22 11:46:25 -04:00
parent 14b1cf0a69
commit 666207eff1
8 changed files with 327 additions and 0 deletions

View File: README.md

@@ -69,6 +69,7 @@ From there, it's easy to implement popular design patterns like ([Multi-](https:
| [Agent](https://github.com/The-Pocket/PocketFlow/tree/main/cookbook/pocketflow-agent) | ☆☆☆ <br> *Dummy* | A research agent that can search the web and answer questions |
| [Streaming](https://github.com/The-Pocket/PocketFlow/tree/main/cookbook/pocketflow-llm-streaming) | ☆☆☆ <br> *Dummy* | A real-time LLM streaming demo with user interrupt capability |
| [Parallel](https://github.com/The-Pocket/PocketFlow/tree/main/cookbook/pocketflow-parallel-batch) | ★☆☆ <br> *Beginner* | A parallel execution demo that shows 3x speedup |
| [Workflow](https://github.com/The-Pocket/PocketFlow/tree/main/cookbook/pocketflow-workflow) | ★☆☆ <br> *Beginner* | A writing workflow that outlines, writes content, and applies styling |
| [Supervisor](https://github.com/The-Pocket/PocketFlow/tree/main/cookbook/pocketflow-supervisor) | ★☆☆ <br> *Beginner* | Research agent is getting unreliable... Let's build a supervision process |
| [Thinking](https://github.com/The-Pocket/PocketFlow/tree/main/cookbook/pocketflow-thinking) | ★☆☆ <br> *Beginner* | Solve complex reasoning problems through Chain-of-Thought |
| [Memory](https://github.com/The-Pocket/PocketFlow/tree/main/cookbook/pocketflow-chat-memory) | ★☆☆ <br> *Beginner* | A chat bot with short-term and long-term memory |

Binary file not shown (image; 72 KiB before, 85 KiB after).

View File: cookbook/pocketflow-workflow/README.md

@@ -0,0 +1,129 @@
# Article Writing Workflow
A PocketFlow example that demonstrates an article writing workflow using a sequence of LLM calls.
## Features
- Generate a simple outline with up to 3 main sections using YAML structured output
- Process each section independently using batch processing
- Write concise (100 words max) content for each section in simple terms
- Apply a conversational, engaging style to the final article
## Getting Started
1. Install the required dependencies:
```bash
pip install -r requirements.txt
```
2. Set your OpenAI API key as an environment variable:
```bash
export OPENAI_API_KEY=your_api_key_here
```
3. Run the application with a default topic ("AI Safety"):
```bash
python main.py
```
4. Or specify your own topic:
```bash
python main.py Climate Change
```
## How It Works
The workflow consists of three sequential nodes:
```mermaid
graph LR
Outline[Generate Outline] --> Write[Batch Write Content]
Write --> Style[Apply Style]
```
Here's what each node does:
1. **Generate Outline**: Creates a simple outline with up to 3 main sections using YAML structured output
2. **Write Simple Content**: Processes each section independently (as a BatchNode), writing a concise 100-word explanation for each
3. **Apply Style**: Rewrites the combined content in a conversational, engaging style
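Putting it together, here is a minimal sketch of how these three nodes are chained into a flow; it mirrors [`flow.py`](./flow.py) and the call in [`main.py`](./main.py):

```python
from pocketflow import Flow
from nodes import GenerateOutline, WriteSimpleContent, ApplyStyle

# Each node's post() returns "default", so the flow simply
# advances through the chain in order.
outline_node = GenerateOutline()
write_node = WriteSimpleContent()
style_node = ApplyStyle()
outline_node >> write_node >> style_node

# The shared dict carries the topic in and collects the outline,
# draft, and final article as the nodes run.
flow = Flow(start=outline_node)
flow.run({"topic": "AI Safety"})
```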
## Files
- [`main.py`](./main.py): Main entry point for running the article workflow
- [`flow.py`](./flow.py): Defines the flow that connects the nodes
- [`nodes.py`](./nodes.py): Contains the node classes for each step in the workflow
- [`utils.py`](./utils.py): Utility functions including the LLM wrapper
- [`requirements.txt`](./requirements.txt): Lists the required dependencies
## Example Output
```
=== Starting Article Workflow on Topic: AI Safety ===
===== OUTLINE (YAML) =====
sections:
- Understanding AI Safety
- Challenges in Ensuring AI Safety
- Strategies for Mitigating AI Risks
===== PARSED OUTLINE =====
1. Understanding AI Safety
2. Challenges in Ensuring AI Safety
3. Strategies for Mitigating AI Risks
=========================
Parsed 3 sections: ['Understanding AI Safety', 'Challenges in Ensuring AI Safety', 'Strategies for Mitigating AI Risks']
===== SECTION CONTENTS =====
--- Understanding AI Safety ---
Understanding AI safety is about ensuring that artificial intelligence systems work safely and as intended. Just like you wouldn't want a car to suddenly speed up on its own, we want AI to be predictable and reliable. For example, if an AI were to help cook, we need to make sure it identifies ingredients correctly and doesn't start a fire. By focusing on AI safety, we aim to prevent accidents and ensure these systems help rather than harm us.
--- Challenges in Ensuring AI Safety ---
Making sure AI is safe involves several challenges. Imagine teaching a robot to understand commands correctly; if it misinterprets instructions, things could go wrong. It's like teaching a toddler to cross the street safely—they need to understand when and where it's safe to walk. Similarly, AI must be programmed to make safe decisions. Ensuring AI doesn't act unpredictably and behaves as intended, even in new situations, is crucial. Balancing innovation and safety is key, just like making sure a car is fast but also has reliable brakes to prevent accidents.
--- Strategies for Mitigating AI Risks ---
Mitigating AI risks is about making sure AI technologies help us without causing harm. It's like having seat belts in cars: they allow us to drive safely by minimizing dangers. To manage AI risks, we can use guidelines and rules to ensure AI behaves as expected. Training AI with diverse data is crucial so it doesn't develop biases, much like teaching children to respect different cultures. Additionally, we can create "off switches" for AI systems, similar to remote controls, to turn them off if they start acting unexpectedly. These steps help us safely enjoy the benefits AI offers.
===========================
===== FINAL ARTICLE =====
Hey there! Have you ever wondered about the safety of artificial intelligence and how it fits into our world? It's a bit like making sure a pet behaves itself—you want your dog to fetch the ball, not run off with your slippers! At its heart, understanding AI safety means ensuring these high-tech systems do what they're supposed to without causing a ruckus. Just as you wouldn't want your car to suddenly speed up without warning, we hope for AI to be as reliable as your morning coffee brewing on schedule. Imagine an AI assistant in your kitchen—it should know the difference between sugar and salt, and definitely not turn your peaceful cooking session into a fire drill. So, by focusing on AI safety, we're aiming for a world where these systems help us, without creating chaos.
Now, navigating the challenges of AI safety? That's quite the adventure! Picture this: you're trying to teach a robot your way of doing things. It's like teaching a toddler to cross a busy street. The little one needs to know when to stop, when to go, and how to manage all the things happening around them. Similarly, our AI pals need to be programmed to make safe decisions, even if they're seeing the world for the first time through their digital eyes. It's this delicate dance between innovation and safety—like crafting a sports car that's both exhilaratingly fast and equipped with top-notch brakes. We don't want surprises when it comes to AI behavior, right?
So, how do we juggle these AI risks and keep things safe? Imagine AI guidelines and protocols like the seat belts in your car—designed to keep you secure while letting you enjoy the ride. By setting rules, we ensure AI behaves as expected, kind of like a teacher maintaining order in a classroom. And just like we educate kids to appreciate the diverse world around them, we train AI with a wide array of data to avoid any unfair biases. Plus, isn't it reassuring to know we can install an "off switch" on these systems? Think of it like having a remote control to power down the device if it starts acting up. These strategies are our way of making sure we can relish the wonders of AI, all while knowing we've got everything under control.
In a nutshell, AI safety is about bridging the gap between groundbreaking technology and everyday peace of mind. It's this journey of making technology a trustworthy companion rather than a wild card. After all, it's all about enjoying the benefits without the hiccups—who wouldn't want that kind of harmony in their tech-driven life?
========================
=== Workflow Completed ===
Topic: AI Safety
Outline Length: 100 characters
Draft Length: 1707 characters
Final Article Length: 2531 characters
```
## Extending the Example
You can easily extend this example by:
1. Adding more processing nodes to the workflow
2. Modifying the prompts in the node classes
3. Implementing branching logic based on the content generated (see the sketch after this list)
4. Adding user interaction between workflow steps
5. Using different structured output formats (JSON, XML, etc.)
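For item 3, a possible sketch of branching, assuming PocketFlow's action-based transition syntax and reusing the node instances from `flow.py`; `ReviewContent` and the `"approve"`/`"revise"` actions are hypothetical names, not part of this example:

```python
# Hypothetical extension: insert a review step between writing and styling.
# ReviewContent is an illustrative Node whose post() would return
# "approve" or "revise" based on the generated draft.
review_node = ReviewContent()

outline_node >> write_node >> review_node
review_node - "approve" >> style_node   # draft looks good: apply styling
review_node - "revise" >> write_node    # draft is weak: rewrite the sections
```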

View File: cookbook/pocketflow-workflow/flow.py

@@ -0,0 +1,19 @@
from pocketflow import Flow
from nodes import GenerateOutline, WriteSimpleContent, ApplyStyle

def create_article_flow():
    """
    Create and configure the article writing workflow
    """
    # Create node instances
    outline_node = GenerateOutline()
    write_node = WriteSimpleContent()
    style_node = ApplyStyle()

    # Connect nodes in sequence
    outline_node >> write_node >> style_node

    # Create flow starting with outline node
    article_flow = Flow(start=outline_node)

    return article_flow

View File: cookbook/pocketflow-workflow/main.py

@@ -0,0 +1,37 @@
from flow import create_article_flow

def run_flow(topic="AI Safety"):
    """
    Run the article writing workflow with a specific topic

    Args:
        topic (str): The topic for the article
    """
    # Initialize shared data with the topic
    shared = {"topic": topic}

    # Print starting message
    print(f"\n=== Starting Article Workflow on Topic: {topic} ===\n")

    # Run the flow
    flow = create_article_flow()
    flow.run(shared)

    # Output summary
    print("\n=== Workflow Completed ===\n")
    print(f"Topic: {shared['topic']}")
    print(f"Outline Length: {len(shared['outline'])} characters")
    print(f"Draft Length: {len(shared['draft'])} characters")
    print(f"Final Article Length: {len(shared['final_article'])} characters")

    return shared

if __name__ == "__main__":
    import sys

    # Get topic from command line if provided
    topic = "AI Safety"  # Default topic
    if len(sys.argv) > 1:
        topic = " ".join(sys.argv[1:])

    run_flow(topic)

View File: cookbook/pocketflow-workflow/nodes.py

@@ -0,0 +1,124 @@
from pocketflow import Node, BatchNode
from utils import call_llm
import yaml

class GenerateOutline(Node):
    def prep(self, shared):
        return shared["topic"]

    def exec(self, topic):
        prompt = f"""
Create a simple outline for an article about {topic}.
Include at most 3 main sections (no subsections).

Output the sections in YAML format as shown below:

```yaml
sections:
- First section
- Second section
- Third section
```"""
        response = call_llm(prompt)
        yaml_str = response.split("```yaml")[1].split("```")[0].strip()
        structured_result = yaml.safe_load(yaml_str)
        return structured_result

    def post(self, shared, prep_res, exec_res):
        # Store the structured data
        shared["outline_yaml"] = exec_res

        # Extract sections
        sections = exec_res["sections"]
        shared["sections"] = sections

        # Format for display
        formatted_outline = "\n".join([f"{i+1}. {section}" for i, section in enumerate(sections)])
        shared["outline"] = formatted_outline

        # Display the results
        print("\n===== OUTLINE (YAML) =====\n")
        print(yaml.dump(exec_res, default_flow_style=False))
        print("\n===== PARSED OUTLINE =====\n")
        print(formatted_outline)
        print("\n=========================\n")
        print(f"Parsed {len(sections)} sections: {sections}")

        return "default"

class WriteSimpleContent(BatchNode):
    def prep(self, shared):
        # Return the list of sections to process
        return shared.get("sections", [])

    def exec(self, section):
        prompt = f"""
Write a short paragraph (MAXIMUM 100 WORDS) about this section:

{section}

Requirements:
- Explain the idea in simple, easy-to-understand terms
- Use everyday language, avoiding jargon
- Keep it very concise (no more than 100 words)
- Include one brief example or analogy
"""
        return section, call_llm(prompt)

    def post(self, shared, prep_res, exec_res_list):
        # Create a dictionary of section: content
        section_contents = {}
        all_content = []

        for section, content in exec_res_list:
            section_contents[section] = content
            all_content.append(f"## {section}\n\n{content}\n")

        shared["section_contents"] = section_contents

        # Combine all content into a single draft
        shared["draft"] = "\n".join(all_content)

        print("\n===== SECTION CONTENTS =====\n")
        for section, content in section_contents.items():
            print(f"--- {section} ---")
            print(content)
            print()
        print("===========================\n")

        return "default"

class ApplyStyle(Node):
    def prep(self, shared):
        """
        Get the draft from shared data
        """
        return shared["draft"]

    def exec(self, draft):
        """
        Apply a specific style to the article
        """
        prompt = f"""
Rewrite the following draft in a conversational, engaging style:

{draft}

Make it:
- Conversational and warm in tone
- Include rhetorical questions that engage the reader
- Add analogies and metaphors where appropriate
- Include a strong opening and conclusion
"""
        return call_llm(prompt)

    def post(self, shared, prep_res, exec_res):
        """
        Store the final article in shared data
        """
        shared["final_article"] = exec_res

        print("\n===== FINAL ARTICLE =====\n")
        print(exec_res)
        print("\n========================\n")

        return "default"

View File: cookbook/pocketflow-workflow/requirements.txt

@@ -0,0 +1,3 @@
pocketflow>=0.1.0
openai>=1.0.0
pyyaml>=6.0

View File: cookbook/pocketflow-workflow/utils.py

@@ -0,0 +1,14 @@
import os
from openai import OpenAI

def call_llm(prompt):
    client = OpenAI(api_key=os.environ.get("OPENAI_API_KEY", "your-api-key"))
    r = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}]
    )
    return r.choices[0].message.content

# Example usage
if __name__ == "__main__":
    print(call_llm("Tell me a short joke"))