add map reduce tutorial

This commit is contained in:
zachary62 2025-03-22 12:44:33 -04:00
parent 7411a9127b
commit eb1c721e00
15 changed files with 420 additions and 52 deletions

View File

@@ -67,6 +67,7 @@ From there, it's easy to implement popular design patterns like ([Multi-](https:
 | [Chat](https://github.com/The-Pocket/PocketFlow/tree/main/cookbook/pocketflow-chat) | ☆☆☆ <br> *Dummy* | A basic chat bot with conversation history |
 | [RAG](https://github.com/The-Pocket/PocketFlow/tree/main/cookbook/pocketflow-rag) | ☆☆☆ <br> *Dummy* | A simple Retrieval-augmented Generation process |
 | [Workflow](https://github.com/The-Pocket/PocketFlow/tree/main/cookbook/pocketflow-workflow) | ☆☆☆ <br> *Dummy* | A writing workflow that outlines, writes content, and applies styling |
+| [Map-Reduce](https://github.com/The-Pocket/PocketFlow/tree/main/cookbook/pocketflow-map-reduce) | ☆☆☆ <br> *Dummy* | A resume qualification processor using map-reduce pattern for batch evaluation |
 | [Agent](https://github.com/The-Pocket/PocketFlow/tree/main/cookbook/pocketflow-agent) | ☆☆☆ <br> *Dummy* | A research agent that can search the web and answer questions |
 | [Streaming](https://github.com/The-Pocket/PocketFlow/tree/main/cookbook/pocketflow-llm-streaming) | ☆☆☆ <br> *Dummy* | A real-time LLM streaming demo with user interrupt capability |
 | [Parallel](https://github.com/The-Pocket/PocketFlow/tree/main/cookbook/pocketflow-parallel-batch) | ★☆☆ <br> *Beginner* | A parallel execution demo that shows 3x speedup |

View File

@@ -0,0 +1,78 @@
# Resume Qualification - Map Reduce Example
A PocketFlow example that demonstrates how to implement a Map-Reduce pattern for processing and evaluating resumes.
## Features
- Read and process multiple resume files using a Map-Reduce pattern
- Evaluate each resume individually using an LLM with structured YAML output
- Determine if candidates qualify for technical roles based on specific criteria
- Aggregate results to generate qualification statistics and summaries
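
The map-reduce idea behind these features can be sketched without PocketFlow at all: map an evaluation function over each resume, then reduce the per-resume results into one summary. In the sketch below, `evaluate` is a purely illustrative stand-in for the LLM call used by the real example.

```python
# Minimal map-reduce sketch of the resume pipeline (illustrative only;
# the actual example delegates evaluation to an LLM via PocketFlow nodes).

def evaluate(content: str) -> dict:
    # Hypothetical stand-in for the LLM: qualify anyone whose resume
    # mentions a degree and some number of years of experience.
    qualifies = "degree" in content.lower() and "years" in content.lower()
    return {"qualifies": qualifies}

def map_reduce(resumes: dict) -> dict:
    # Map phase: evaluate each resume independently.
    evaluations = {name: evaluate(text) for name, text in resumes.items()}
    # Reduce phase: aggregate the per-resume results into one summary.
    qualified = [n for n, e in evaluations.items() if e["qualifies"]]
    return {
        "total_candidates": len(evaluations),
        "qualified_count": len(qualified),
        "qualified_names": qualified,
    }

resumes = {
    "resume1.txt": "Bachelor's degree, 5 years of experience in Python.",
    "resume2.txt": "Self-taught, no formal experience listed.",
}
print(map_reduce(resumes))
```

Because each map step is independent, swapping in an LLM call (or running the map phase in parallel) changes nothing about the reduce step.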
## Getting Started
1. Install the required dependencies:
```bash
pip install -r requirements.txt
```
2. Set your OpenAI API key as an environment variable:
```bash
export OPENAI_API_KEY=your_api_key_here
```
3. Run the application:
```bash
python main.py
```
## How It Works
The workflow follows a classic Map-Reduce pattern with three sequential nodes:
```mermaid
flowchart LR
    ReadResumes[Map: Read Resumes] --> EvaluateResumes[Batch: Evaluate Resumes]
EvaluateResumes --> ReduceResults[Reduce: Aggregate Results]
```
Here's what each node does:
1. **ReadResumesNode (Map Phase)**: Reads all resume files from the data directory and stores them in the shared data store
2. **EvaluateResumesNode (Batch Processing)**: Processes each resume individually using an LLM to determine if candidates qualify
3. **ReduceResultsNode (Reduce Phase)**: Aggregates evaluation results and produces a summary of qualified candidates
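
Each of these nodes follows the prep → exec → post lifecycle visible in `nodes.py` in this commit: `prep` reads from the shared store, `exec` does the work, and `post` writes results back. A library-free toy version of that contract (class and method names here are illustrative, not PocketFlow itself):

```python
class ToyNode:
    """Minimal imitation of the prep -> exec -> post node contract
    (illustrative sketch, not the real PocketFlow base class)."""
    def prep(self, shared):
        return None
    def exec(self, prep_res):
        return None
    def post(self, shared, prep_res, exec_res):
        return "default"
    def run(self, shared):
        prep_res = self.prep(shared)
        exec_res = self.exec(prep_res)
        return self.post(shared, prep_res, exec_res)

class CountNode(ToyNode):
    def prep(self, shared):
        # Pull input out of the shared store.
        return shared["items"]
    def exec(self, items):
        # Pure computation, no access to the shared store.
        return len(items)
    def post(self, shared, prep_res, exec_res):
        # Write the result back and return an action string.
        shared["count"] = exec_res
        return "default"

shared = {"items": ["a", "b", "c"]}
CountNode().run(shared)
print(shared["count"])  # 3
```

Keeping `exec` free of shared-store access is what makes the batch (map) phase easy to parallelize later.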
## Files
- [`main.py`](./main.py): Main entry point for running the resume qualification workflow
- [`flow.py`](./flow.py): Defines the flow that connects the nodes
- [`nodes.py`](./nodes.py): Contains the node classes for each step in the workflow
- [`utils.py`](./utils.py): Utility functions including the LLM wrapper
- [`requirements.txt`](./requirements.txt): Lists the required dependencies
- [`data/`](./data/): Directory containing sample resume files for evaluation
## Example Output
```
Starting resume qualification processing...
===== Resume Qualification Summary =====
Total candidates evaluated: 5
Qualified candidates: 2 (40.0%)
Qualified candidates:
- Emily Johnson
- John Smith
Detailed evaluation results:
✗ Michael Williams (resume3.txt)
✓ Emily Johnson (resume2.txt)
✗ Lisa Chen (resume4.txt)
✗ Robert Taylor (resume5.txt)
✓ John Smith (resume1.txt)
Resume processing complete!
```

View File

@@ -0,0 +1,25 @@
John Smith
Software Engineer
Education:
- Master of Computer Science, Stanford University, 2018
- Bachelor of Computer Science, MIT, 2016
Experience:
- Senior Software Engineer, Google, 2019-present
* Led the development of cloud infrastructure projects
* Implemented scalable solutions using Kubernetes and Docker
* Reduced system latency by 40% through optimization
- Software Developer, Microsoft, 2016-2019
* Worked on Azure cloud services
* Built RESTful APIs for enterprise solutions
Skills:
- Programming: Python, Java, C++, JavaScript
- Technologies: Docker, Kubernetes, AWS, Azure
- Tools: Git, Jenkins, Jira
Projects:
- Developed a recommendation engine that increased user engagement by 25%
- Created a sentiment analysis tool using NLP techniques

View File

@@ -0,0 +1,25 @@
Emily Johnson
Data Scientist
Education:
- Ph.D. in Statistics, UC Berkeley, 2020
- Master of Science in Mathematics, UCLA, 2016
Experience:
- Data Scientist, Netflix, 2020-present
* Developed machine learning models for content recommendation
* Implemented A/B testing frameworks to optimize user experience
* Collaborated with product teams to define metrics and KPIs
- Data Analyst, Amazon, 2016-2020
* Analyzed user behavior patterns to improve conversion rates
* Created dashboards and visualizations for executive decision-making
Skills:
- Programming: R, Python, SQL
- Machine Learning: TensorFlow, PyTorch, scikit-learn
- Data Visualization: Tableau, PowerBI, matplotlib
Publications:
- "Advances in Recommendation Systems" - Journal of Machine Learning, 2021
- "Statistical Methods for Big Data" - Conference on Data Science, 2019

View File

@@ -0,0 +1,25 @@
Michael Williams
Marketing Manager
Education:
- MBA, Harvard Business School, 2015
- Bachelor of Arts in Communications, NYU, 2010
Experience:
- Marketing Director, Apple, 2018-present
* Managed a team of 15 marketing professionals
* Developed and executed global marketing campaigns
* Increased brand awareness by 30% through digital initiatives
- Marketing Manager, Coca-Cola, 2015-2018
* Led product launches across North America
* Coordinated with external agencies on advertising campaigns
Skills:
- Digital Marketing: SEO, SEM, Social Media Marketing
- Analytics: Google Analytics, Adobe Analytics
- Tools: HubSpot, Salesforce, Marketo
Achievements:
- Marketing Excellence Award, 2020
- Led campaign that won Cannes Lions Award, 2019

View File

@@ -0,0 +1,28 @@
Lisa Chen
Frontend Developer
Education:
- Bachelor of Fine Arts, Rhode Island School of Design, 2019
Experience:
- UI/UX Designer, Airbnb, 2020-present
* Designed user interfaces for mobile and web applications
* Created wireframes and prototypes for new features
* Conducted user research and usability testing
- Junior Designer, Freelance, 2019-2020
* Worked with small businesses on branding and website design
* Developed responsive web designs using HTML, CSS, and JavaScript
Skills:
- Design: Figma, Sketch, Adobe XD
- Development: HTML, CSS, JavaScript, React
- Tools: Git, Zeplin
Portfolio Highlights:
- Redesigned checkout flow resulting in 15% conversion increase
- Created custom icon set for mobile application
- Designed responsive email templates
Certifications:
- UI/UX Design Certificate, Coursera, 2019

View File

@@ -0,0 +1,28 @@
Robert Taylor
Sales Representative
Education:
- Bachelor of Business Administration, University of Texas, 2017
Experience:
- Account Executive, Salesforce, 2019-present
* Exceeded sales targets by 25% for three consecutive quarters
* Managed a portfolio of 50+ enterprise clients
* Developed and implemented strategic account plans
- Sales Associate, Oracle, 2017-2019
* Generated new business opportunities through cold calling
* Assisted senior sales representatives with client presentations
Skills:
- CRM Systems: Salesforce, HubSpot
- Communication: Negotiation, Public Speaking
- Tools: Microsoft Office Suite, Google Workspace
Achievements:
- Top Sales Representative Award, Q2 2020
- President's Club, 2021
Interests:
- Volunteer sales coach for local small businesses
- Member of Toastmasters International

View File

@@ -0,0 +1,15 @@
from pocketflow import Flow
from nodes import ReadResumesNode, EvaluateResumesNode, ReduceResultsNode

def create_resume_processing_flow():
    """Create a map-reduce flow for processing resumes."""
    # Create nodes
    read_resumes_node = ReadResumesNode()
    evaluate_resumes_node = EvaluateResumesNode()
    reduce_results_node = ReduceResultsNode()

    # Connect nodes
    read_resumes_node >> evaluate_resumes_node >> reduce_results_node

    # Create flow
    return Flow(start=read_resumes_node)

View File

@@ -0,0 +1,25 @@
from flow import create_resume_processing_flow

def main():
    # Initialize shared store
    shared = {}

    # Create the resume processing flow
    resume_flow = create_resume_processing_flow()

    # Run the flow
    print("Starting resume qualification processing...")
    resume_flow.run(shared)

    # Display final summary information (additional to what's already printed in ReduceResultsNode)
    if "summary" in shared:
        print("\nDetailed evaluation results:")
        for filename, evaluation in shared.get("evaluations", {}).items():
            qualified = "✓" if evaluation.get("qualifies", False) else "✗"
            name = evaluation.get("candidate_name", "Unknown")
            print(f"{qualified} {name} ({filename})")

    print("\nResume processing complete!")

if __name__ == "__main__":
    main()

View File

@@ -0,0 +1,106 @@
from pocketflow import Node, BatchNode
from utils import call_llm
import yaml
import os

class ReadResumesNode(Node):
    """Map phase: Read all resumes from the data directory into shared storage."""
    def exec(self, _):
        resume_files = {}
        data_dir = os.path.join(os.path.dirname(os.path.abspath(__file__)), "data")
        for filename in os.listdir(data_dir):
            if filename.endswith(".txt"):
                file_path = os.path.join(data_dir, filename)
                with open(file_path, 'r', encoding='utf-8') as file:
                    resume_files[filename] = file.read()
        return resume_files

    def post(self, shared, prep_res, exec_res):
        shared["resumes"] = exec_res
        return "default"

class EvaluateResumesNode(BatchNode):
    """Batch processing: Evaluate each resume to determine if the candidate qualifies."""
    def prep(self, shared):
        return list(shared["resumes"].items())

    def exec(self, resume_item):
        """Evaluate a single resume."""
        filename, content = resume_item
        prompt = f"""
Evaluate the following resume and determine if the candidate qualifies for an advanced technical role.

Criteria for qualification:
- At least a bachelor's degree in a relevant field
- At least 3 years of relevant work experience
- Strong technical skills relevant to the position

Resume:
{content}

Return your evaluation in YAML format:
```yaml
candidate_name: [Name of the candidate]
qualifies: [true/false]
reasons:
  - [First reason for qualification/disqualification]
  - [Second reason, if applicable]
```
"""
        response = call_llm(prompt)

        # Extract YAML content
        yaml_content = response.split("```yaml")[1].split("```")[0].strip() if "```yaml" in response else response
        result = yaml.safe_load(yaml_content)
        return (filename, result)

    def post(self, shared, prep_res, exec_res_list):
        shared["evaluations"] = {filename: result for filename, result in exec_res_list}
        return "default"

class ReduceResultsNode(Node):
    """Reduce node: Count and print out how many candidates qualify."""
    def prep(self, shared):
        return shared["evaluations"]

    def exec(self, evaluations):
        qualified_count = 0
        total_count = len(evaluations)
        qualified_candidates = []
        for filename, evaluation in evaluations.items():
            if evaluation.get("qualifies", False):
                qualified_count += 1
                qualified_candidates.append(evaluation.get("candidate_name", "Unknown"))
        summary = {
            "total_candidates": total_count,
            "qualified_count": qualified_count,
            "qualified_percentage": round(qualified_count / total_count * 100, 1) if total_count > 0 else 0,
            "qualified_names": qualified_candidates
        }
        return summary

    def post(self, shared, prep_res, exec_res):
        shared["summary"] = exec_res
        print("\n===== Resume Qualification Summary =====")
        print(f"Total candidates evaluated: {exec_res['total_candidates']}")
        print(f"Qualified candidates: {exec_res['qualified_count']} ({exec_res['qualified_percentage']}%)")
        if exec_res['qualified_names']:
            print("\nQualified candidates:")
            for name in exec_res['qualified_names']:
                print(f"- {name}")
        return "default"
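
The fenced-YAML extraction in `EvaluateResumesNode.exec` assumes the model wraps its answer in a yaml code fence and falls back to parsing the whole reply otherwise. The same logic can be exercised standalone (a sketch, not part of the commit; the fake reply below is invented):

```python
import yaml  # PyYAML, already listed in requirements.txt

FENCE = "`" * 3  # built at runtime to avoid literal triple backticks in this snippet

def parse_yaml_response(response: str) -> dict:
    """Extract a fenced YAML payload if present, else parse the whole reply."""
    marker = FENCE + "yaml"
    if marker in response:
        response = response.split(marker, 1)[1].split(FENCE, 1)[0]
    return yaml.safe_load(response.strip())

# A fabricated LLM reply with chatter around the fenced YAML block.
reply = (
    "Here is my evaluation:\n"
    f"{FENCE}yaml\n"
    "candidate_name: John Smith\n"
    "qualifies: true\n"
    f"{FENCE}\n"
)
result = parse_yaml_response(reply)
print(result)  # {'candidate_name': 'John Smith', 'qualifies': True}
```

Pulling the parsing into one helper also gives a single place to add validation (for example, checking that `qualifies` is a boolean) if the model ever returns malformed YAML.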

View File

@@ -0,0 +1,3 @@
pocketflow>=0.0.1
openai>=1.0.0
pyyaml>=6.0

View File

@@ -0,0 +1,14 @@
import os
from openai import OpenAI

def call_llm(prompt):
    client = OpenAI(api_key=os.environ.get("OPENAI_API_KEY", "your-api-key"))
    r = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}]
    )
    return r.choices[0].message.content

# Example usage
if __name__ == "__main__":
    print(call_llm("Tell me a short joke"))
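
`call_llm` above has no error handling, so one transient API failure would abort a whole batch mid-run. A generic retry wrapper could guard it (a sketch; `with_retries` and its parameters are invented for illustration, and the `flaky` function below only simulates transient failures):

```python
import time

def with_retries(fn, retries=3, delay=1.0):
    """Call fn(), retrying on any exception with a fixed delay between attempts."""
    for attempt in range(retries):
        try:
            return fn()
        except Exception:
            if attempt == retries - 1:
                raise  # out of attempts: surface the last error
            time.sleep(delay)

# Usage sketch: wrap the real LLM call.
# answer = with_retries(lambda: call_llm("Tell me a short joke"))

# Demonstration with a stand-in that fails twice, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient error")
    return "ok"

print(with_retries(flaky, retries=5, delay=0.0))  # ok
```

A fixed delay is the simplest choice; exponential backoff is the usual refinement for rate-limited APIs.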

View File

@@ -5,7 +5,6 @@ A PocketFlow example that demonstrates an article writing workflow using a seque
 ## Features
 - Generate a simple outline with up to 3 main sections using YAML structured output
-- Process each section independently using batch processing
 - Write concise (100 words max) content for each section in simple terms
 - Apply a conversational, engaging style to the final article
@@ -41,14 +40,14 @@ The workflow consists of three sequential nodes:
 ```mermaid
 graph LR
-    Outline[Generate Outline] --> Write[Batch Write Content]
+    Outline[Generate Outline] --> Write[Write Content]
     Write --> Style[Apply Style]
 ```
 Here's what each node does:
 1. **Generate Outline**: Creates a simple outline with up to 3 main sections using YAML structured output
-2. **Write Simple Content**: Processes each section in parallel (as a BatchNode), writing a concise 100-word explanation for each
+2. **Write Simple Content**: Writes a concise 100-word explanation for each section
 3. **Apply Style**: Rewrites the combined content in a conversational, engaging style
 ## Files
@@ -68,44 +67,51 @@ Here's what each node does:
 ===== OUTLINE (YAML) =====
 sections:
-- Understanding AI Safety
-- Challenges in Ensuring AI Safety
-- Strategies for Mitigating AI Risks
+- Introduction to AI Safety
+- Key Challenges in AI Safety
+- Strategies for Ensuring AI Safety
 ===== PARSED OUTLINE =====
-1. Understanding AI Safety
-2. Challenges in Ensuring AI Safety
-3. Strategies for Mitigating AI Risks
+1. Introduction to AI Safety
+2. Key Challenges in AI Safety
+3. Strategies for Ensuring AI Safety
 =========================
-Parsed 3 sections: ['Understanding AI Safety', 'Challenges in Ensuring AI Safety', 'Strategies for Mitigating AI Risks']
 ===== SECTION CONTENTS =====
---- Understanding AI Safety ---
+--- Introduction to AI Safety ---
-Understanding AI safety is about ensuring that artificial intelligence systems work safely and as intended. Just like you wouldn't want a car to suddenly speed up on its own, we want AI to be predictable and reliable. For example, if an AI were to help cook, we need to make sure it identifies ingredients correctly and doesn't start a fire. By focusing on AI safety, we aim to prevent accidents and ensure these systems help rather than harm us.
+AI Safety is about making sure that artificial intelligence (AI) systems are helpful and not harmful. Imagine teaching a robot to help with chores. AI Safety is like setting ground rules for the robot so it doesn't accidentally cause trouble, like mistaking a pet for a toy. By ensuring AI systems understand their tasks and limitations, we can trust them to act safely. It's about creating guidelines and checks to ensure AI assists us without unintended consequences.
---- Challenges in Ensuring AI Safety ---
+--- Key Challenges in AI Safety ---
-Making sure AI is safe involves several challenges. Imagine teaching a robot to understand commands correctly; if it misinterprets instructions, things could go wrong. It's like teaching a toddler to cross the street safely—they need to understand when and where it's safe to walk. Similarly, AI must be programmed to make safe decisions. Ensuring AI doesn't act unpredictably and behaves as intended, even in new situations, is crucial. Balancing innovation and safety is key, just like making sure a car is fast but also has reliable brakes to prevent accidents.
+AI safety is about ensuring that artificial intelligence systems operate in ways that are beneficial and not harmful. One key challenge is making sure AI makes decisions that align with human values. Imagine teaching a robot to fetch coffee, but it ends up knocking things over because it doesn't understand the mess it creates. Similarly, if AI systems don't fully grasp human intentions, they might act in unexpected ways. The task is to make AI smart enough to achieve goals without causing problems, much like training a puppy to follow rules without chewing on your shoes.
---- Strategies for Mitigating AI Risks ---
+--- Strategies for Ensuring AI Safety ---
-Mitigating AI risks is about making sure AI technologies help us without causing harm. It's like having seat belts in cars: they allow us to drive safely by minimizing dangers. To manage AI risks, we can use guidelines and rules to ensure AI behaves as expected. Training AI with diverse data is crucial so it doesn't develop biases, much like teaching children to respect different cultures. Additionally, we can create "off switches" for AI systems, similar to remote controls, to turn them off if they start acting unexpectedly. These steps help us safely enjoy the benefits AI offers.
+Ensuring AI safety is about making sure artificial intelligence behaves as expected and doesn't cause harm. Imagine AI as a new driver on the road; we need rules and safeguards to prevent accidents. By testing AI systems under different conditions, setting clear rules for their behavior, and keeping human oversight, we can manage risks. For instance, just as cars have brakes to ensure safety, AI systems need to have fail-safes. This helps in building trust and avoiding unexpected issues, keeping both humans and AI on the right track.
 ===========================
 ===== FINAL ARTICLE =====
-Hey there! Have you ever wondered about the safety of artificial intelligence and how it fits into our world? It's a bit like making sure a pet behaves itself—you want your dog to fetch the ball, not run off with your slippers! At its heart, understanding AI safety means ensuring these high-tech systems do what they're supposed to without causing a ruckus. Just as you wouldn't want your car to suddenly speed up without warning, we hope for AI to be as reliable as your morning coffee brewing on schedule. Imagine an AI assistant in your kitchen—it should know the difference between sugar and salt, and definitely not turn your peaceful cooking session into a fire drill. So, by focusing on AI safety, we're aiming for a world where these systems help us, without creating chaos.
+# Welcome to the World of AI Safety
-Now, navigating the challenges of AI safety? That's quite the adventure! Picture this: you're trying to teach a robot your way of doing things. It's like teaching a toddler to cross a busy street. The little one needs to know when to stop, when to go, and how to manage all the things happening around them. Similarly, our AI pals need to be programmed to make safe decisions, even if they're seeing the world for the first time through their digital eyes. It's this delicate dance between innovation and safety—like crafting a sports car that's both exhilaratingly fast and equipped with top-notch brakes. We don't want surprises when it comes to AI behavior, right?
+Have you ever wondered what it would be like to have your very own robot helping you around the house? Sounds like a dream, right? But let's hit pause for a moment. What if this robot mistook your fluffy cat for a toy? That's exactly where AI Safety comes in. Think of AI Safety as setting some friendly ground rules for your household helper, ensuring that it knows the difference between doing chores and causing a bit of chaos. It's all about making sure our AI allies play by the rules, making life easier without those pesky accidental hiccups.
-So, how do we juggle these AI risks and keep things safe? Imagine AI guidelines and protocols like the seat belts in your car—designed to keep you secure while letting you enjoy the ride. By setting rules, we ensure AI behaves as expected, kind of like a teacher maintaining order in a classroom. And just like we educate kids to appreciate the diverse world around them, we train AI with a wide array of data to avoid any unfair biases. Plus, isn't it reassuring to know we can install an "off switch" on these systems? Think of it like having a remote control to power down the device if it starts acting up. These strategies are our way of making sure we can relish the wonders of AI, all while knowing we've got everything under control.
+# Navigating the Maze of AI Challenges
-In a nutshell, AI safety is about bridging the gap between groundbreaking technology and everyday peace of mind. It's this journey of making technology a trustworthy companion rather than a wild card. After all, it's all about enjoying the benefits without the hiccups—who wouldn't want that kind of harmony in their tech-driven life?
+Picture this: you've asked your trusty robot to grab you a cup of coffee. But instead, it sends mugs flying and spills coffee because it doesn't quite get the concept of a mess. Frustrating, isn't it? One of the biggest hurdles in AI Safety is aligning AI decisions with our human values and intentions. It's like training a puppy not to gnaw on your favorite pair of shoes. Our job is to teach AI how to reach its goals without stepping on our toes, all while being as reliable and lovable as a well-trained pup.
+# Steering AI Toward Safe Horizons
+Now, how do we keep our AI friends on the straight and narrow? Imagine AI as a new driver learning to navigate the roads of life. Just like we teach new drivers the rules of the road and equip cars with brakes for safety, we provide AI with guidelines and fail-safes to prevent any unintended mishaps. Testing AI systems in various scenarios and keeping a watchful human eye on them ensures they don't veer off track. It's all about building trust and creating a partnership where both humans and AI are cruising smoothly together.
+# Wrapping It Up
+At the end of the day, AI Safety is about creating a harmonious relationship between humans and machines, where we trust our metal companions to support us without the fear of unexpected surprises. By setting boundaries and ensuring understanding, we're not just building smarter machines—we're crafting a future where AI and humanity can thrive together. So, next time you're imagining that helpful robot assistant, rest easy knowing that AI Safety is making sure it's ready to lend a hand without dropping the ball—or your coffee mug!
 ========================
@@ -113,17 +119,7 @@ In a nutshell, AI safety is about bridging the gap between groundbreaking techno
 === Workflow Completed ===
 Topic: AI Safety
-Outline Length: 100 characters
+Outline Length: 96 characters
-Draft Length: 1707 characters
+Draft Length: 1690 characters
-Final Article Length: 2531 characters
+Final Article Length: 2266 characters
 ```
-## Extending the Example
-You can easily extend this example by:
-1. Adding more processing nodes to the workflow
-2. Modifying the prompts in the node classes
-3. Implementing branching logic based on the content generated
-4. Adding user interaction between workflow steps
-5. Using different structured output formats (JSON, XML, etc.)

View File

@@ -43,16 +43,18 @@ sections:
 print(formatted_outline)
 print("\n=========================\n")
-print(f"Parsed {len(sections)} sections: {sections}")
 return "default"
-class WriteSimpleContent(BatchNode):
+class WriteSimpleContent(Node):
 def prep(self, shared):
-# Return the list of sections to process
+# Get the list of sections to process
 return shared.get("sections", [])
-def exec(self, section):
+def exec(self, sections):
+all_sections_content = []
+section_contents = {}
+for section in sections:
 prompt = f"""
 Write a short paragraph (MAXIMUM 100 WORDS) about this section:
@@ -64,21 +66,18 @@ Requirements:
 - Keep it very concise (no more than 100 words)
 - Include one brief example or analogy
 """
-return section, call_llm(prompt)
+content = call_llm(prompt)
-def post(self, shared, prep_res, exec_res_list):
-# Create a dictionary of section: content
-section_contents = {}
-all_content = []
-for section, content in exec_res_list:
 section_contents[section] = content
-all_content.append(f"## {section}\n\n{content}\n")
+all_sections_content.append(f"## {section}\n\n{content}\n")
+return sections, section_contents, "\n".join(all_sections_content)
+def post(self, shared, prep_res, exec_res):
+sections, section_contents, draft = exec_res
+# Store the section contents and draft
 shared["section_contents"] = section_contents
+shared["draft"] = draft
-# Combine all content into a single draft
-shared["draft"] = "\n".join(all_content)
 print("\n===== SECTION CONTENTS =====\n")
 for section, content in section_contents.items():

View File

@@ -1,3 +1,3 @@
-pocketflow>=0.1.0
+pocketflow>=0.0.1
 openai>=1.0.0
 pyyaml>=6.0