add local HITL example

zachary62 2025-05-27 00:57:52 -04:00
parent 55bd7c6819
commit d13180f835
7 changed files with 109 additions and 49 deletions

README.md

@@ -77,6 +77,7 @@ From there, it's easy to implement popular design patterns like ([Multi-](https:
 | [Chat Guardrail](https://github.com/The-Pocket/PocketFlow/tree/main/cookbook/pocketflow-chat-guardrail) | ☆☆☆ <sup>*Dummy*</sup> | A travel advisor chatbot that only processes travel-related queries |
 | [Majority Vote](https://github.com/The-Pocket/PocketFlow/tree/main/cookbook/pocketflow-majority-vote) | ☆☆☆ <sup>*Dummy*</sup> | Improve reasoning accuracy by aggregating multiple solution attempts |
 | [Map-Reduce](https://github.com/The-Pocket/PocketFlow/tree/main/cookbook/pocketflow-map-reduce) | ☆☆☆ <sup>*Dummy*</sup> | Batch resume qualification using map-reduce pattern |
+| [Cmd HITL](https://github.com/The-Pocket/PocketFlow/tree/main/cookbook/pocketflow-cmd-hitl) | ☆☆☆ <sup>*Dummy*</sup> | A command-line joke generator with human-in-the-loop feedback |
 | [Multi-Agent](https://github.com/The-Pocket/PocketFlow/tree/main/cookbook/pocketflow-multi-agent) | ★☆☆ <sup>*Beginner*</sup> | A Taboo word game for async communication between 2 agents |
 | [Supervisor](https://github.com/The-Pocket/PocketFlow/tree/main/cookbook/pocketflow-supervisor) | ★☆☆ <sup>*Beginner*</sup> | Research agent is getting unreliable... Let's build a supervision process |
 | [Parallel](https://github.com/The-Pocket/PocketFlow/tree/main/cookbook/pocketflow-parallel-batch) | ★☆☆ <sup>*Beginner*</sup> | A parallel execution demo that shows 3x speedup |

cookbook/pocketflow-cmd-hitl/README.md

@@ -0,0 +1,80 @@
# PocketFlow Command-Line Joke Generator (Human-in-the-Loop Example)
A simple, interactive command-line application that generates jokes based on user-provided topics and direct human feedback. This serves as a clear example of a Human-in-the-Loop (HITL) workflow orchestrated by PocketFlow.
## Features
- **Interactive Joke Generation**: Ask for jokes on any topic.
- **Human-in-the-Loop Feedback**: Dislike a joke? Your feedback directly influences the next generation attempt.
- **Minimalist Design**: A straightforward example of using PocketFlow for HITL tasks.
- **Powered by LLMs**: Uses Anthropic Claude via an API call for joke generation.
## Getting Started
This project is part of the PocketFlow cookbook examples. It's assumed you have already cloned the [PocketFlow repository](https://github.com/the-pocket/PocketFlow) and are in the `cookbook/pocketflow-cmd-hitl` directory.
1. **Install required dependencies**:
```bash
pip install -r requirements.txt
```
2. **Set up your Anthropic API key**:
The application uses Anthropic Claude to generate jokes. You need to set your API key as an environment variable.
```bash
export ANTHROPIC_API_KEY="your-anthropic-api-key-here"
```
You can test if your `call_llm.py` utility is working by running it directly:
```bash
python utils/call_llm.py
```
3. **Run the Joke Generator**:
```bash
python main.py
```
## How It Works
The system uses a simple PocketFlow workflow:
```mermaid
flowchart TD
GetTopic[GetTopicNode] --> GenerateJoke[GenerateJokeNode]
GenerateJoke --> GetFeedback[GetFeedbackNode]
GetFeedback -- "Approve" --> Z((End))
GetFeedback -- "Disapprove" --> GenerateJoke
```
1. **GetTopicNode**: Prompts the user to enter a topic for the joke.
2. **GenerateJokeNode**: Sends the topic (and any previously disliked jokes as context) to an LLM to generate a new joke.
3. **GetFeedbackNode**: Shows the joke to the user and asks if they liked it.
* If **yes** (approved), the application ends.
* If **no** (disapproved), the disliked joke is recorded, and the flow loops back to `GenerateJokeNode` to try again.
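The regenerate-until-approved loop above can be sketched in plain Python (a simplified sketch, independent of PocketFlow's `Node`/`Flow` classes; `generate_joke` and `get_feedback` stand in for the LLM call and the user prompt, and `max_attempts` is an addition for illustration — the real flow loops until approval):

```python
def run_hitl_loop(generate_joke, get_feedback, max_attempts=5):
    """Regenerate jokes until one is approved, feeding disliked jokes back as context."""
    disliked_jokes = []
    for _ in range(max_attempts):
        joke = generate_joke(disliked_jokes)   # like GenerateJokeNode: disliked jokes shape the prompt
        if get_feedback(joke) == "Approve":    # like GetFeedbackNode returning the "Approve" action
            return joke
        disliked_jokes.append(joke)            # recorded so the next attempt avoids it
    return None  # give up after max_attempts (illustrative cap only)
```

Here the human sits in the middle of the loop: each `get_feedback` call is a blocking prompt, and its answer decides whether the flow ends or cycles back.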
## Sample Output
Here's an example of an interaction with the Joke Generator:
```
Welcome to the Command-Line Joke Generator!
What topic would you like a joke about? Pocket Flow: 100-line LLM framework
Joke: Pocket Flow: Finally, an LLM framework that fits in your pocket! Too bad your model still needs a data center.
Did you like this joke? (yes/no): no
Okay, let me try another one.
Joke: Pocket Flow: A 100-line LLM framework where 99 lines are imports and the last line is `print("TODO: implement intelligence")`.
Did you like this joke? (yes/no): yes
Great! Glad you liked it.
Thanks for using the Joke Generator!
```
## Files
- [`main.py`](./main.py): Entry point for the application.
- [`flow.py`](./flow.py): Defines the PocketFlow graph and node connections.
- [`nodes.py`](./nodes.py): Contains the definitions for `GetTopicNode`, `GenerateJokeNode`, and `GetFeedbackNode`.
- [`utils/call_llm.py`](./utils/call_llm.py): Utility function to interact with the LLM (Anthropic Claude).
- [`requirements.txt`](./requirements.txt): Lists project dependencies.
- [`docs/design.md`](./docs/design.md): The design document for this application.

cookbook/pocketflow-cmd-hitl/flow.py

@@ -1,26 +1,15 @@
 from pocketflow import Flow
-from .nodes import GetTopicNode, GenerateJokeNode, GetFeedbackNode
+from nodes import GetTopicNode, GenerateJokeNode, GetFeedbackNode

 def create_joke_flow() -> Flow:
     """Creates and returns the joke generation flow."""
     # Create nodes
     get_topic_node = GetTopicNode()
     generate_joke_node = GenerateJokeNode()
     get_feedback_node = GetFeedbackNode()

-    # Connect nodes
-    # GetTopicNode -> GenerateJokeNode (default action)
     get_topic_node >> generate_joke_node
-    # GenerateJokeNode -> GetFeedbackNode (default action)
     generate_joke_node >> get_feedback_node
+    get_feedback_node - "Disapprove" >> generate_joke_node

-    # GetFeedbackNode actions:
-    #   "Approve" -> Ends the flow (no further connection)
-    #   "Disapprove" -> GenerateJokeNode
-    # get_feedback_node.connect_to(generate_joke_node, action="Disapprove")
-    get_feedback_node - "Disapprove" >> generate_joke_node # Alternative syntax

-    # Create flow starting with the input node
     joke_flow = Flow(start=get_topic_node)
     return joke_flow
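The `- "Disapprove" >>` wiring above routes on the action string that a node's `post` returns; conceptually it builds a transition table like this plain-Python sketch (an illustration, not PocketFlow's actual internals):

```python
# (node, action) -> next node; a missing entry means the flow ends there.
transitions = {
    ("GetTopicNode", "default"): "GenerateJokeNode",
    ("GenerateJokeNode", "default"): "GetFeedbackNode",
    ("GetFeedbackNode", "Disapprove"): "GenerateJokeNode",
}

def next_node(current, action):
    """Look up the successor; None means the flow should stop."""
    return transitions.get((current, action))
```

Because `("GetFeedbackNode", "Approve")` has no entry, an approved joke simply ends the flow — no explicit "end" node is needed.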

cookbook/pocketflow-cmd-hitl/main.py

@@ -1,10 +1,9 @@
-from .flow import create_joke_flow
+from flow import create_joke_flow

 def main():
     """Main function to run the joke generator application."""
     print("Welcome to the Command-Line Joke Generator!")

     # Initialize the shared store as per the design
     shared = {
         "topic": None,
         "current_joke": None,
@@ -12,10 +11,7 @@ def main():
         "user_feedback": None
     }

-    # Create the flow
     joke_flow = create_joke_flow()
-    # Run the flow
     joke_flow.run(shared)

     print("\nThanks for using the Joke Generator!")

cookbook/pocketflow-cmd-hitl/nodes.py

@@ -1,5 +1,5 @@
 from pocketflow import Node
-from .utils.call_llm import call_llm
+from utils.call_llm import call_llm

 class GetTopicNode(Node):
     """Prompts the user to enter the topic for the joke."""
@@ -8,36 +8,28 @@ class GetTopicNode(Node):
     def post(self, shared, _prep_res, exec_res):
         shared["topic"] = exec_res
-        # No specific action needed, default will move to next connected node
         return "default"

 class GenerateJokeNode(Node):
     """Generates a joke based on the topic and any previous feedback."""
     def prep(self, shared):
-        topic = shared.get("topic", "anything") # Default to "anything" if no topic
+        topic = shared.get("topic", "anything")
         disliked_jokes = shared.get("disliked_jokes", [])
-        prompt = f"Please generate a joke about {topic}."
+        prompt = f"Please generate a one-liner joke about: {topic}. Make it short and funny."
         if disliked_jokes:
             disliked_str = "; ".join(disliked_jokes)
             prompt = f"The user did not like the following jokes: [{disliked_str}]. Please generate a new, different joke about {topic}."
         return prompt

     def exec(self, prep_res):
-        return call_llm(prep_res) # prep_res is the prompt
+        return call_llm(prep_res)

     def post(self, shared, _prep_res, exec_res):
         shared["current_joke"] = exec_res
         print(f"\nJoke: {exec_res}")
         return "default"

 class GetFeedbackNode(Node):
     """Presents the joke to the user and asks for approval."""
-    # prep is not strictly needed as current_joke is printed by GenerateJokeNode
-    # but we can read it if we want to display it again here for example.
-    # def prep(self, shared):
-    #     return shared.get("current_joke")
     def exec(self, _prep_res):
         while True:
             feedback = input("Did you like this joke? (yes/no): ").strip().lower()
@@ -49,8 +41,8 @@ class GetFeedbackNode(Node):
         if exec_res in ["yes", "y"]:
             shared["user_feedback"] = "approve"
             print("Great! Glad you liked it.")
-            return "Approve" # Action to end the flow
-        else: # "no" or "n"
+            return "Approve"
+        else:
             shared["user_feedback"] = "disapprove"
             current_joke = shared.get("current_joke")
             if current_joke:
@@ -58,4 +50,4 @@ class GetFeedbackNode(Node):
                 shared["disliked_jokes"] = []
             shared["disliked_jokes"].append(current_joke)
             print("Okay, let me try another one.")
-            return "Disapprove" # Action to loop back to GenerateJokeNode
+            return "Disapprove"

cookbook/pocketflow-cmd-hitl/requirements.txt

@@ -1,7 +1,2 @@
-# Add any project-specific dependencies here.
-# For example:
-# openai
-# anthropic
 pocketflow>=0.0.1
-openai>=1.0.0
+anthropic>=0.20.0 # Or a recent version

cookbook/pocketflow-cmd-hitl/utils/call_llm.py

@@ -1,17 +1,24 @@
+from anthropic import Anthropic
 import os
-from openai import OpenAI

 def call_llm(prompt: str) -> str:
-    client = OpenAI(api_key=os.environ.get("OPENAI_API_KEY", "your-api-key"))
-    r = client.chat.completions.create(
-        model="gpt-4o",
-        messages=[{"role": "user", "content": prompt}]
+    client = Anthropic(api_key=os.environ.get("ANTHROPIC_API_KEY", "your-anthropic-api-key")) # Default if key not found
+    response = client.messages.create(
+        model="claude-3-haiku-20240307", # Using a smaller model for jokes
+        max_tokens=150, # Jokes don't need to be very long
+        messages=[
+            {"role": "user", "content": prompt}
+        ]
     )
-    return r.choices[0].message.content
+    return response.content[0].text

 if __name__ == "__main__":
-    print("Testing real LLM call:")
-    joke_prompt = "Tell me a short joke about a programmer."
+    print("Testing Anthropic LLM call for jokes:")
+    joke_prompt = "Tell me a one-liner joke about a cat."
     print(f"Prompt: {joke_prompt}")
+    try:
+        response = call_llm(joke_prompt)
+        print(f"Response: {response}")
+    except Exception as e:
+        print(f"Error calling LLM: {e}")
+        print("Please ensure your ANTHROPIC_API_KEY environment variable is set correctly.")
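The `try/except` in the test block only reports failures; if transient API errors are a concern, a small retry wrapper could sit around `call_llm` (a sketch with made-up parameter names, not part of this commit):

```python
import time

def call_with_retry(call, prompt, retries=3, backoff=1.0):
    """Retry a flaky callable with exponential backoff; re-raise on the final failure."""
    for attempt in range(retries):
        try:
            return call(prompt)
        except Exception:
            if attempt == retries - 1:
                raise  # out of attempts: surface the real error
            time.sleep(backoff * (2 ** attempt))  # wait 1s, 2s, 4s, ...
```

Usage would be `call_with_retry(call_llm, prompt)` in `GenerateJokeNode.exec`; for this toy example the commit's simpler error message is arguably enough.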