add local HITL example

This commit is contained in:
parent 55bd7c6819
commit d13180f835
@@ -77,6 +77,7 @@ From there, it's easy to implement popular design patterns like ([Multi-](https:
 | [Chat Guardrail](https://github.com/The-Pocket/PocketFlow/tree/main/cookbook/pocketflow-chat-guardrail) | ☆☆☆ <sup>*Dummy*</sup> | A travel advisor chatbot that only processes travel-related queries |
 | [Majority Vote](https://github.com/The-Pocket/PocketFlow/tree/main/cookbook/pocketflow-majority-vote) | ☆☆☆ <sup>*Dummy*</sup> | Improve reasoning accuracy by aggregating multiple solution attempts |
 | [Map-Reduce](https://github.com/The-Pocket/PocketFlow/tree/main/cookbook/pocketflow-map-reduce) | ☆☆☆ <sup>*Dummy*</sup> | Batch resume qualification using map-reduce pattern |
+| [Cmd HITL](https://github.com/The-Pocket/PocketFlow/tree/main/cookbook/pocketflow-cmd-hitl) | ☆☆☆ <sup>*Dummy*</sup> | A command-line joke generator with human-in-the-loop feedback |
 | [Multi-Agent](https://github.com/The-Pocket/PocketFlow/tree/main/cookbook/pocketflow-multi-agent) | ★☆☆ <sup>*Beginner*</sup> | A Taboo word game for async communication between 2 agents |
 | [Supervisor](https://github.com/The-Pocket/PocketFlow/tree/main/cookbook/pocketflow-supervisor) | ★☆☆ <sup>*Beginner*</sup> | Research agent is getting unreliable... Let's build a supervision process |
 | [Parallel](https://github.com/The-Pocket/PocketFlow/tree/main/cookbook/pocketflow-parallel-batch) | ★☆☆ <sup>*Beginner*</sup> | A parallel execution demo that shows 3x speedup |
@@ -0,0 +1,80 @@
+# PocketFlow Command-Line Joke Generator (Human-in-the-Loop Example)
+
+A simple, interactive command-line application that generates jokes based on user-provided topics and direct human feedback. It serves as a clear example of a Human-in-the-Loop (HITL) workflow orchestrated by PocketFlow.
+
+## Features
+
+- **Interactive Joke Generation**: Ask for jokes on any topic.
+- **Human-in-the-Loop Feedback**: Dislike a joke? Your feedback directly influences the next generation attempt.
+- **Minimalist Design**: A straightforward example of using PocketFlow for HITL tasks.
+- **Powered by LLMs**: Uses Anthropic Claude via an API call for joke generation.
+
+## Getting Started
+
+This project is part of the PocketFlow cookbook examples. It's assumed you have already cloned the [PocketFlow repository](https://github.com/the-pocket/PocketFlow) and are in the `cookbook/pocketflow-cmd-hitl` directory.
+
+1. **Install required dependencies**:
+
+   ```bash
+   pip install -r requirements.txt
+   ```
+
+2. **Set up your Anthropic API key**:
+
+   The application uses Anthropic Claude to generate jokes, so you need to set your API key as an environment variable.
+
+   ```bash
+   export ANTHROPIC_API_KEY="your-anthropic-api-key-here"
+   ```
+
+   You can check that the `call_llm.py` utility is working by running it directly:
+
+   ```bash
+   python utils/call_llm.py
+   ```
+
+3. **Run the Joke Generator**:
+
+   ```bash
+   python main.py
+   ```
+
+## How It Works
+
+The system uses a simple PocketFlow workflow:
+
+```mermaid
+flowchart TD
+    GetTopic[GetTopicNode] --> GenerateJoke[GenerateJokeNode]
+    GenerateJoke --> GetFeedback[GetFeedbackNode]
+    GetFeedback -- "Approve" --> Z((End))
+    GetFeedback -- "Disapprove" --> GenerateJoke
+```
+
+1. **GetTopicNode**: Prompts the user to enter a topic for the joke.
+2. **GenerateJokeNode**: Sends the topic (and any previously disliked jokes as context) to an LLM to generate a new joke.
+3. **GetFeedbackNode**: Shows the joke to the user and asks if they liked it.
+   * If **yes** (approved), the application ends.
+   * If **no** (disapproved), the disliked joke is recorded, and the flow loops back to `GenerateJokeNode` to try again.
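The approve/disapprove loop described above can be sketched without PocketFlow as a plain Python loop. This is a minimal sketch under stated assumptions: the joke generator is a deterministic stub and user feedback is scripted instead of read from `input()`; none of these names come from the PocketFlow API.

```python
def generate_joke(topic, disliked):
    # Stand-in for the LLM call: the joke number tracks how many were rejected.
    return f"Joke #{len(disliked) + 1} about {topic}"

def run_hitl(topic, feedback_script):
    """Loop until the 'user' approves; disliked jokes accumulate as context."""
    disliked = []
    for liked in feedback_script:      # scripted stand-in for input()
        joke = generate_joke(topic, disliked)
        if liked:
            return joke, disliked      # "Approve" ends the flow
        disliked.append(joke)          # "Disapprove" loops back to generation
    raise RuntimeError("ran out of scripted feedback")

# Two rejections, then an approval: the third attempt is accepted.
final, rejected = run_hitl("cats", [False, False, True])
```

The essential HITL idea is visible here: each rejection enriches the context for the next attempt, rather than simply retrying blindly.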
+## Sample Output
+
+Here's an example of an interaction with the Joke Generator:
+
+```
+Welcome to the Command-Line Joke Generator!
+What topic would you like a joke about? Pocket Flow: 100-line LLM framework
+
+Joke: Pocket Flow: Finally, an LLM framework that fits in your pocket! Too bad your model still needs a data center.
+Did you like this joke? (yes/no): no
+Okay, let me try another one.
+
+Joke: Pocket Flow: A 100-line LLM framework where 99 lines are imports and the last line is `print("TODO: implement intelligence")`.
+Did you like this joke? (yes/no): yes
+Great! Glad you liked it.
+
+Thanks for using the Joke Generator!
+```
+
+## Files
+
+- [`main.py`](./main.py): Entry point for the application.
+- [`flow.py`](./flow.py): Defines the PocketFlow graph and node connections.
+- [`nodes.py`](./nodes.py): Contains the definitions for `GetTopicNode`, `GenerateJokeNode`, and `GetFeedbackNode`.
+- [`utils/call_llm.py`](./utils/call_llm.py): Utility function to interact with the LLM (Anthropic Claude).
+- [`requirements.txt`](./requirements.txt): Lists project dependencies.
+- [`docs/design.md`](./docs/design.md): The design document for this application.
@@ -1,26 +1,15 @@
 from pocketflow import Flow
-from .nodes import GetTopicNode, GenerateJokeNode, GetFeedbackNode
+from nodes import GetTopicNode, GenerateJokeNode, GetFeedbackNode
 
 
 def create_joke_flow() -> Flow:
     """Creates and returns the joke generation flow."""
-    # Create nodes
     get_topic_node = GetTopicNode()
     generate_joke_node = GenerateJokeNode()
     get_feedback_node = GetFeedbackNode()
 
-    # Connect nodes
-    # GetTopicNode -> GenerateJokeNode (default action)
     get_topic_node >> generate_joke_node
-
-    # GenerateJokeNode -> GetFeedbackNode (default action)
     generate_joke_node >> get_feedback_node
+    get_feedback_node - "Disapprove" >> generate_joke_node
 
-    # GetFeedbackNode actions:
-    # "Approve" -> Ends the flow (no further connection)
-    # "Disapprove" -> GenerateJokeNode
-    # get_feedback_node.connect_to(generate_joke_node, action="Disapprove")
-    get_feedback_node - "Disapprove" >> generate_joke_node  # Alternative syntax
-
-    # Create flow starting with the input node
     joke_flow = Flow(start=get_topic_node)
     return joke_flow
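The `>>` and `- "Action" >>` chaining used in `flow.py` relies on Python operator overloading. The toy classes below illustrate how such a syntax can be built; they are an illustrative sketch, not PocketFlow's actual implementation, and the names `Node` and `_Edge` here are hypothetical.

```python
class Node:
    """Toy node that records successors per action, mimicking the chaining syntax."""
    def __init__(self, name):
        self.name = name
        self.successors = {}            # maps action name -> next node

    def __rshift__(self, other):        # node_a >> node_b: default transition
        self.successors["default"] = other
        return other

    def __sub__(self, action):          # node - "Action" returns a pending edge
        return _Edge(self, action)


class _Edge:
    """Intermediate object so (node - "Action") >> target can bind the edge."""
    def __init__(self, src, action):
        self.src, self.action = src, action

    def __rshift__(self, other):
        self.src.successors[self.action] = other
        return other


a, b, c = Node("get_topic"), Node("generate"), Node("feedback")
a >> b
b >> c
c - "Disapprove" >> b                   # loop back on disapproval
```

Note that this works because binary `-` binds tighter than `>>` in Python, so `c - "Disapprove" >> b` parses as `(c - "Disapprove") >> b`.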
@@ -1,10 +1,9 @@
-from .flow import create_joke_flow
+from flow import create_joke_flow
 
 
 def main():
     """Main function to run the joke generator application."""
     print("Welcome to the Command-Line Joke Generator!")
 
-    # Initialize the shared store as per the design
     shared = {
         "topic": None,
         "current_joke": None,
@@ -12,10 +11,7 @@ def main():
         "user_feedback": None
     }
 
-    # Create the flow
     joke_flow = create_joke_flow()
 
-    # Run the flow
     joke_flow.run(shared)
 
     print("\nThanks for using the Joke Generator!")
@@ -1,5 +1,5 @@
 from pocketflow import Node
-from .utils.call_llm import call_llm
+from utils.call_llm import call_llm
 
 
 class GetTopicNode(Node):
     """Prompts the user to enter the topic for the joke."""
@@ -8,36 +8,28 @@ class GetTopicNode(Node):
 
     def post(self, shared, _prep_res, exec_res):
         shared["topic"] = exec_res
-        # No specific action needed, default will move to next connected node
-        return "default"
 
 
 class GenerateJokeNode(Node):
     """Generates a joke based on the topic and any previous feedback."""
     def prep(self, shared):
-        topic = shared.get("topic", "anything")  # Default to "anything" if no topic
+        topic = shared.get("topic", "anything")
         disliked_jokes = shared.get("disliked_jokes", [])
 
-        prompt = f"Please generate a joke about {topic}."
+        prompt = f"Please generate a one-liner joke about: {topic}. Make it short and funny."
         if disliked_jokes:
             disliked_str = "; ".join(disliked_jokes)
             prompt = f"The user did not like the following jokes: [{disliked_str}]. Please generate a new, different joke about {topic}."
         return prompt
 
     def exec(self, prep_res):
-        return call_llm(prep_res)  # prep_res is the prompt
+        return call_llm(prep_res)
 
     def post(self, shared, _prep_res, exec_res):
         shared["current_joke"] = exec_res
         print(f"\nJoke: {exec_res}")
-        return "default"
 
 
 class GetFeedbackNode(Node):
     """Presents the joke to the user and asks for approval."""
-    # prep is not strictly needed as current_joke is printed by GenerateJokeNode
-    # but we can read it if we want to display it again here for example.
-    # def prep(self, shared):
-    #     return shared.get("current_joke")
-
     def exec(self, _prep_res):
         while True:
             feedback = input("Did you like this joke? (yes/no): ").strip().lower()
@@ -49,8 +41,8 @@ class GetFeedbackNode(Node):
         if exec_res in ["yes", "y"]:
             shared["user_feedback"] = "approve"
             print("Great! Glad you liked it.")
-            return "Approve"  # Action to end the flow
-        else:  # "no" or "n"
+            return "Approve"
+        else:
             shared["user_feedback"] = "disapprove"
             current_joke = shared.get("current_joke")
             if current_joke:
@@ -58,4 +50,4 @@ class GetFeedbackNode(Node):
                 shared["disliked_jokes"] = []
             shared["disliked_jokes"].append(current_joke)
             print("Okay, let me try another one.")
-            return "Disapprove"  # Action to loop back to GenerateJokeNode
+            return "Disapprove"
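The retry prompt built in `GenerateJokeNode.prep` is easy to factor out and check in isolation. The helper below mirrors the prompt logic from the diff above; `build_joke_prompt` itself is a hypothetical standalone function, not part of the repository.

```python
def build_joke_prompt(topic, disliked_jokes=()):
    """Mirror of GenerateJokeNode.prep: retry prompts carry the rejected jokes."""
    if disliked_jokes:
        disliked_str = "; ".join(disliked_jokes)
        return (f"The user did not like the following jokes: [{disliked_str}]. "
                f"Please generate a new, different joke about {topic}.")
    return f"Please generate a one-liner joke about: {topic}. Make it short and funny."

first_prompt = build_joke_prompt("cats")
retry_prompt = build_joke_prompt(
    "cats", ["Why did the cat sit on the laptop? For warmth."]
)
```

Feeding the rejected jokes back into the prompt is what turns a plain retry into a human-in-the-loop refinement: the model is told explicitly what to avoid.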
@@ -1,7 +1,2 @@
-# Add any project-specific dependencies here.
-# For example:
-# openai
-# anthropic
-
 pocketflow>=0.0.1
-openai>=1.0.0
+anthropic>=0.20.0  # Or a recent version
@@ -1,17 +1,24 @@
+from anthropic import Anthropic
 import os
-from openai import OpenAI
 
 
 def call_llm(prompt: str) -> str:
-    client = OpenAI(api_key=os.environ.get("OPENAI_API_KEY", "your-api-key"))
-    r = client.chat.completions.create(
-        model="gpt-4o",
-        messages=[{"role": "user", "content": prompt}]
+    client = Anthropic(api_key=os.environ.get("ANTHROPIC_API_KEY", "your-anthropic-api-key"))  # Default if key not found
+    response = client.messages.create(
+        model="claude-3-haiku-20240307",  # Using a smaller model for jokes
+        max_tokens=150,  # Jokes don't need to be very long
+        messages=[
+            {"role": "user", "content": prompt}
+        ]
     )
-    return r.choices[0].message.content
+    return response.content[0].text
 
 
 if __name__ == "__main__":
-    print("Testing real LLM call:")
-    joke_prompt = "Tell me a short joke about a programmer."
+    print("Testing Anthropic LLM call for jokes:")
+    joke_prompt = "Tell me a one-liner joke about a cat."
     print(f"Prompt: {joke_prompt}")
-    response = call_llm(joke_prompt)
-    print(f"Response: {response}")
+    try:
+        response = call_llm(joke_prompt)
+        print(f"Response: {response}")
+    except Exception as e:
+        print(f"Error calling LLM: {e}")
+        print("Please ensure your ANTHROPIC_API_KEY environment variable is set correctly.")
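Since `call_llm` hits a paid API, the generation logic can be exercised offline by injecting a fake in its place. This is a testing sketch under stated assumptions: `fake_call_llm` and `generate` are hypothetical helpers that reproduce the prompt logic from `nodes.py`, not code from the repository.

```python
def fake_call_llm(prompt: str) -> str:
    """Deterministic stand-in for the Anthropic call, keyed on prompt contents."""
    if "did not like" in prompt:
        return "Second attempt: a different joke."
    return "First attempt: a joke."

def generate(topic, disliked, llm=fake_call_llm):
    # Same prompt logic as GenerateJokeNode, with the LLM dependency injected.
    if disliked:
        prompt = (f"The user did not like the following jokes: [{'; '.join(disliked)}]. "
                  f"Please generate a new, different joke about {topic}.")
    else:
        prompt = f"Please generate a one-liner joke about: {topic}. Make it short and funny."
    return llm(prompt)

first = generate("cats", [])
second = generate("cats", [first])
```

Passing the LLM as a default argument keeps production code unchanged (`generate(topic, disliked)` with the real `call_llm`) while letting tests run without network access or an API key.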