update fastapi

commit aaf69731ee
parent 953a506c05

README.md (new file)
@@ -0,0 +1,74 @@
# PocketFlow FastAPI Background Job

A minimal example of running PocketFlow workflows as background jobs with real-time progress updates via Server-Sent Events (SSE).

## Features

- Start article generation jobs via REST API
- Real-time granular progress updates via SSE (shows progress for each section)
- Background processing with FastAPI
- Simple three-step workflow: Outline → Content → Style
- Web interface for easy job submission and monitoring

## Getting Started

1. Install dependencies:
```bash
pip install -r requirements.txt
```

2. Set your OpenAI API key:
```bash
export OPENAI_API_KEY=your_api_key_here
```

3. Run the server:
```bash
python main.py
```

## Usage

### Web Interface (Recommended)

1. Open your browser and go to `http://localhost:8000`
2. Enter an article topic (e.g., "AI Safety", "Climate Change")
3. Click "Generate Article"
4. You'll be redirected to a progress page showing real-time updates
5. The final article will appear when generation is complete

### API Usage

#### Start a Job
```bash
curl -X POST "http://localhost:8000/start-job" \
  -H "Content-Type: application/x-www-form-urlencoded" \
  -d "topic=AI Safety"
```

Response:
```json
{"job_id": "123e4567-e89b-12d3-a456-426614174000", "topic": "AI Safety", "status": "started"}
```

#### Monitor Progress
```bash
curl "http://localhost:8000/progress/123e4567-e89b-12d3-a456-426614174000"
```

SSE Stream:
```
data: {"step": "outline", "progress": 33, "data": {"sections": ["Introduction", "Challenges", "Solutions"]}}
data: {"step": "content", "progress": 44, "data": {"section": "Introduction", "completed_sections": 1, "total_sections": 3}}
data: {"step": "content", "progress": 55, "data": {"section": "Challenges", "completed_sections": 2, "total_sections": 3}}
data: {"step": "content", "progress": 66, "data": {"section": "Solutions", "completed_sections": 3, "total_sections": 3}}
data: {"step": "content", "progress": 66, "data": {"draft_length": 1234, "status": "complete"}}
data: {"step": "complete", "progress": 100, "data": {"final_article": "..."}}
```
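
For scripted use, the stream can be consumed outside the browser. Here is a minimal Python client sketch (not one of this example's files; it assumes the server above is running locally and uses the `requests` library):

```python
import json
import requests

# Start a job (the same form-encoded endpoint the web UI posts to)
resp = requests.post("http://localhost:8000/start-job", data={"topic": "AI Safety"})
job_id = resp.json()["job_id"]

# Read the SSE stream; each event arrives as a single "data: {...}" line
with requests.get(f"http://localhost:8000/progress/{job_id}", stream=True) as stream:
    for line in stream.iter_lines():
        if not line.startswith(b"data: "):
            continue  # skip blank separator lines between events
        msg = json.loads(line[len(b"data: "):])
        if msg.get("heartbeat"):
            continue  # the server pings roughly once a second while idle
        print(msg["step"], msg["progress"])
        if msg["step"] == "complete":
            break
```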

## Files

- `main.py` - FastAPI app with background jobs and SSE
- `flow.py` - PocketFlow workflow definition
- `nodes.py` - Workflow nodes (Outline, Content, Style)
- `utils/call_llm.py` - LLM utility function
- `static/index.html` - Main page for starting jobs
- `static/progress.html` - Progress monitoring page with real-time updates
docs/design.md (new file)
@@ -0,0 +1,104 @@
# Design Doc: PocketFlow FastAPI Background Job with SSE Progress

> Please DON'T remove notes for AI

## Requirements

> Notes for AI: Keep it simple and clear.
> If the requirements are abstract, write concrete user stories

**User Story**: As a user, I want to submit an article topic via a web API and receive real-time progress updates while the article is being generated in the background, so I can see the workflow progress without blocking the UI.

**Core Requirements**:
1. Submit article topic via REST API endpoint
2. Start background job for article generation workflow
3. Receive real-time progress updates via Server-Sent Events (SSE)
4. Get final article result when workflow completes
5. Handle multiple concurrent requests

**Technical Requirements**:
- FastAPI web server with REST endpoints
- Background task processing using asyncio
- Server-Sent Events for progress streaming
- Simple web interface to test the functionality

## Flow Design

> Notes for AI:
> 1. Consider the design patterns of agent, map-reduce, rag, and workflow. Apply them if they fit.
> 2. Present a concise, high-level description of the workflow.

### Applicable Design Pattern:

**Workflow Pattern**: Sequential processing of article generation steps with progress reporting at each stage.

### Flow High-level Design:

1. **Generate Outline Node**: Creates a structured outline for the article topic
2. **Write Content Node**: Writes content for each section in the outline
3. **Apply Style Node**: Applies conversational styling to the final article

Each node puts progress updates into an `asyncio.Queue` for SSE streaming, as sketched after the diagram below.

```mermaid
flowchart LR
    outline[Generate Outline] --> content[Write Content]
    content --> styling[Apply Style]
```
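
A sketch of that hand-off, as each node performs it in its `post` step (the exact message shapes are defined in `nodes.py`):

```python
# Inside a node's post() step: report progress without blocking the workflow.
# The queue is created once per job and drained by the SSE endpoint in main.py.
progress_msg = {"step": "outline", "progress": 33, "data": {"sections": sections}}
shared["sse_queue"].put_nowait(progress_msg)
```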

## Utility Functions

> Notes for AI:
> 1. Understand the utility function definition thoroughly by reviewing the doc.
> 2. Include only the necessary utility functions, based on nodes in the flow.

1. **Call LLM** (`utils/call_llm.py`)
   - *Input*: prompt (str)
   - *Output*: response (str)
   - Used by all workflow nodes for LLM tasks

## Node Design

### Shared Store

> Notes for AI: Try to minimize data redundancy

The shared store structure is organized as follows:

```python
shared = {
    "topic": "user-provided-topic",
    "sse_queue": asyncio.Queue(),  # For sending SSE updates
    "sections": ["section1", "section2", "section3"],
    "draft": "combined-section-content",
    "final_article": "styled-final-article"
}
```

### Node Steps

> Notes for AI: Carefully decide whether to use Batch/Async Node/Flow.

1. **Generate Outline Node**
   - *Purpose*: Create a structured outline with 3 main sections using YAML output
   - *Type*: Regular Node (synchronous LLM call)
   - *Steps*:
     - *prep*: Read "topic" from shared store
     - *exec*: Call LLM to generate YAML outline, parse and validate structure
     - *post*: Write "sections" to shared store, put progress update in sse_queue

2. **Write Content Node**
   - *Purpose*: Generate concise content for each outline section
   - *Type*: BatchNode (processes each section independently)
   - *Steps*:
     - *prep*: Read "sections" from shared store (returns list of sections)
     - *exec*: For one section, call LLM to write 100-word content
     - *post*: Combine all section content into "draft", put progress update in sse_queue (see the worked progress example after this list)

3. **Apply Style Node**
   - *Purpose*: Apply conversational, engaging style to the combined content
   - *Type*: Regular Node (single LLM call for styling)
   - *Steps*:
     - *prep*: Read "draft" from shared store
     - *exec*: Call LLM to rewrite in conversational style
     - *post*: Write "final_article" to shared store, put completion update in sse_queue
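
For reference, the per-section progress values reported by the Write Content node follow `33 + (completed * 33) // total` (see `nodes.py`); a quick worked check of that arithmetic:

```python
# With 3 sections, the content step advances 44 -> 55 -> 66
total_sections = 3
for completed in range(1, total_sections + 1):
    print(33 + (completed * 33) // total_sections)  # 44, 55, 66
```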
flow.py (new file)
@@ -0,0 +1,19 @@
from pocketflow import Flow
from nodes import GenerateOutline, WriteContent, ApplyStyle

def create_article_flow():
    """
    Create and configure the article writing workflow
    """
    # Create node instances
    outline_node = GenerateOutline()
    content_node = WriteContent()
    style_node = ApplyStyle()

    # Connect nodes in sequence
    outline_node >> content_node >> style_node

    # Create flow starting with outline node
    article_flow = Flow(start=outline_node)

    return article_flow
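
As a quick sanity check, the flow can also be exercised without the web server. A hypothetical standalone run (assumes `OPENAI_API_KEY` is set; progress messages simply accumulate in the queue):

```python
import asyncio
from flow import create_article_flow

shared = {
    "topic": "AI Safety",
    "sse_queue": asyncio.Queue(),  # nothing drains it in this standalone run
    "sections": [],
    "draft": "",
    "final_article": "",
}
create_article_flow().run(shared)
print(shared["final_article"])
```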
main.py (new file)
@@ -0,0 +1,107 @@
import asyncio
import json
import uuid
from fastapi import FastAPI, BackgroundTasks, Form
from fastapi.responses import StreamingResponse, FileResponse
from fastapi.staticfiles import StaticFiles
from flow import create_article_flow

app = FastAPI()

# Mount static files
app.mount("/static", StaticFiles(directory="static"), name="static")

# Store active jobs and their SSE queues
active_jobs = {}

def run_article_workflow(job_id: str, topic: str):
    """Run the article workflow in background"""
    try:
        # Create shared store; the SSE queue was registered in start_job
        shared = {
            "topic": topic,
            "sse_queue": active_jobs[job_id],
            "sections": [],
            "draft": "",
            "final_article": ""
        }

        # Run the workflow
        flow = create_article_flow()
        flow.run(shared)

    except Exception as e:
        # Send error message
        error_msg = {"step": "error", "progress": 0, "data": {"error": str(e)}}
        if job_id in active_jobs:
            active_jobs[job_id].put_nowait(error_msg)

@app.post("/start-job")
async def start_job(background_tasks: BackgroundTasks, topic: str = Form(...)):
    """Start a new article generation job"""
    job_id = str(uuid.uuid4())

    # Register the SSE queue before scheduling the task so a client can
    # connect to /progress immediately without hitting "Job not found"
    active_jobs[job_id] = asyncio.Queue()
    background_tasks.add_task(run_article_workflow, job_id, topic)

    return {"job_id": job_id, "topic": topic, "status": "started"}

@app.get("/progress/{job_id}")
async def get_progress(job_id: str):
    """Stream progress updates via SSE"""

    async def event_stream():
        if job_id not in active_jobs:
            yield f"data: {json.dumps({'error': 'Job not found'})}\n\n"
            return

        sse_queue = active_jobs[job_id]

        try:
            while True:
                # Wait for next progress update
                try:
                    # Use asyncio.wait_for to avoid blocking forever
                    progress_msg = await asyncio.wait_for(sse_queue.get(), timeout=1.0)
                    yield f"data: {json.dumps(progress_msg)}\n\n"

                    # If the job finished (or failed), clean up and exit
                    if progress_msg.get("step") in ("complete", "error"):
                        del active_jobs[job_id]
                        break

                except asyncio.TimeoutError:
                    # Send heartbeat to keep connection alive
                    yield f"data: {json.dumps({'heartbeat': True})}\n\n"

        except Exception as e:
            yield f"data: {json.dumps({'error': str(e)})}\n\n"

    return StreamingResponse(
        event_stream(),
        media_type="text/event-stream",
        headers={
            "Cache-Control": "no-cache",
            "Connection": "keep-alive"
        }
    )

@app.get("/")
async def get_index():
    """Serve the main page"""
    return FileResponse("static/index.html")

@app.get("/progress.html")
async def get_progress_page():
    """Serve the progress page"""
    return FileResponse("static/progress.html")

if __name__ == "__main__":
    import uvicorn
    uvicorn.run(app, host="0.0.0.0", port=8000)
nodes.py (new file)
@@ -0,0 +1,109 @@
import yaml
from pocketflow import Node, BatchNode
from utils.call_llm import call_llm

class GenerateOutline(Node):
    def prep(self, shared):
        return shared["topic"]

    def exec(self, topic):
        prompt = f"""
Create a simple outline for an article about {topic}.
Include at most 3 main sections (no subsections).

Output the sections in YAML format as shown below:

```yaml
sections:
- First section title
- Second section title
- Third section title
```"""
        response = call_llm(prompt)
        # Expects the model to return a fenced ```yaml block, per the prompt
        yaml_str = response.split("```yaml")[1].split("```")[0].strip()
        structured_result = yaml.safe_load(yaml_str)
        return structured_result

    def post(self, shared, prep_res, exec_res):
        sections = exec_res["sections"]
        shared["sections"] = sections

        # Send progress update via SSE queue
        progress_msg = {"step": "outline", "progress": 33, "data": {"sections": sections}}
        shared["sse_queue"].put_nowait(progress_msg)

        return "default"

class WriteContent(BatchNode):
    def prep(self, shared):
        # Store sections and sse_queue on the instance so exec (called once
        # per section) can compute progress and push updates
        self.sections = shared.get("sections", [])
        self.sse_queue = shared["sse_queue"]
        return self.sections

    def exec(self, section):
        prompt = f"""
Write a short paragraph (MAXIMUM 100 WORDS) about this section:

{section}

Requirements:
- Explain the idea in simple, easy-to-understand terms
- Use everyday language, avoiding jargon
- Keep it very concise (no more than 100 words)
- Include one brief example or analogy
"""
        content = call_llm(prompt)

        # Send progress update for this section (assumes unique section titles)
        current_section_index = self.sections.index(section) if section in self.sections else 0
        total_sections = len(self.sections)

        # Progress runs from 33% (after outline) to 66% (before styling);
        # each section contributes (66-33)/total_sections = 33/total_sections percent
        section_progress = 33 + ((current_section_index + 1) * 33 // total_sections)

        progress_msg = {
            "step": "content",
            "progress": section_progress,
            "data": {
                "section": section,
                "completed_sections": current_section_index + 1,
                "total_sections": total_sections
            }
        }
        self.sse_queue.put_nowait(progress_msg)

        return f"## {section}\n\n{content}\n"

    def post(self, shared, prep_res, exec_res_list):
        draft = "\n".join(exec_res_list)
        shared["draft"] = draft

        # Report that the whole content step is done; the design doc and the
        # progress page both expect this draft-complete message
        progress_msg = {"step": "content", "progress": 66, "data": {"draft_length": len(draft), "status": "complete"}}
        shared["sse_queue"].put_nowait(progress_msg)

        return "default"

class ApplyStyle(Node):
    def prep(self, shared):
        return shared["draft"]

    def exec(self, draft):
        prompt = f"""
Rewrite the following draft in a conversational, engaging style:

{draft}

Make it:
- Conversational and warm in tone
- Include rhetorical questions that engage the reader
- Add analogies and metaphors where appropriate
- Include a strong opening and conclusion
"""
        return call_llm(prompt)

    def post(self, shared, prep_res, exec_res):
        shared["final_article"] = exec_res

        # Send completion update via SSE queue
        progress_msg = {"step": "complete", "progress": 100, "data": {"final_article": exec_res}}
        shared["sse_queue"].put_nowait(progress_msg)

        return "default"
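
Individual nodes can be smoke-tested the same way; a hypothetical single-node check (assumes PocketFlow nodes expose `run(shared)`, as the flow does):

```python
import asyncio
from nodes import GenerateOutline

shared = {"topic": "AI Safety", "sse_queue": asyncio.Queue()}
GenerateOutline().run(shared)
print(shared["sections"])  # e.g. ["Introduction", "Challenges", "Solutions"]
```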
requirements.txt (new file)
@@ -0,0 +1,5 @@
fastapi
uvicorn
openai
pyyaml
python-multipart
static/index.html (new file)
@@ -0,0 +1,124 @@
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>PocketFlow Article Generator</title>
    <style>
        body {
            font-family: Arial, sans-serif;
            max-width: 600px;
            margin: 50px auto;
            padding: 20px;
            background-color: #f5f5f5;
        }
        .container {
            background: white;
            padding: 30px;
            border-radius: 10px;
            box-shadow: 0 2px 10px rgba(0,0,0,0.1);
        }
        h1 {
            color: #333;
            text-align: center;
            margin-bottom: 30px;
        }
        .form-group {
            margin-bottom: 20px;
        }
        label {
            display: block;
            margin-bottom: 5px;
            font-weight: bold;
            color: #555;
        }
        input[type="text"] {
            width: 100%;
            padding: 12px;
            border: 2px solid #ddd;
            border-radius: 5px;
            font-size: 16px;
            box-sizing: border-box;
        }
        input[type="text"]:focus {
            border-color: #4CAF50;
            outline: none;
        }
        button {
            background-color: #4CAF50;
            color: white;
            padding: 12px 30px;
            border: none;
            border-radius: 5px;
            cursor: pointer;
            font-size: 16px;
            width: 100%;
        }
        button:hover {
            background-color: #45a049;
        }
        button:disabled {
            background-color: #cccccc;
            cursor: not-allowed;
        }
        .loading {
            text-align: center;
            color: #666;
            margin-top: 20px;
        }
    </style>
</head>
<body>
    <div class="container">
        <h1>🚀 Article Generator</h1>
        <form id="jobForm">
            <div class="form-group">
                <label for="topic">Article Topic:</label>
                <input type="text" id="topic" name="topic" placeholder="e.g., AI Safety, Climate Change, Space Exploration" required>
            </div>
            <button type="submit" id="submitBtn">Generate Article</button>
        </form>
        <div id="loading" class="loading" style="display: none;">
            Starting your article generation...
        </div>
    </div>

    <script>
        document.getElementById('jobForm').addEventListener('submit', async function(e) {
            e.preventDefault();

            const topic = document.getElementById('topic').value.trim();
            if (!topic) return;

            // Show loading state
            document.getElementById('submitBtn').disabled = true;
            document.getElementById('loading').style.display = 'block';

            try {
                const response = await fetch('/start-job', {
                    method: 'POST',
                    headers: {
                        'Content-Type': 'application/x-www-form-urlencoded',
                    },
                    body: `topic=${encodeURIComponent(topic)}`
                });

                const result = await response.json();

                if (result.job_id) {
                    // Redirect to progress page
                    window.location.href = `/progress.html?job_id=${result.job_id}&topic=${encodeURIComponent(topic)}`;
                } else {
                    alert('Failed to start job');
                    document.getElementById('submitBtn').disabled = false;
                    document.getElementById('loading').style.display = 'none';
                }
            } catch (error) {
                alert('Error starting job: ' + error.message);
                document.getElementById('submitBtn').disabled = false;
                document.getElementById('loading').style.display = 'none';
            }
        });
    </script>
</body>
</html>
static/progress.html (new file)
@@ -0,0 +1,223 @@
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Article Generation Progress</title>
    <style>
        body {
            font-family: Arial, sans-serif;
            max-width: 800px;
            margin: 20px auto;
            padding: 20px;
            background-color: #f5f5f5;
        }
        .container {
            background: white;
            padding: 30px;
            border-radius: 10px;
            box-shadow: 0 2px 10px rgba(0,0,0,0.1);
        }
        h1 {
            color: #333;
            text-align: center;
            margin-bottom: 30px;
        }
        .topic {
            text-align: center;
            color: #666;
            margin-bottom: 30px;
            font-style: italic;
        }
        .progress-container {
            margin-bottom: 30px;
        }
        .progress-bar {
            width: 100%;
            height: 20px;
            background-color: #f0f0f0;
            border-radius: 10px;
            overflow: hidden;
        }
        .progress-fill {
            height: 100%;
            background-color: #4CAF50;
            width: 0%;
            transition: width 0.3s ease;
        }
        .progress-text {
            text-align: center;
            margin-top: 10px;
            font-weight: bold;
            color: #333;
        }
        .step-info {
            background-color: #f8f9fa;
            padding: 15px;
            border-radius: 5px;
            margin-bottom: 20px;
            border-left: 4px solid #4CAF50;
        }
        .article-result {
            background-color: #f8f9fa;
            padding: 20px;
            border-radius: 5px;
            margin-top: 20px;
            white-space: pre-wrap;
            line-height: 1.6;
        }
        .error {
            background-color: #ffebee;
            color: #c62828;
            padding: 15px;
            border-radius: 5px;
            border-left: 4px solid #f44336;
        }
        .back-button {
            background-color: #2196F3;
            color: white;
            padding: 10px 20px;
            border: none;
            border-radius: 5px;
            cursor: pointer;
            text-decoration: none;
            display: inline-block;
            margin-top: 20px;
        }
        .back-button:hover {
            background-color: #1976D2;
        }
        .loading-dots {
            display: inline-block;
        }
        .loading-dots:after {
            content: '';
            animation: dots 1.5s steps(5, end) infinite;
        }
        @keyframes dots {
            0%, 20% { content: ''; }
            40% { content: '.'; }
            60% { content: '..'; }
            80%, 100% { content: '...'; }
        }
    </style>
</head>
<body>
    <div class="container">
        <h1>📝 Generating Your Article</h1>
        <div class="topic" id="topicDisplay"></div>

        <div class="progress-container">
            <div class="progress-bar">
                <div class="progress-fill" id="progressFill"></div>
            </div>
            <div class="progress-text" id="progressText">Starting<span class="loading-dots"></span></div>
        </div>

        <div id="stepInfo" class="step-info" style="display: none;"></div>
        <div id="errorInfo" class="error" style="display: none;"></div>
        <div id="articleResult" class="article-result" style="display: none;"></div>

        <a href="/" class="back-button">← Generate Another Article</a>
    </div>

    <script>
        // Get job_id and topic from URL parameters
        const urlParams = new URLSearchParams(window.location.search);
        const jobId = urlParams.get('job_id');
        const topic = urlParams.get('topic');

        if (!jobId) {
            document.getElementById('errorInfo').style.display = 'block';
            document.getElementById('errorInfo').textContent = 'No job ID provided';
        } else {
            document.getElementById('topicDisplay').textContent = `Topic: ${topic || 'Unknown'}`;
            startProgressMonitoring(jobId);
        }

        function startProgressMonitoring(jobId) {
            const eventSource = new EventSource(`/progress/${jobId}`);

            eventSource.onmessage = function(event) {
                try {
                    const data = JSON.parse(event.data);

                    if (data.error) {
                        showError(data.error);
                        eventSource.close();
                        return;
                    }

                    if (data.heartbeat) {
                        return; // Ignore heartbeat messages
                    }

                    if (data.step === 'error') {
                        // Workflow errors arrive as {"step": "error", "data": {"error": ...}}
                        showError(data.data.error);
                        eventSource.close();
                        return;
                    }

                    updateProgress(data);

                    if (data.step === 'complete') {
                        showFinalResult(data.data.final_article);
                        eventSource.close();
                    }

                } catch (error) {
                    console.error('Error parsing SSE data:', error);
                }
            };

            eventSource.onerror = function(event) {
                console.error('SSE connection error:', event);
                showError('Connection lost. Please refresh the page.');
                eventSource.close();
            };
        }

        function updateProgress(data) {
            const progressFill = document.getElementById('progressFill');
            const progressText = document.getElementById('progressText');
            const stepInfo = document.getElementById('stepInfo');

            // Update progress bar
            progressFill.style.width = data.progress + '%';

            // Update progress text and step info
            switch (data.step) {
                case 'outline':
                    progressText.textContent = 'Creating outline... (33%)';
                    stepInfo.style.display = 'block';
                    stepInfo.innerHTML = `<strong>Step 1:</strong> Generated outline with sections: ${data.data.sections.join(', ')}`;
                    break;
                case 'content':
                    if (data.data.section) {
                        // Individual section progress
                        progressText.textContent = `Writing content... (${data.progress}%)`;
                        stepInfo.innerHTML = `<strong>Step 2:</strong> Completed section "${data.data.section}" (${data.data.completed_sections}/${data.data.total_sections})`;
                    } else {
                        // Final content completion
                        progressText.textContent = 'Writing content... (66%)';
                        stepInfo.innerHTML = `<strong>Step 2:</strong> Generated ${data.data.draft_length} characters of content`;
                    }
                    break;
                case 'complete':
                    progressText.textContent = 'Complete! (100%)';
                    stepInfo.innerHTML = `<strong>Step 3:</strong> Applied conversational styling - Article ready!`;
                    break;
            }
        }

        function showFinalResult(article) {
            const resultDiv = document.getElementById('articleResult');
            resultDiv.style.display = 'block';
            resultDiv.textContent = article;
        }

        function showError(errorMessage) {
            const errorDiv = document.getElementById('errorInfo');
            errorDiv.style.display = 'block';
            errorDiv.textContent = `Error: ${errorMessage}`;

            const progressText = document.getElementById('progressText');
            progressText.textContent = 'Failed';
        }
    </script>
</body>
</html>
utils/__init__.py (new file)
@@ -0,0 +1 @@
# Utils package for FastAPI Background Job Interface
utils/call_llm.py (new file)
@@ -0,0 +1,13 @@
import os
from openai import OpenAI

def call_llm(prompt):
    client = OpenAI(api_key=os.environ.get("OPENAI_API_KEY", "your-api-key"))
    r = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}]
    )
    return r.choices[0].message.content

if __name__ == "__main__":
    print(call_llm("Tell me a short joke"))
@@ -2,6 +2,12 @@
 Real-time chat interface with streaming LLM responses using PocketFlow, FastAPI, and WebSocket.

+<p align="center">
+  <img
+    src="./assets/banner.png" width="800"
+  />
+</p>
+
 ## Features

 - **Real-time Streaming**: See AI responses typed out in real-time as the LLM generates them
assets/banner.png (new, 668 KiB): binary file not shown.
@@ -55,7 +55,7 @@ Here's what each node does:
 - [`main.py`](./main.py): Main entry point for running the article workflow
 - [`flow.py`](./flow.py): Defines the flow that connects the nodes
 - [`nodes.py`](./nodes.py): Contains the node classes for each step in the workflow
-- [`utils.py`](./utils.py): Utility functions including the LLM wrapper
+- [`utils/call_llm.py`](./utils/call_llm.py): LLM utility function
 - [`requirements.txt`](./requirements.txt): Lists the required dependencies

 ## Example Output
@@ -1,6 +1,6 @@
 import re
 from pocketflow import Node, BatchNode
-from utils import call_llm
+from utils.call_llm import call_llm
 import yaml

 class GenerateOutline(Node):
@@ -49,16 +49,13 @@ sections:

         return "default"

-class WriteSimpleContent(Node):
+class WriteSimpleContent(BatchNode):
     def prep(self, shared):
-        # Get the list of sections to process
-        return shared.get("sections", [])
+        # Get the list of sections to process and store for progress tracking
+        self.sections = shared.get("sections", [])
+        return self.sections

-    def exec(self, sections):
-        all_sections_content = []
-        section_contents = {}
-
-        for section in sections:
+    def exec(self, section):
         prompt = f"""
 Write a short paragraph (MAXIMUM 100 WORDS) about this section:

@@ -71,13 +68,24 @@ Requirements:
 - Include one brief example or analogy
 """
         content = call_llm(prompt)

+        # Show progress for this section
+        current_section_index = self.sections.index(section) if section in self.sections else 0
+        total_sections = len(self.sections)
+        print(f"✓ Completed section {current_section_index + 1}/{total_sections}: {section}")
+
+        return section, content
+
+    def post(self, shared, prep_res, exec_res_list):
+        # exec_res_list contains [(section, content), (section, content), ...]
+        section_contents = {}
+        all_sections_content = []
+
+        for section, content in exec_res_list:
             section_contents[section] = content
             all_sections_content.append(f"## {section}\n\n{content}\n")

-        return sections, section_contents, "\n".join(all_sections_content)
-
-    def post(self, shared, prep_res, exec_res):
-        sections, section_contents, draft = exec_res
+        draft = "\n".join(all_sections_content)

         # Store the section contents and draft
         shared["section_contents"] = section_contents