# PocketFlow FastAPI WebSocket Chat

A real-time chat interface with streaming LLM responses, built with PocketFlow, FastAPI, and WebSockets.
## Features
- Real-time Streaming: See AI responses typed out in real-time as the LLM generates them
- Conversation Memory: Maintains chat history across messages
- Modern UI: Clean, responsive chat interface with gradient design
- WebSocket Connection: Persistent connection for instant communication
- PocketFlow Integration: Uses PocketFlow `AsyncNode` and `AsyncFlow` for streaming
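The streaming flow above can be sketched without the framework: an async generator stands in for the LLM token stream, and a `send` callable stands in for FastAPI's `websocket.send_text`. The names `fake_llm_stream`, `relay_stream`, and `demo` are illustrative, not the project's actual code.

```python
import asyncio
from typing import Awaitable, Callable

async def fake_llm_stream(prompt: str):
    # Stand-in for the OpenAI streaming call in utils/stream_llm.py.
    for token in ["Hello", ", ", "world", "!"]:
        await asyncio.sleep(0)  # yield control, as a real network read would
        yield token

async def relay_stream(prompt: str, send: Callable[[str], Awaitable[None]]) -> str:
    # Forward each token to the client as it arrives, so the UI can render
    # the reply incrementally; return the full text for the chat history.
    parts = []
    async for token in fake_llm_stream(prompt):
        await send(token)
        parts.append(token)
    return "".join(parts)

async def demo(prompt: str):
    sent = []
    async def send(text: str) -> None:
        sent.append(text)  # a real endpoint would call websocket.send_text(text)
    reply = await relay_stream(prompt, send)
    return reply, sent

print(asyncio.run(demo("hi")))
```

The same pattern scales to the real app: swap `fake_llm_stream` for the OpenAI stream and `send` for the WebSocket send.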
## How to Run

1. Set your OpenAI API key:

   ```bash
   export OPENAI_API_KEY="your-openai-api-key"
   ```

2. Install dependencies:

   ```bash
   pip install -r requirements.txt
   ```

3. Run the application:

   ```bash
   python main.py
   ```

4. Access the web UI: open http://localhost:8000 in your browser.
## Usage

- Type a message: Enter your message in the input field
- Send: Press Enter or click the Send button
- Watch streaming: See the AI response appear in real-time
- Continue chatting: Conversation history is maintained automatically
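Conversation memory can be modeled as a growing list of role/content messages, the format OpenAI's chat API expects. This is a minimal sketch; the `add_turn` helper is hypothetical, not the project's exact code.

```python
def add_turn(messages: list, user_text: str, assistant_text: str) -> list:
    # Append one user/assistant exchange to the running history.
    messages.append({"role": "user", "content": user_text})
    messages.append({"role": "assistant", "content": assistant_text})
    return messages

history = []
add_turn(history, "Hi!", "Hello! How can I help?")
add_turn(history, "What's PocketFlow?", "A minimalist LLM workflow framework.")
print(len(history))  # two turns -> four messages
```

Passing the whole `history` list back to the model on each turn is what lets the assistant refer to earlier messages.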
## Files

- `main.py`: FastAPI application with WebSocket endpoint
- `nodes.py`: PocketFlow `StreamingChatNode` definition
- `flow.py`: PocketFlow `AsyncFlow` for chat processing
- `utils/stream_llm.py`: OpenAI streaming utility
- `static/index.html`: Modern chat interface
- `requirements.txt`: Project dependencies
- `docs/design.md`: System design documentation
- `README.md`: This file
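The core of a streaming utility like `utils/stream_llm.py` is accumulating per-chunk deltas. With the OpenAI SDK, the stream is typically created with `client.chat.completions.create(..., stream=True)` and each chunk exposes `chunk.choices[0].delta.content` (possibly `None`). The sketch below models chunks with `SimpleNamespace` so the accumulation logic runs without an API key; the exact shape of the project's utility is an assumption.

```python
from types import SimpleNamespace

def delta_text(chunk) -> str:
    # Each streamed chunk carries an optional content delta; empty/None
    # deltas (e.g. the final chunk) contribute nothing.
    content = chunk.choices[0].delta.content
    return content or ""

def fake_chunk(text):
    # Mimic the attribute path of an OpenAI streaming chunk.
    return SimpleNamespace(choices=[SimpleNamespace(delta=SimpleNamespace(content=text))])

chunks = [fake_chunk("Po"), fake_chunk("cket"), fake_chunk(None), fake_chunk("Flow")]
reply = "".join(delta_text(c) for c in chunks)
print(reply)  # → PocketFlow
```

In the real utility, the same `delta_text` logic would sit inside an `async for chunk in stream` loop, forwarding each delta over the WebSocket.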