{ "cells": [ { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "! pip install pocketflow\n", "! pip install faiss-cpu\n", "! pip install openai" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "
\n", "Cookbook: Pocket Flow + Cursor AI\n", "
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "1. Utility Function\n", "
\n", "\n", "\n", " Utility Functions are the helper functions like calling an LLM, generating embeddings, or using external APIs. Pocket Flow is deliberately kept minimal and does NOT provide any of these. \n", "
\n", "\n", "\n", "But don’t worry: you can simply ask Cursor AI to create them for you. \n", "
\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "> Help me implement (1) `call_llm` function that takes a prompt and returns the response from the OpenAI gpt-4o model. (2) `get_embedding` function that takes a text and returns the embedding from the OpenAI text-embedding-ada-002 model. " ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [], "source": [ "from openai import OpenAI\n", "import os\n", "\n", "def call_llm(prompt):\n", " client = OpenAI(api_key=API_KEY)\n", " response = client.chat.completions.create(\n", " model=\"gpt-4o\",\n", " messages=[{\"role\": \"user\", \"content\": prompt}]\n", " )\n", " return response.choices[0].message.content\n", "\n", "def get_embedding(text):\n", " client = OpenAI(api_key=API_KEY)\n", " response = client.embeddings.create(\n", " model=\"text-embedding-ada-002\",\n", " input=text\n", " )\n", " return response.data[0].embedding\n", "\n", "# Example usage:\n", "response = call_llm(\"What's the meaning of life?\")\n", "print(response)\n", "embedding = get_embedding(\"What's the meaning of life?\")\n", "print(embedding)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "2. Node\n", "
\n", "\n", " \n", "\n",
" A Node is your smallest unit of work with 3 steps \n",
" prep->exec->post:\n",
"
\n",
" prep(shared)\n",
"
\n", " - Reads and preprocess data from the shared store.\n", "
\n", "\n", " - E.g., load a file, query a database, or turn data into a string.\n", "
\n", "\n",
" exec(prep_res)\n",
"
\n", " - Executes the core logic\n", "
\n", "\n", " - E.g., call an LLM, invoke remote APIs, or embed texts.\n", "
\n", "\n",
" post(shared, prep_res, exec_res)\n",
"
\n", " - Writes data back to the shared store.\n", "
\n", "\n", " 3. Batch\n", "
\n", "\n", " \n", "\n",
" Batch helps repeat the same work multiple items. \n",
" Instead of calling exec() once, a Batch Node calls \n",
" exec() \n",
" for each item in a list from prep(). \n",
"
\n", " Think of it as \"item-by-item\" processing:\n", "
\n", "\n", " \n", "prep(shared): Return a list of items.\n",
" exec(item): Called once per item.\n",
" post(shared, item_list, results_list): Combines all results.\n",
" \n", " 4. Flow\n", "
\n", "\n", " \n", "\n", " Flow connects your Nodes to a graph.\n", "
\n", "\n", " \n", "node_1 >> node_2): Break down complex problems into simple chained steps.\n",
" node_1 - \"action\" ->> node_2): \n",
" Agentic decisions—where a Node’s \n",
" post() return the action string.\n",
" Flow(start=node_a). \n",
" Then call \n",
" flow.run(shared).\n",
" \n", " That’s it! You can nest Flows, branch your actions, or keep it simple with a straight chain of Nodes.\n", "
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "