From 0363eb55be9c746b21c8832fee9ad1f861992f7c Mon Sep 17 00:00:00 2001
From: zvictor
Date: Tue, 25 Mar 2025 09:34:41 +0100
Subject: [PATCH] fix assets location in .cursorrules

---
 .cursorrules | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/.cursorrules b/.cursorrules
index 8f05631..ec33fa0 100644
--- a/.cursorrules
+++ b/.cursorrules
@@ -56,7 +56,7 @@ Agentic Coding should be a collaboration between Human System Design and Agent I
 3. **Utilities**: Based on the Flow Design, identify and implement necessary utility functions.
    - Think of your AI system as the brain. It needs a body—these *external utility functions*—to interact with the real world:
-
+
   - Reading inputs (e.g., retrieving Slack messages, reading emails)
   - Writing outputs (e.g., generating reports, sending emails)
@@ -127,7 +127,7 @@ Agentic Coding should be a collaboration between Human System Design and Agent I
   - > **You'll likely iterate a lot!** Expect to repeat Steps 3–6 hundreds of times.
     >
-    >
+ >
     {: .best-practice }

8. **Reliability**
@@ -244,7 +244,7 @@ A [100-line](https://github.com/the-pocket/PocketFlow/blob/main/pocketflow/__ini
 - **Agentic-Coding**: Intuitive enough for AI agents to help humans build complex LLM applications.
-
+
 ## Core Abstraction
@@ -259,7 +259,7 @@ We model the LLM workflow as a **Graph + Shared Store**:
 - [(Advanced) Parallel](./core_abstraction/parallel.md) nodes/flows handle I/O-bound tasks.
-
+
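The "Graph + Shared Store" abstraction referenced in the hunk above can be illustrated with a small, self-contained sketch. This is plain Python for illustration only; the class and function names are assumptions, not PocketFlow's actual API:

```python
# Minimal sketch of a "Graph + Shared Store" workflow: nodes read and
# write one shared dict, and labeled edges decide which node runs next.

class Node:
    def run(self, shared):
        raise NotImplementedError

class LoadText(Node):
    def run(self, shared):
        shared["text"] = "hello world"
        return "default"  # action label selecting the outgoing edge

class CountWords(Node):
    def run(self, shared):
        shared["count"] = len(shared["text"].split())
        return "default"

def run_flow(start, edges, shared):
    node = start
    while node is not None:
        action = node.run(shared)
        node = edges.get((node, action))  # follow the labeled edge, or stop

shared = {}
load, count = LoadText(), CountWords()
run_flow(load, {(load, "default"): count}, shared)
print(shared["count"])  # prints 2
```

The shared dict is the only channel between nodes, which is what makes the graph easy to rewire without changing node internals.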
 ## Design Pattern
@@ -274,7 +274,7 @@ From there, it’s easy to implement popular design patterns:
 - [(Advanced) Multi-Agents](./design_pattern/multi_agent.md) coordinate multiple agents.
-
+
 ## Utility Function
@@ -794,7 +794,7 @@ nav_order: 1
 A **Node** is the smallest building block. Each Node has 3 steps `prep->exec->post`:
-
+
 1. `prep(shared)`
@@ -964,7 +964,7 @@ nav_order: 1
 Agent is a powerful design pattern in which nodes can take dynamic actions based on the context.
-
+
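The `prep->exec->post` node lifecycle described in the hunks above can be sketched as follows. The method names mirror the doc's description, and the "LLM call" is stubbed out; treat the rest as an illustrative assumption:

```python
# Illustrative sketch of the 3-step node lifecycle: prep -> exec -> post.

class SummarizeNode:
    def prep(self, shared):
        # prep(shared): read what this node needs from the shared store.
        return shared["document"]

    def exec(self, prep_res):
        # exec(prep_res): do the work (a real node would call an LLM here;
        # this stub just keeps the first sentence).
        return prep_res.split(".")[0] + "."

    def post(self, shared, prep_res, exec_res):
        # post(...): write results back and return an action string
        # that the surrounding flow uses to pick the next node.
        shared["summary"] = exec_res
        return "default"

shared = {"document": "PocketFlow is tiny. It models workflows as graphs."}
node = SummarizeNode()
prep_res = node.prep(shared)
exec_res = node.exec(prep_res)
action = node.post(shared, prep_res, exec_res)
```

Splitting read (prep), compute (exec), and write (post) keeps side effects at the edges, so exec can be retried safely.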
 ## Implement Agent with Graph
@@ -1122,7 +1122,7 @@ MapReduce is a design pattern suitable when you have either:
 and there is a logical way to break the task into smaller, ideally independent parts.
-
+
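The agent pattern introduced above, where a node takes dynamic actions based on context, can be sketched as a loop that branches on the action returned by a decide step. This is a toy illustration: the rule-based decision stands in for an LLM call, and all names are assumptions:

```python
# Sketch of the agent pattern: decide -> act -> observe, repeated until done.

def decide(shared):
    # A real agent would ask an LLM to choose an action; here a simple
    # rule stands in: answer once we have context, otherwise search.
    return "answer" if shared.get("context") else "search"

def search(shared):
    shared["context"] = f"results for {shared['question']!r}"

def answer(shared):
    shared["answer"] = f"Based on {shared['context']}, ..."

shared = {"question": "What is PocketFlow?"}
while "answer" not in shared:
    action = decide(shared)
    {"search": search, "answer": answer}[action](shared)
```

The branching is just graph edges keyed by action strings, which is why the agent fits naturally on the same graph abstraction as the other patterns.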
 You first break down the task using [BatchNode](../core_abstraction/batch.md) in the map phase, followed by aggregation in the reduce phase.
@@ -1192,7 +1192,7 @@ nav_order: 3
 For certain LLM tasks like answering questions, providing relevant context is essential. One common architecture is a **two-stage** RAG pipeline:
-
+
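The map phase / reduce phase split described above amounts to running the same function over independent chunks, then aggregating. A minimal sketch, with word counting standing in for per-chunk LLM calls (names are illustrative, not the BatchNode API):

```python
# Sketch of MapReduce: map over independent chunks, then reduce the
# per-chunk results into a single answer.

def map_chunk(chunk):
    # Per-chunk work; a real flow would call an LLM on each chunk here.
    return len(chunk.split())

def reduce_results(results):
    # Aggregate the independent per-chunk results.
    return sum(results)

chunks = ["first part of a big doc", "second part", "third"]
word_count = reduce_results([map_chunk(c) for c in chunks])
print(word_count)  # prints 9
```

Because the map calls are independent, they are also the natural place to apply the batch and parallel nodes mentioned earlier in the file.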
 1. **Offline stage**: Preprocess and index documents ("building the index").
@@ -1475,7 +1475,7 @@ nav_order: 2
 Many real-world tasks are too complex for one LLM call. The solution is **Task Decomposition**: decompose them into a [chain](../core_abstraction/flow.md) of multiple Nodes.
-
+
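The two-stage RAG pipeline introduced above (offline indexing, then query-time retrieval) can be sketched end to end. Here a toy bag-of-words overlap stands in for real embeddings and the index is a plain list; every name is an illustrative assumption:

```python
# Offline stage: preprocess and index documents ("building the index").
def embed(text):
    # Toy "embedding": a bag of lowercase words. A real pipeline would
    # call an embedding model here.
    return set(text.lower().replace(".", " ").replace("?", " ").split())

docs = [
    "PocketFlow models workflows as graphs.",
    "A shared store passes data between nodes.",
]
index = [(embed(d), d) for d in docs]

# Online stage: embed the query, retrieve the closest chunk, then answer.
def retrieve(query):
    q = embed(query)
    return max(index, key=lambda entry: len(entry[0] & q))[1]

context = retrieve("How is data passed between nodes?")
answer = f"Answer based on: {context}"
```

The point of the split is that the expensive indexing runs once offline, while the online stage only embeds the query and looks up neighbors.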
> - You don't want to make each task **too coarse**, because it may be *too complex for one LLM call*.
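The chain idea above, including the warning against making each task too coarse, can be sketched as a fixed sequence of small steps, each narrow enough for a single stubbed LLM call (all names are illustrative):

```python
# Sketch of task decomposition: one complex job as a chain of small
# steps, each simple enough for a single (stubbed) LLM call.

def outline(shared):
    shared["outline"] = ["intro", "body", "conclusion"]

def draft(shared):
    shared["draft"] = " ".join(f"<{s} text>" for s in shared["outline"])

def polish(shared):
    shared["article"] = shared["draft"].upper()

shared = {"topic": "agentic coding"}
for step in (outline, draft, polish):  # the chain of nodes
    step(shared)
```

Each step reads and writes the shared store, so the chain is just the linear special case of the graph abstraction above.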