How NetPad Works: From Concept to Execution

Overview

NetPad is a visual workflow builder that transforms the way developers create agentic AI systems. Instead of writing complex orchestration code, users drag and drop nodes onto a canvas, configure their properties, and execute workflows visually. This document explains in detail how NetPad works, from creating your first workflow to understanding the execution engine.

Core Architecture

NetPad operates on three fundamental layers:

  1. Visual Layer: SVG-based canvas for interactive workflow design
  2. Data Layer: JSON DSL (Domain Specific Language) for workflow persistence
  3. Execution Layer: Node-based runtime engine for workflow execution

The Workflow Creation Process

1. Canvas System: The Visual Foundation

The workflow creation process begins with NetPad’s custom SVG-based canvas system, implemented in src/app/components/Canvas.js.

Shape Creation

When you drag a node from the palette onto the canvas, the getNewShape() function creates a new shape instance:

// src/app/components/Canvas.js - Lines 52-89
function getNewShape(activeShape, x, y, iconName, subtype, customProps) {
  const id = Date.now().toString() + Math.random().toString(36).slice(2, 7);
  let type = typeof activeShape === 'string' ? activeShape : activeShape?.type;
 
  // Registry-driven creation for all other types
  const reg = shapeRegistry[type];
  if (reg && reg.create) {
    const base = reg.create({ id, x, y });
    return { ...base, ...customProps };
  }
  return null;
}

This function:

  • Generates a unique ID for each node
  • Determines the node type from the active shape
  • Uses the shape registry to create the appropriate node instance
  • Applies any custom properties
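
For example, dropping an Agent Node at canvas coordinates (200, 150) might result in a call along these lines (the trailing arguments are illustrative; the real drop handler may also pass icon and subtype information):

// Hypothetical call site; parameter values are illustrative only
const newShape = getNewShape('agent_node', 200, 150, undefined, undefined, { label: 'Planner' });
// newShape now contains the registry defaults for agent_node plus the custom label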

Shape Registry System

All node types are defined in the shape registry at src/app/shapes/shapeRegistryData.js. Each node type includes:

// Example: Agent Node definition
agent_node: {
  type: 'agent_node',
  label: 'Agent Node',
  description: 'An autonomous reasoning or orchestration node',
  propertyDefs: [
    { name: 'prompt', type: 'string', section: 'Agent', multiline: true },
    { name: 'systemPrompt', type: 'string', section: 'Agent', multiline: true },
    { name: 'temperature', type: 'number', section: 'LLM', min: 0, max: 2 }
  ],
  ports: {
    input: [{ id: 'context_in', name: 'Context In' }],
    output: [{ id: 'tool_out', name: 'Tool Out' }]
  }
}

Canvas Interactions

The canvas supports multiple interaction modes through specialized hooks:

  • usePanZoom: Handles canvas panning and zooming
  • useShapeInteractions: Manages shape selection, dragging, and editing
  • useConnectionInteractions: Controls connecting nodes with edges
  • useResizeInteractions: Enables shape resizing
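
As a rough sketch, these hooks are composed inside the Canvas component and wired to the SVG element's event handlers. The signatures below are illustrative assumptions, not the actual hook APIs:

// Illustrative composition only; the real hooks in Canvas.js may accept and
// return different values.
function CanvasSketch({ shapes, edges, onShapesChange }) {
  const { viewBox, onWheel, onPanStart } = usePanZoom();
  const { onShapePointerDown } = useShapeInteractions(shapes, onShapesChange);
  const { onPortPointerDown } = useConnectionInteractions(edges);
  const { onResizeHandleDown } = useResizeInteractions(shapes, onShapesChange);

  return (
    <svg viewBox={viewBox} onWheel={onWheel} onPointerDown={onPanStart}>
      {/* Shape, port, and resize-handle elements are rendered here and wired
          to onShapePointerDown, onPortPointerDown, and onResizeHandleDown. */}
    </svg>
  );
}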

2. Node Configuration: Properties and Ports

Properties System

Each node type has configurable properties defined in its registry entry. The properties are edited through the PropertiesTab component, which dynamically generates form controls based on the property definitions:

// Property types supported:
- string: Text input (with optional multiline)
- number: Numeric input with min/max validation
- boolean: Checkbox input
- select: Dropdown with predefined options
- multiselect: Multiple selection dropdown
- object: Complex object editor
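
To make this concrete, a form generator driven by these definitions could look roughly like the sketch below. The renderControl helper is hypothetical; the real logic lives in the PropertiesTab component:

// Hypothetical sketch of mapping a propertyDef to a form control.
// The actual PropertiesTab component handles more types and styling.
function renderControl(def, value, onChange) {
  switch (def.type) {
    case 'string':
      return def.multiline
        ? <textarea value={value} onChange={(e) => onChange(def.name, e.target.value)} />
        : <input type="text" value={value} onChange={(e) => onChange(def.name, e.target.value)} />;
    case 'number':
      return (
        <input
          type="number"
          min={def.min}
          max={def.max}
          value={value}
          onChange={(e) => onChange(def.name, Number(e.target.value))}
        />
      );
    case 'boolean':
      return <input type="checkbox" checked={!!value} onChange={(e) => onChange(def.name, e.target.checked)} />;
    case 'select':
      return (
        <select value={value} onChange={(e) => onChange(def.name, e.target.value)}>
          {(def.options || []).map((opt) => <option key={opt} value={opt}>{opt}</option>)}
        </select>
      );
    default:
      return null; // multiselect and object editors omitted for brevity
  }
}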

Port System

Nodes communicate through a port-based I/O system:

  • Input Ports: Receive data from other nodes
  • Output Ports: Send data to connected nodes
  • Port Types: Determine what kind of data flows through connections

Example port configuration:

ports: {
  input: [
    { id: 'context_in', name: 'Context In' },
    { id: 'trigger_in', name: 'Trigger In' }
  ],
  output: [
    { id: 'tool_out', name: 'Tool Out' },
    { id: 'memory_out', name: 'Memory Out' },
    { id: 'branch_success', name: 'Success' },
    { id: 'branch_failure', name: 'Failure' }
  ]
}

3. JSON DSL: The Universal Language

NetPad stores all workflows in a canonical JSON format that serves as the single source of truth:

{
  "nodes": [
    {
      "id": "agent1",
      "type": "agent_node",
      "x": 100,
      "y": 100,
      "params": {
        "prompt": "Analyze the provided data and suggest improvements",
        "systemPrompt": "You are a helpful data analyst",
        "temperature": 0.7,
        "model": "gpt-4"
      }
    }
  ],
  "edges": [
    {
      "from": "input1",
      "to": "agent1",
      "fromPort": "output",
      "toPort": "context_in"
    }
  ],
  "meta": {
    "created_by": "user123",
    "created_at": "2024-05-17T12:00:00Z",
    "version": "1.0"
  }
}

This JSON DSL enables:

  • Version control: Workflows can be tracked like code
  • Export/Import: Sharing workflows between environments
  • Programmatic generation: AI can create workflows
  • Validation: Schema validation ensures consistency
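
For example, a minimal structural check over this DSL could be sketched as below; NetPad's actual schema validation may be stricter and live elsewhere:

// Minimal sketch of a structural check; checkWorkflow is a hypothetical helper.
function checkWorkflow(workflow) {
  const errors = [];
  const nodeIds = new Set((workflow.nodes || []).map((n) => n.id));

  for (const node of workflow.nodes || []) {
    if (!node.id || !node.type) {
      errors.push(`Node is missing an id or type: ${JSON.stringify(node)}`);
    }
  }
  for (const edge of workflow.edges || []) {
    if (!nodeIds.has(edge.from)) errors.push(`Edge references unknown source node: ${edge.from}`);
    if (!nodeIds.has(edge.to)) errors.push(`Edge references unknown target node: ${edge.to}`);
  }
  return errors;
}

// Usage: const errors = checkWorkflow(JSON.parse(workflowJson));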

The Execution Engine

1. Execution Engine Overview

The workflow execution engine is implemented in src/app/engine/executionEngine.js. It orchestrates the execution of individual nodes and manages data flow between them.

Core Execution Function

The main runNode() function handles individual node execution:

// src/app/engine/executionEngine.js - Lines 150+
export async function runNode(
  nodeId,
  nodeMap,
  edgeMap,
  portData,
  initialContext,
  // ... other parameters
) {
  const node = nodeMap[nodeId];
  
  // Memory injection for agent nodes
  let injectedContext = initialContext;
  if (node.type === 'agent_node') {
    // Handle memory injection and context preparation
    injectedContext = await prepareAgentContext(node, initialContext);
  }
 
  // Execute the node using its registered runner
  const runner = getNodeRunner(node.type);
  if (runner) {
    await runner(node, {
      getInput: (portId) => getInputFromPort(nodeId, portId, portData),
      setOutput: (portId, value) => setOutputToPort(nodeId, portId, value, portData),
      trigger: (portId) => triggerConnectedNodes(nodeId, portId, edgeMap),
      context: injectedContext,
      log: (message) => addToExecutionLog(message)
    });
  }
}

Node Runner Registry

Each node type has a corresponding runner function:

// src/app/engine/executionEngine.js - Lines 43-73
const defaultNodeRunners = {
  agent_node: runAgentNode,
  tool_node: runToolNode,
  code_node: runCodeNode,
  chat_node: runChatNode,
  mongodb_node: runMongoDBNode,
  api_node: runApiNode,
  // ... other node types
};

2. Agent Node Execution

Agent nodes are the most sophisticated components in NetPad. The agent node runner (src/app/runners/agentNodeRunner.js) implements autonomous reasoning capabilities.

Agent Decision Making Process

  1. Context Preparation: The agent receives input context and memory
  2. Tool Discovery: Available tools are loaded and analyzed
  3. LLM Planning: The agent uses an LLM to decide which tool to use
  4. Tool Execution: The selected tool is executed with parameters
  5. Output Generation: Results are formatted and sent to output ports

The prompt for the planning step is assembled by buildAgentLLMPrompt():

// src/app/runners/agentNodeRunner.js - Lines 23-45
function buildAgentLLMPrompt({ systemPrompt, userPrompt, context, memory, tools }) {
  const toolsList = tools.map((tool, i) =>
    `${i + 1}. ${tool.name}: ${tool.description}\n   Parameters: ${JSON.stringify(tool.parameters)}`
  ).join('\n');
 
  const userMessage = `Here is the user's request:\n---\n${userPrompt}\n---\n
Here is your current memory:\n${JSON.stringify(memory, null, 2)}\n
Here are the available tools:\n${toolsList}\n
Based on the above, select the best tool and provide the parameters.`;
 
  return [
    systemPrompt ? { role: 'system', content: systemPrompt } : null,
    { role: 'user', content: userMessage }
  ].filter(Boolean);
}

Tool Selection and Execution

The agent analyzes available tools and selects the most appropriate one:

// Agent decides which tool to use based on:
// 1. Tool descriptions and capabilities
// 2. Current context and memory
// 3. User's request or workflow requirements
// 4. Previous execution history
 
const llmResponse = await commandProcessor({
  type: 'llm_prompt',
  input: {
    messages: agentPrompt,
    temperature: node.params?.temperature || 0.7,
    model: node.params?.model || context?.model
  }
});
 
const toolSelection = extractJsonFromResponse(llmResponse.output.response);

3. Data Flow Management

Port Data System

NetPad uses a structured port data system to manage data flow between nodes:

// Data flows through ports using this structure:
const portData = {
  'nodeId:portId': {
    value: actualData,
    timestamp: Date.now(),
    metadata: { type: 'string', source: 'previousNode' }
  }
};
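
Based on that keying convention, the getInput/setOutput helpers referenced by runNode() can be pictured roughly as follows (the real implementations in executionEngine.js may carry additional metadata):

// Rough sketch only; the real helpers live in executionEngine.js.
function setOutputToPort(nodeId, portId, value, portData) {
  portData[`${nodeId}:${portId}`] = {
    value,
    timestamp: Date.now(),
    metadata: { source: nodeId }
  };
}

function getInputFromPort(nodeId, portId, portData) {
  const entry = portData[`${nodeId}:${portId}`];
  return entry ? entry.value : undefined;
}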

Context Propagation

Context information flows through the entire workflow:

  • Global Context: Workflow-wide settings (LLM keys, database connections)
  • Node Context: Node-specific data and state
  • Memory Context: Persistent memory for agent nodes
  • Variable Substitution: {{variable}} syntax for dynamic values
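
A simple way to picture variable substitution is a template resolver like the one sketched below; substituteVariables is a hypothetical helper name, and NetPad's actual resolver may support richer paths and escaping:

// Hypothetical {{variable}} resolver against a context object.
function substituteVariables(template, context) {
  return template.replace(/\{\{\s*([\w.]+)\s*\}\}/g, (match, path) => {
    const value = path
      .split('.')
      .reduce((obj, key) => (obj == null ? undefined : obj[key]), context);
    return value === undefined ? match : String(value);
  });
}

// substituteVariables('Hello {{context.userName}}', { context: { userName: 'John' } })
// => 'Hello John'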

4. Memory System

NetPad implements a memory system for stateful workflows:

Memory Types

  • In-Memory: Fast, session-scoped storage
  • MongoDB: Persistent, thread-scoped storage
  • Checkpoints: State snapshots at critical points

Memory Integration

Agent nodes automatically integrate with memory nodes:

// src/app/engine/executionEngine.js - Memory injection logic
if (node.type === 'agent_node') {
  let memoryNode = nodeMap[node.memoryNodeId] || 
                   Object.values(nodeMap).find(n => n.type === 'memory_node');
  
  if (memoryNode) {
    const command = {
      type: 'memory_op',
      input: {
        memoryType: memoryNode.params?.memoryType || 'in-memory',
        threadId: memoryNode.params?.threadId || 'default',
        // ... other memory parameters
      }
    };
    const result = await commandProcessor(command);
    injectedContext = { ...initialContext, memoryIn: result.output.memory };
  }
}

5. Command Processor System

The mCP (Model-Component-Property) system provides a unified interface for all external operations through the command processor (src/utils/commandProcessor.js).

Supported Command Types

  • llm_prompt: LLM interactions across providers
  • mongodb_query: Database operations
  • api_request: HTTP API calls
  • memory_op: Memory operations
  • tool_execution: Custom tool execution
  • code_analysis: Code analysis and review

Command Structure

const command = {
  type: 'llm_prompt',
  input: {
    prompt: 'Analyze this data',
    model: 'gpt-4',
    temperature: 0.7,
    maxTokens: 1000
  },
  context: {
    workflow_id: 'workflow123',
    user_id: 'user456',
    variables: { userName: 'John' }
  },
  metadata: {
    node_id: 'node789',
    timestamp: new Date().toISOString()
  }
};
 
const result = await commandProcessor(command);

Practical Example: Building a Research Workflow

Let’s walk through creating a practical research workflow to understand how all these components work together.

Step 1: Create the Workflow Canvas

  1. Open NetPad and create a new diagram
  2. Drag nodes from the palette:
    • Start Node (terminator_node with subtype: 'start')
    • Prompt Node (for user input)
    • Agent Node (for research planning)
    • Web Scraper Node (for data collection)
    • Chat Node (for analysis)
    • End Node (terminator_node with subtype: 'end')

Step 2: Configure Node Properties

Prompt Node Configuration:

{
  "label": "Research Query",
  "prompt": "What topic would you like me to research?",
  "description": "Collect user's research topic"
}

Agent Node Configuration:

{
  "label": "Research Planner",
  "systemPrompt": "You are a research assistant. Plan the research strategy.",
  "prompt": "Based on the user's query: {{userInput}}, create a research plan.",
  "temperature": 0.7,
  "planningMode": true
}

Web Scraper Node Configuration:

{
  "label": "Data Collector",
  "url": "https://en.wikipedia.org/wiki/{{searchTerm}}",
  "autoSelector": true,
  "summarize": true
}

Step 3: Connect the Nodes

Connect the nodes using the canvas connection tool:

  • Start → Prompt (triggers the user input)
  • Prompt → Agent (passes user input to planner)
  • Agent → Web Scraper (provides search terms)
  • Web Scraper → Chat (passes collected data)
  • Chat → End (completes the workflow)
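
In the JSON DSL, these connections correspond to an edges array along the lines of the snippet below (node and port ids are illustrative; the actual ids are assigned when you drop the nodes):

{
  "edges": [
    { "from": "start1",   "to": "prompt1",  "fromPort": "trigger_out", "toPort": "trigger_in" },
    { "from": "prompt1",  "to": "agent1",   "fromPort": "output",      "toPort": "context_in" },
    { "from": "agent1",   "to": "scraper1", "fromPort": "tool_out",    "toPort": "input" },
    { "from": "scraper1", "to": "chat1",    "fromPort": "output",      "toPort": "context_in" },
    { "from": "chat1",    "to": "end1",     "fromPort": "output",      "toPort": "trigger_in" }
  ]
}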

Step 4: Execute the Workflow

When you click “Run Workflow”:

  1. Execution Engine identifies the start node
  2. Prompt Node pauses execution, waits for user input
  3. Agent Node receives input, plans research strategy
  4. Web Scraper collects data from identified sources
  5. Chat Node analyzes and summarizes findings
  6. Results are displayed in the execution log

Step 5: Monitor Execution

NetPad provides real-time execution monitoring:

  • Visual indicators show which nodes are active
  • Execution log displays detailed progress
  • Port data shows data flowing between nodes
  • Error handling captures and displays any issues

Advanced Features

1. Context Variables

Use {{variable}} syntax throughout your workflow:

  • {{userInput}} - Data from prompt nodes
  • {{context.userName}} - Global context variables
  • {{memory.lastResult}} - Memory-stored values

2. Conditional Logic

Use condition nodes for branching:

// If Condition Node
{
  "condition": "{{scraped_data.length}} > 100",
  // Routes to 'true' or 'false' output ports
}

3. Loop Processing

Process arrays of data with loop nodes:

// Loop Node
{
  "iterations": "{{searchTerms.length}}",
  // Executes connected nodes for each iteration
}

4. Memory Persistence

Store workflow state across executions:

// Memory Node
{
  "memoryType": "mongodb",
  "threadId": "research_session_{{userId}}",
  "persistence": true
}

Debugging and Troubleshooting

Execution Logs

Every node execution is logged with:

  • Input data received
  • Processing steps taken
  • Output data generated
  • Any errors encountered

Visual Debugging

  • Node highlighting shows execution progress
  • Connection animation displays data flow
  • Port inspection reveals data at connection points
  • Step-by-step mode allows pausing between nodes

Common Issues and Solutions

  1. Node not executing: Check input connections and data availability
  2. Memory issues: Verify memory node configuration and thread IDs
  3. LLM errors: Check API keys and model availability
  4. Data flow problems: Inspect port data and variable substitution

Extending NetPad

Adding New Node Types

  1. Define the node in shapeRegistryData.js
  2. Create a renderer in src/app/components/shapes/
  3. Implement the runner in src/app/runners/
  4. Register in execution engine
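
As a rough illustration of those four steps, a hypothetical sentiment_node might be wired up as follows; treat the details as assumptions and follow the existing entries in shapeRegistryData.js and the execution engine as the authoritative pattern:

// 1. Registry entry in shapeRegistryData.js (hypothetical node type)
sentiment_node: {
  type: 'sentiment_node',
  label: 'Sentiment Node',
  description: 'Scores the sentiment of incoming text',
  propertyDefs: [
    { name: 'threshold', type: 'number', section: 'Sentiment', min: 0, max: 1 }
  ],
  ports: {
    input: [{ id: 'text_in', name: 'Text In' }],
    output: [{ id: 'score_out', name: 'Score Out' }]
  }
}

// 2. A renderer in src/app/components/shapes/ is omitted here.
// 3. Runner in src/app/runners/sentimentNodeRunner.js, then
// 4. registered in defaultNodeRunners as sentiment_node: runSentimentNode
async function runSentimentNode(node, { getInput, setOutput, log }) {
  const text = getInput('text_in') || '';
  const score = text.includes('great') ? 1 : 0; // placeholder scoring logic
  log(`Sentiment score: ${score}`);
  setOutput('score_out', score);
}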

Custom Tools

Create custom tools for agent nodes:

// Store in MongoDB Tool collection
{
  name: "custom_calculator",
  description: "Performs complex calculations",
  parameters: {
    expression: { type: "string", required: true },
    precision: { type: "number", default: 2 }
  },
  code: "function(input) { return eval(input.expression).toFixed(input.precision); }"
}

Conclusion

NetPad transforms the complexity of agentic AI development into a visual, collaborative experience. By understanding the canvas system, execution engine, and data flow patterns, you can build sophisticated AI workflows that are both powerful and maintainable.

The combination of visual design, JSON DSL persistence, and a robust execution engine makes NetPad suitable for everything from rapid prototyping to production deployment of agentic AI systems.

Key takeaways:

  • Visual design makes complex workflows comprehensible
  • JSON DSL ensures portability and version control
  • Modular architecture supports extensibility
  • Robust execution handles production workloads
  • Memory system enables stateful agents
  • Command processor standardizes external integrations

Whether you’re building customer service agents, research assistants, or complex multi-agent systems, NetPad provides the foundation for visual agentic AI development.