Core Concepts

Understanding the fundamental concepts behind Fruxon

Agents

Configurable AI workflows that execute multi-step logic, call LLMs, use tools, and compose with other agents.

What Makes an Agent

An agent is more than just an LLM prompt. It's a complete workflow that can:

  • Accept structured inputs with validation
  • Execute multiple steps in sequence or parallel
  • Call external APIs and services
  • Make decisions based on intermediate results
  • Return structured outputs
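
The capabilities above can be sketched as a minimal agent loop. Everything here is illustrative — the function names and input schema are invented for the example, not Fruxon's actual API:

```python
# Minimal sketch of an agent as a multi-step workflow.
# All names are illustrative; Fruxon's real API may differ.

def validate_inputs(inputs, schema):
    """Accept structured inputs with validation."""
    for name, expected_type in schema.items():
        if name not in inputs:
            raise ValueError(f"missing input: {name}")
        if not isinstance(inputs[name], expected_type):
            raise TypeError(f"{name} must be {expected_type.__name__}")
    return inputs

def run_agent(inputs):
    """Execute steps in sequence, branching on intermediate results."""
    validate_inputs(inputs, {"question": str, "verbose": bool})

    # Step 1: classify the question (stand-in for an LLM call).
    intent = "lookup" if "?" in inputs["question"] else "chat"

    # Step 2: make a decision based on the intermediate result.
    if intent == "lookup":
        answer = f"Looked up: {inputs['question']}"
    else:
        answer = f"Chatting about: {inputs['question']}"

    # Step 3: return a structured output.
    return {"intent": intent, "answer": answer}

result = run_agent({"question": "What is Fruxon?", "verbose": False})
```

The point is the shape, not the logic: validated input in, ordered steps with a decision point, structured output out.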

Agent Capabilities

Agents can be simple single-prompt assistants or complex multi-step orchestrators that coordinate sub-agents, external tools, and conditional logic.

Workflows

Visual, node-based representation of agent logic.

Node Types

  • Entry Point - Define input parameters and their types
  • Agent Steps - Call AI models with templated prompts, optionally exposing integrations as tools
  • Exit Point - Define and format outputs
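
A workflow built from these three node types might look like the following. The structure is a sketch to show how the pieces relate, not Fruxon's export format:

```python
# Illustrative workflow with an entry point, one agent step, and an exit point.
# Field names are invented for the example; only the placeholder syntax
# ({{input.*}}, {{steps.*}}) follows the docs.
workflow = {
    "entry": {"params": {"topic": "String"}},
    "steps": [
        {
            "id": "summarize",
            "prompt": "Summarize recent news about {{input.topic}}",
            "tools": ["news_api"],  # optional integration used as a tool
        }
    ],
    "exit": {"output": "{{steps.summarize.output}}"},
}
```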

Flow Control

Nodes are connected automatically based on the dependency tree — when you reference a parameter or step output in a prompt, the connection is created for you.
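
One way to picture this automatic wiring: scan each prompt for placeholder references and treat every reference as an implied edge. This is a sketch of the idea, not how Fruxon implements it:

```python
import re

# Sketch: derive node connections from placeholder references.
# If a step's prompt mentions {{input.x}} or {{steps.y.output}},
# an edge is implied from that source to the step.
PLACEHOLDER = re.compile(r"\{\{(input|steps)\.(\w+)")

def infer_edges(steps):
    edges = []
    for step in steps:
        for kind, name in PLACEHOLDER.findall(step["prompt"]):
            source = name if kind == "steps" else f"input.{name}"
            edges.append((source, step["id"]))
    return edges

steps = [
    {"id": "draft", "prompt": "Write about {{input.topic}}"},
    {"id": "review", "prompt": "Critique this: {{steps.draft.output}}"},
]
edges = infer_edges(steps)
# edges: [("input.topic", "draft"), ("draft", "review")]
```

Because "review" references the output of "draft", it depends on "draft" and runs after it.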

Parameters

Inputs to your agents that make them dynamic and reusable.

Supported Types

  • String - Text inputs
  • Number - Numeric values
  • Boolean - True/false flags
  • Object - Structured JSON data
  • Array - Lists of values
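
A simple way to think about these types is as a mapping to native checks. The type names follow the docs; the validator itself is illustrative:

```python
# Sketch: map the documented parameter types to Python type checks.
TYPE_CHECKS = {
    "String": str,
    "Number": (int, float),
    "Boolean": bool,
    "Object": dict,
    "Array": list,
}

def check_param(value, declared_type):
    expected = TYPE_CHECKS[declared_type]
    # bool is a subclass of int in Python, so reject True/False for Number
    if declared_type == "Number" and isinstance(value, bool):
        return False
    return isinstance(value, expected)

check_param("hello", "String")  # True
check_param(True, "Number")     # False
```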

Using Placeholders

Reference parameters anywhere in your workflow using the placeholder syntax: {{input.name}}
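
Conceptually, placeholder resolution is a templated string substitution. A minimal sketch, using the documented syntax (the resolver here is illustrative):

```python
import re

# Sketch: resolve {{input.name}} placeholders against provided inputs.
def render(template, inputs):
    def lookup(match):
        return str(inputs[match.group(1)])
    return re.sub(r"\{\{input\.(\w+)\}\}", lookup, template)

render("Summarize {{input.topic}} in {{input.lang}}",
       {"topic": "agents", "lang": "English"})
# → "Summarize agents in English"
```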

Integrations

External APIs and tools your agents can call.

API Tools

Import from OpenAPI/Swagger specs or configure custom HTTP endpoints. Your agents can call any REST API.
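
To make this concrete, here is a sketch of turning a custom HTTP endpoint configuration into a request. The config shape is invented for the example, loosely modeled on OpenAPI conventions:

```python
from urllib.parse import urlencode

# Sketch: build a REST request from an endpoint config (illustrative shape).
def build_request(endpoint, params):
    query = {k: v for k, v in params.items()
             if k in endpoint.get("query_params", [])}
    url = endpoint["base_url"] + endpoint["path"]
    if query:
        url += "?" + urlencode(query)
    return {"method": endpoint["method"], "url": url}

endpoint = {
    "method": "GET",
    "base_url": "https://api.example.com",
    "path": "/v1/search",
    "query_params": ["q"],
}
req = build_request(endpoint, {"q": "fruxon", "ignored": "x"})
# req["url"] == "https://api.example.com/v1/search?q=fruxon"
```

An agent treats such an endpoint as a tool: the declared parameters tell it what the call accepts, and anything undeclared is dropped.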

System Integrations

Connect to databases, message queues, and third-party services through pre-built connectors.

Connectors

Chat platform bridges that let end-users interact with deployed agents.

How They Work

Connectors route messages between external chat platforms (Slack, Microsoft Teams, Telegram) and your agents. When a user sends a message in their chat app, the connector delivers it to your agent. The agent processes it and responds through the same channel.
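
The round trip above can be sketched as a small router. The platform names come from the docs; the router itself, and the agent and send callables, are illustrative stand-ins:

```python
# Sketch: route inbound chat messages to an agent and reply on
# the same channel they arrived from.
def make_router(agent, send):
    def on_message(platform, channel, text):
        reply = agent(text)
        send(platform, channel, reply)  # respond through the same channel
    return on_message

sent = []
router = make_router(
    agent=lambda text: f"echo: {text}",
    send=lambda platform, channel, reply: sent.append((platform, channel, reply)),
)
router("slack", "#support", "hello")
# sent == [("slack", "#support", "echo: hello")]
```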

Access Control

Each connector has a policy — Allow All for open access, or Onboarding for managed approval where new users must be reviewed before they can interact.
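
The two policies reduce to a simple gate. Policy names follow the docs; the check itself is an illustrative sketch:

```python
# Sketch of the two connector access policies.
def can_interact(policy, user, approved_users):
    if policy == "allow_all":
        return True  # open access
    if policy == "onboarding":
        return user in approved_users  # must be reviewed first
    raise ValueError(f"unknown policy: {policy}")

can_interact("allow_all", "alice", set())      # True
can_interact("onboarding", "bob", {"alice"})   # False until approved
```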

Versions

Every save creates a revision, giving you complete control over your agent's history.

Revisions

Each save captures the complete workflow state. Compare versions side-by-side to see what changed.
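
A side-by-side comparison boils down to a field-by-field diff of two saved states. The revision shape here is invented for the example:

```python
# Sketch: compare two saved workflow states field by field.
def diff_revisions(old, new):
    changes = {}
    for key in set(old) | set(new):
        if old.get(key) != new.get(key):
            changes[key] = (old.get(key), new.get(key))
    return changes

diff_revisions(
    {"prompt": "Summarize {{input.topic}}", "temperature": 0.2},
    {"prompt": "Summarize {{input.topic}} briefly", "temperature": 0.2},
)
# only "prompt" differs
```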

Deployment

Deploy specific revisions to production. Roll back instantly if issues arise. No downtime, no risk.

Execution & Tracing

Full observability for every agent run.

What's Captured

  • Inputs and outputs for each step
  • LLM responses and token usage
  • Execution timing and performance
  • Errors with full context and stack traces
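
The captured fields can be pictured as a per-step trace record. This is a sketch of the idea, not Fruxon's trace schema:

```python
import time

# Sketch: wrap a step to capture inputs, output, timing, and errors.
def traced(step_name, fn, inputs):
    record = {"step": step_name, "inputs": inputs}
    start = time.perf_counter()
    try:
        record["output"] = fn(inputs)
        record["error"] = None
    except Exception as exc:
        record["output"] = None
        record["error"] = repr(exc)  # error captured with context
    record["duration_s"] = time.perf_counter() - start
    return record

rec = traced("summarize", lambda x: x["text"].upper(), {"text": "hi"})
# rec["output"] == "HI", rec["error"] is None
```

Collecting one such record per step is what makes the run replayable for debugging.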

Debugging

Use traces to understand exactly what happened during execution. Identify bottlenecks and optimize performance.

Organizations

Multi-tenancy with isolated teams and resources.

Team Structure

Create organizations for different teams or projects. Each organization has its own agents, integrations, and settings.

Access Control

Role-based permissions control who can view, edit, and deploy agents. Keep production safe while enabling development.
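
A minimal model of such permissions: roles map to sets of allowed actions, and deployment is reserved for the most privileged role. Role and action names here are illustrative:

```python
# Sketch: role-based permission check (names are illustrative).
ROLES = {
    "viewer": {"view"},
    "editor": {"view", "edit"},
    "admin": {"view", "edit", "deploy"},
}

def allowed(role, action):
    return action in ROLES.get(role, set())

allowed("editor", "deploy")  # False: keeps production safe
allowed("admin", "deploy")   # True
```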
