# Getting Started

Build your first AI agent in Fruxon.
Sign up at fruxon.com if you haven't already. Once you're in, here's how Fruxon works.
## How Fruxon Works
Fruxon lets you build AI agents visually. Instead of writing code, you wire together nodes on a canvas — each node does one thing, and data flows between them.
A typical agent looks like this:
Entry Point → Agent Steps → Exit Point
- Entry Point — Defines the inputs your agent accepts (e.g., a user's question, a document to summarize)
- Agent Steps — The AI processing. Each step calls an LLM with a prompt you configure. You can chain multiple steps, and reference outputs from previous steps using placeholders like `{{step.summarize}}`
- Exit Point — Defines what your agent returns
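The chaining mechanic can be sketched in plain Python. This is an illustrative resolver, not Fruxon's implementation — the `{{step.<name>}}` placeholder syntax comes from the doc, while the step names and function below are invented for the example:

```python
import re

def resolve_placeholders(prompt: str, outputs: dict) -> str:
    """Replace each {{step.<name>}} placeholder with that step's output."""
    def substitute(match):
        return outputs[match.group(1)]
    return re.sub(r"\{\{step\.([\w-]+)\}\}", substitute, prompt)

# Hypothetical two-step agent: a "summarize" step runs first,
# then a second step's prompt references its output.
outputs = {"summarize": "Fruxon lets you build agents visually."}
prompt = "Translate into French: {{step.summarize}}"
resolved = resolve_placeholders(prompt, outputs)
# resolved == "Translate into French: Fruxon lets you build agents visually."
```

In the Agent Studio you never write this resolver yourself; the canvas performs the substitution when a step runs, and the sketch only shows what that substitution amounts to.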
You build and test all of this in the Agent Studio, a visual drag-and-drop canvas.
## Connecting to AI Providers
Agents need an LLM to run. Fruxon supports OpenAI, Anthropic, Google, and more — you bring your own API key. AI connections are configured per-agent inside Agent Studio, so different agents can use different providers.
## Extending with Integrations
Agents aren't limited to LLM calls. You can attach Integrations as tools that your agent steps can call — import an OpenAPI spec or configure custom endpoints. This lets agents fetch live data, trigger actions in external systems, or call other Fruxon agents as sub-agents.
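As a concrete example of what an importable spec might contain, here is a minimal OpenAPI fragment for a weather-lookup tool. Every name, path, and URL in it is hypothetical, invented for illustration:

```yaml
openapi: "3.0.3"
info:
  title: Weather Lookup (hypothetical example)
  version: "1.0.0"
servers:
  - url: https://api.example.com   # placeholder server, not a real service
paths:
  /weather:
    get:
      operationId: getWeather      # the name a tool is typically identified by
      summary: Current weather for a city
      parameters:
        - name: city
          in: query
          required: true
          schema:
            type: string
      responses:
        "200":
          description: Current conditions
```

Once a spec like this is imported as an Integration, an agent step could invoke the weather lookup to fetch live data and feed the response into a later step's prompt.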
## The Dashboard
The sidebar is your home base:
| Section | Description |
|---|---|
| Agents | Your AI agents and workflows |
| Solutions | Pre-built agent templates you can clone and customize |
| Integrations | Connect external APIs and tools |
First-time users get an interactive walkthrough that highlights key features. You can restart it anytime from the Help menu.