The implementation of aifusion relies on several key components that work alongside Fabric's Data Products (Logical Units) and their logical functionalities.
In addition to the components described below, certain tools and utilities are stored in the Shared Objects LU for use across multiple LUs.
The aifusion LU is the main component of the aifusion extension. Each LU Instance (LUI) represents a single conversation session — typically a chat session — conducted by a user.
Each session (such as a chat) creates a new aifusion LUI with the session ID serving as the instance identifier (IID).
As the conversation progresses, Fabric stores the user's questions and the agent's responses within the dedicated instance.
This mechanism maintains the ongoing conversation context, enabling the AI agent to correctly interpret follow-up questions and references to earlier exchanges. This capability is also known as Short-Term Memory (STM), or Working Memory, in agentic AI terminology.
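The session-keyed short-term memory described above can be sketched as follows. This is a minimal, hypothetical illustration in plain Python; the class and method names are invented for this sketch and do not reflect Fabric's actual LUI implementation:

```python
from dataclasses import dataclass, field

@dataclass
class ConversationInstance:
    """One instance per conversation session (hypothetical sketch)."""
    session_id: str                       # the session ID serves as the IID
    turns: list = field(default_factory=list)

    def add_turn(self, question: str, answer: str) -> None:
        # Store each question/answer exchange as the conversation progresses.
        self.turns.append({"question": question, "answer": answer})

    def context(self) -> str:
        # Flatten prior turns so follow-up questions can be interpreted
        # against earlier exchanges (short-term / working memory).
        return "\n".join(
            f"User: {t['question']}\nAgent: {t['answer']}" for t in self.turns
        )

class SessionStore:
    """Maps session IDs to their dedicated instances (hypothetical sketch)."""
    def __init__(self):
        self._instances = {}

    def get_or_create(self, session_id: str) -> ConversationInstance:
        # A new session creates a new instance; an existing one is reused.
        return self._instances.setdefault(
            session_id, ConversationInstance(session_id)
        )
```

The key design point is that the session ID is the lookup key: every turn of the same chat lands in the same instance, so the accumulated context travels with the session.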
The LUI tables capture:
These data types not only provide the conversational context but also support debugging, optimization, and compliance requirements.
The aifusion LU provides the actors and flows needed to orchestrate agent processing. Most actor and flow components are located under the aifusion/Broadway/ai path.
LLMs are invoked at various points in the agent workflow to perform different tasks:
LLM interfaces are installed via K2exchange, offering integrations with multiple LLM providers, such as OpenAI, Anthropic, AWS Bedrock, and Google’s Vertex AI.
For more information on provisioning LLM interfaces, read here.
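To illustrate how an agent workflow can invoke an LLM at a given step behind a provider-agnostic interface, here is a hedged sketch. All names (`LLMProvider`, `EchoProvider`, `classify_intent`) are hypothetical and do not reflect the actual K2exchange interfaces; the stub provider stands in so the sketch runs without credentials:

```python
from typing import Protocol

class LLMProvider(Protocol):
    """Minimal provider-agnostic contract (hypothetical)."""
    def complete(self, prompt: str) -> str: ...

class EchoProvider:
    """Stub provider used here so the sketch runs without any real backend."""
    def complete(self, prompt: str) -> str:
        return f"[stub] {prompt}"

def classify_intent(llm: LLMProvider, question: str) -> str:
    # One example of an invocation point in the agent workflow:
    # asking the model to classify the user's intent.
    return llm.complete(f"Classify the intent of: {question}")
```

Because the workflow depends only on the `complete` contract, a provider integration (OpenAI, Anthropic, AWS Bedrock, Vertex AI) can be swapped in without changing the calling flow.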
The aifusion framework supports vector databases for semantic search, enabling Retrieval-Augmented Generation (RAG). This is essential for organizations with unstructured data such as knowledge base (KB) articles, corporate procedures, and policy documents.
Vector data stores can reside in Fabric storage, where they are embedded as tables inside LUIs. These vector stores are SQLite-based, with dedicated tables in a special format.
Fabric's built-in vector store is mainly recommended for:
For complex documents, such as tariff plans, device support guides, or marketing materials, consider using dedicated vector database services, such as AWS Bedrock Knowledge Bases.
The GenAI Data Fusion platform, aifusion, provides actors and embedding interfaces to interact with vector stores.
For additional information, see Vector Data Stores.
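The retrieval half of RAG boils down to embedding the query and ranking stored documents by similarity. The following is a self-contained toy sketch: the bag-of-letters `embed` function is a deliberately crude stand-in for a real embedding model, and none of these names correspond to actual aifusion actors or interfaces:

```python
import math

def embed(text: str) -> list:
    # Toy bag-of-letters embedding; real systems use model-generated
    # embeddings stored in a vector data store.
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a: list, b: list) -> float:
    # Cosine similarity between two vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, documents: list, top_k: int = 2) -> list:
    # Rank documents by similarity to the query and keep the best matches;
    # the retrieved text is then injected into the LLM prompt as context.
    qv = embed(query)
    ranked = sorted(documents, key=lambda d: cosine(qv, embed(d)), reverse=True)
    return ranked[:top_k]
```

In a production setup, the similarity search runs inside the vector store itself (Fabric's embedded SQLite-based tables or an external service), not in application code as shown here.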
The Metrics/Assurance DB collects data for tracking, analysis, and optimization across three perspectives:
Supported Databases:
The collected data powers customizable dashboards for tracking and improving the solution. This database also supports the Evaluation platform for testing and validation.
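As a rough illustration of this kind of metrics collection, the sketch below records per-session measurements into a SQLite table. The schema and column names are invented for this example and are not the actual Metrics/Assurance DB schema:

```python
import sqlite3

def init_metrics(conn: sqlite3.Connection) -> None:
    # Hypothetical metrics table: one row per recorded measurement.
    conn.execute(
        """CREATE TABLE IF NOT EXISTS agent_metrics (
               session_id TEXT,
               metric     TEXT,
               value      REAL,
               recorded   TEXT DEFAULT CURRENT_TIMESTAMP
           )"""
    )

def record(conn: sqlite3.Connection, session_id: str,
           metric: str, value: float) -> None:
    # Append a measurement; dashboards aggregate over these rows.
    conn.execute(
        "INSERT INTO agent_metrics (session_id, metric, value) "
        "VALUES (?, ?, ?)",
        (session_id, metric, value),
    )
```

Rows accumulated this way can be aggregated per session, per metric, or over time, which is the shape of data a tracking dashboard or an evaluation run typically consumes.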
The aifusion web app provides tools that accompany the full agent framework and agent-builder lifecycle:
This Trace pane is available in addition to the comprehensive debugging capabilities and visibility provided by Broadway flows and Java code within the Studio.
The Pipeline is an accompanying tool used to run regressions on these test cases.
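A regression run of the kind described above can be sketched as replaying saved test cases against the agent and comparing actual answers to the expected ones. This is a hypothetical illustration; `run_regression` and the test-case shape are invented for this sketch and are not the Pipeline tool's actual API:

```python
def run_regression(test_cases: list, agent) -> list:
    """Replay saved cases against `agent` (a question -> answer callable)."""
    results = []
    for case in test_cases:
        actual = agent(case["question"])
        results.append({
            "question": case["question"],
            "expected": case["expected"],
            "actual": actual,
            # Exact-match comparison; real evaluation often uses
            # fuzzier scoring (e.g., LLM-based grading).
            "passed": actual == case["expected"],
        })
    return results
```

Failures surfaced by such a run point at the cases to replay in the Trace pane or debug through the Broadway flows in the Studio.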