Event Routing
AgeDigitalTwins supports real-time event routing to various external systems when digital twins, relationships, or models are created, updated, or deleted. This enables you to build reactive systems, data pipelines, and analytics solutions.
Overview
Event routing in AgeDigitalTwins works similarly to Azure Digital Twins, with the following key features:
- Real-time streaming: Events are captured and routed in near real-time using PostgreSQL logical replication
- CloudEvents format: All events conform to the CloudEvents specification
- Multiple sinks: Route events to Kafka, Azure Data Explorer (Kusto), MQTT, and more
- Event filtering: Configure which events are routed to which sinks
- Two event types: Event Notifications and Data History events
Supported Event Sinks
Kafka / Azure Event Hubs
Stream events to Apache Kafka or Azure Event Hubs for real-time processing and integration with downstream systems.
Azure Data Explorer (Kusto)
Send events directly to Azure Data Explorer for analytics and time-series analysis, bypassing the need for intermediate Event Hubs.
MQTT
Route events to MQTT brokers for lightweight messaging and IoT scenarios. Uses structured CloudEvents format over MQTT.
Configuration
Configure event routing in your application settings. The configuration pairs a list of event sinks with a list of event routes.
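The exact schema depends on your deployment, so treat the following as a minimal sketch: apart from `brokerList`, which the Kafka notes later on this page reference, the key names are illustrative.

```json
{
  "EventSinks": [
    {
      "name": "eventhubs-sink",
      "type": "kafka",
      "brokerList": "<namespace>.servicebus.windows.net:9093",
      "topic": "twin-events"
    }
  ],
  "EventRoutes": [
    {
      "sinkName": "eventhubs-sink",
      "eventFormat": "EventNotification"
    }
  ]
}
```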
Federated Authentication for Event Sinks (Kusto & Event Hubs)
AgeDigitalTwins uses `DefaultAzureCredential` for authenticating to Azure services like Kusto (ADX) and Event Hubs (via Kafka). For secure, automated integration, we recommend using Azure AD Workload Identity Federation (OIDC) with a service principal.
1. Create a Service Principal in Your Azure Tenant
You (the customer) should create a service principal (app registration) in your own Azure AD tenant:
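One way to do this is with the Azure CLI (the display name below is your choice):

```bash
# Create an app registration plus service principal in your tenant
az ad sp create-for-rbac --name "agedigitaltwins-eventsink"

# Note the appId (client ID) and tenant values from the output;
# you will need them for the role assignments below
```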
Or via the portal:
- Go to Azure Active Directory → App registrations → New registration
2. Federate the Service Principal with AgeDigitalTwins
We will provide you with the OIDC issuer URL for our cluster (e.g., https://<your-agedt-cluster>/oidc).
Add a federated credential to your app registration:
- Go to your app registration in Azure Portal
- Select Certificates & secrets → Federated credentials → Add credential
- Set the Issuer to the OIDC issuer URL we provide
- Set the Subject to the workload identity you want to allow (e.g., system:serviceaccount:<namespace>:<serviceaccount>)
- Save
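The same federated credential can be added with the Azure CLI; a sketch, using the issuer and subject placeholders from the steps above (`api://AzureADTokenExchange` is the standard audience for workload identity federation):

```bash
# Describe the federated credential
cat > credential.json <<'EOF'
{
  "name": "agedigitaltwins-federation",
  "issuer": "https://<your-agedt-cluster>/oidc",
  "subject": "system:serviceaccount:<namespace>:<serviceaccount>",
  "audiences": ["api://AzureADTokenExchange"]
}
EOF

# Attach it to your app registration
az ad app federated-credential create --id <app-object-id> --parameters credential.json
```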
3. Assign Roles to the Service Principal
- For Kusto: Assign the service principal the Contributor or Ingestor role on your Kusto cluster/database.
- For Event Hubs: Assign the service principal the Azure Event Hubs Data Sender role on the Event Hub namespace.
Example (Kusto):
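A database-level Ingestor assignment can be made with a Kusto management command; the database name, client ID, and tenant ID are placeholders:

```kusto
// Run against your Kusto database: grant the service principal ingest rights
.add database ['DigitalTwins'] ingestors ('aadapp=<app-client-id>;<tenant-id>') 'AgeDigitalTwins event sink'
```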
Example (Event Hubs):
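With the Azure CLI, the built-in role can be assigned at the namespace scope:

```bash
az role assignment create \
  --assignee <app-client-id> \
  --role "Azure Event Hubs Data Sender" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.EventHub/namespaces/<namespace>"
```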
4. Configure AgeDigitalTwins
No secrets or credentials need to be stored in AgeDigitalTwins. The platform will use the federated identity to obtain tokens via `DefaultAzureCredential`.
Example event sink config:
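A sketch of a sink configuration; note that it carries endpoints only, no secrets. Key names are illustrative except `brokerList` and `OAUTHBEARER`, which the per-sink notes below reference:

```json
{
  "EventSinks": [
    {
      "name": "adx-sink",
      "type": "kusto",
      "ingestionUri": "https://ingest-<cluster>.<region>.kusto.windows.net",
      "database": "DigitalTwins"
    },
    {
      "name": "eventhubs-sink",
      "type": "kafka",
      "brokerList": "<namespace>.servicebus.windows.net:9093",
      "topic": "twin-events",
      "saslMechanism": "OAUTHBEARER"
    }
  ]
}
```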
Event Routes
You can define event routes to control which events go to which sinks. Example:
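A sketch, with route and key names illustrative; the two `eventFormat` values correspond to the Event Notifications and Data History event types described below:

```json
{
  "EventRoutes": [
    {
      "sinkName": "eventhubs-sink",
      "eventFormat": "EventNotification"
    },
    {
      "sinkName": "adx-sink",
      "eventFormat": "DataHistory"
    }
  ]
}
```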
Migration Note
If you previously used managed identities or connection strings with Azure Digital Twins, switch to federated credentials for secure, passwordless authentication.
Event Types
Event Notifications
Event notifications are fired whenever a digital twin, relationship, or model is created, updated, or deleted. These events contain the current state of the entity.
Twin Events
Twin Create:
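A structured CloudEvents envelope for a twin creation might look like the following; the `type` value mirrors the Azure Digital Twins naming convention and, like the twin itself, is illustrative:

```json
{
  "specversion": "1.0",
  "id": "7f9f4a2e-3a1b-4c7d-9a52-0e6a1c2b3d4e",
  "source": "my-agedt-instance",
  "subject": "room-1",
  "type": "Microsoft.DigitalTwins.Twin.Create",
  "time": "2024-01-01T12:00:00Z",
  "datacontenttype": "application/json",
  "data": {
    "$dtId": "room-1",
    "$metadata": { "$model": "dtmi:example:Room;1" },
    "temperature": 21.5
  }
}
```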
Twin Update:
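Per the note above, the payload carries the twin's current state after the update (same caveats as the create example):

```json
{
  "specversion": "1.0",
  "id": "2b6c8d0e-5f7a-4b9c-8d1e-3f5a7b9c0d2e",
  "source": "my-agedt-instance",
  "subject": "room-1",
  "type": "Microsoft.DigitalTwins.Twin.Update",
  "time": "2024-01-01T12:05:00Z",
  "datacontenttype": "application/json",
  "data": {
    "$dtId": "room-1",
    "$metadata": { "$model": "dtmi:example:Room;1" },
    "temperature": 22.0
  }
}
```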
Twin Delete:
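Same envelope with a delete type; whether the payload carries the twin's last known state or only its ID may vary by deployment, so the body below is illustrative:

```json
{
  "specversion": "1.0",
  "id": "9c1d3e5f-7a8b-4c6d-9e0f-1a2b3c4d5e6f",
  "source": "my-agedt-instance",
  "subject": "room-1",
  "type": "Microsoft.DigitalTwins.Twin.Delete",
  "time": "2024-01-01T12:10:00Z",
  "datacontenttype": "application/json",
  "data": {
    "$dtId": "room-1"
  }
}
```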
Relationship Events
Relationship Create:
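The relationship payload fields below follow the Azure Digital Twins conventions and are illustrative:

```json
{
  "specversion": "1.0",
  "id": "4e6f8a0b-1c2d-4e5f-8a9b-0c1d2e3f4a5b",
  "source": "my-agedt-instance",
  "subject": "room-1/contains/sensor-1",
  "type": "Microsoft.DigitalTwins.Relationship.Create",
  "time": "2024-01-01T12:02:00Z",
  "datacontenttype": "application/json",
  "data": {
    "$relationshipId": "contains-sensor-1",
    "$relationshipName": "contains",
    "$sourceId": "room-1",
    "$targetId": "sensor-1"
  }
}
```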
Relationship Update:
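Same envelope with an update type and, per the note above, the relationship's current state (illustrative, including the example property):

```json
{
  "specversion": "1.0",
  "id": "6a8b0c2d-3e4f-4a5b-8c9d-0e1f2a3b4c5d",
  "source": "my-agedt-instance",
  "subject": "room-1/contains/sensor-1",
  "type": "Microsoft.DigitalTwins.Relationship.Update",
  "time": "2024-01-01T12:06:00Z",
  "datacontenttype": "application/json",
  "data": {
    "$relationshipId": "contains-sensor-1",
    "$relationshipName": "contains",
    "$sourceId": "room-1",
    "$targetId": "sensor-1",
    "installedOn": "2024-01-01"
  }
}
```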
Data History Events
Data history events provide detailed property-level change tracking for analytics and auditing purposes.
Property Events
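Each property change produces one record. A sketch of a single record; the field names mirror the Azure Digital Twins data history property schema and are illustrative:

```json
{
  "timeStamp": "2024-01-01T12:05:00Z",
  "sourceTimeStamp": "2024-01-01T12:04:58Z",
  "serviceId": "my-agedt-instance",
  "id": "room-1",
  "modelId": "dtmi:example:Room;1",
  "key": "temperature",
  "value": 22.0
}
```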
Lifecycle Events
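Lifecycle records track the creation and deletion of the entities themselves. A sketch, with the same caveats as above:

```json
{
  "twinId": "room-1",
  "action": "Create",
  "timeStamp": "2024-01-01T12:00:00Z",
  "serviceId": "my-agedt-instance",
  "modelId": "dtmi:example:Room;1"
}
```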
CloudEvents vs Azure Digital Twins Format
AgeDigitalTwins uses the official CloudEvents specification for event formatting. This differs from Azure Digital Twins Event Hub events, particularly for Kafka/Event Hubs integration:
| Azure Digital Twins | AgeDigitalTwins CloudEvents | Description |
|---|---|---|
| `cloudEvents:subject` | `ce_subject` | Kafka header format only |
| `cloudEvents:type` | `ce_type` | Kafka header format only |
| `cloudEvents:source` | `ce_source` | Kafka header format only |
| Custom properties | Standard CloudEvents envelope | Consistent event structure |
| Event Hub specific | CloudEvents Kafka binding | Uses official CloudEvents library |
Key Differences for Kafka/Event Hubs:
- CloudEvents properties are prefixed with `ce_` in Kafka headers (e.g., `ce_subject`, `ce_type`, `ce_source`)
- Uses binary content mode for Kafka (more efficient than structured mode)
- Follows the CloudEvents Kafka binding specification
- Other sinks (Kusto, MQTT) use the standard structured CloudEvents format without `ce_` prefixes
Why the `ce_` prefix?
The CloudEvents Kafka binding uses the `ce_` prefix in binary mode to distinguish CloudEvents attributes from regular Kafka headers, as defined in the CloudEvents Kafka Protocol Binding.
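As an illustration, a binary-mode Kafka record carries the CloudEvents attributes as headers and only the event payload in the value; all values shown here are hypothetical:

```text
Kafka record (CloudEvents binary mode)

headers:
  ce_specversion : 1.0
  ce_id          : 2b6c8d0e-5f7a-4b9c-8d1e-3f5a7b9c0d2e
  ce_source      : my-agedt-instance
  ce_type        : Microsoft.DigitalTwins.Twin.Update
  ce_subject     : room-1
  content-type   : application/json

value:
  { "$dtId": "room-1", "$metadata": { "$model": "dtmi:example:Room;1" }, "temperature": 22.0 }
```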
Event Routing Configuration
Kafka Configuration
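A sketch of a Kafka sink entry; `brokerList` and `OAUTHBEARER` appear in the notes below, while the remaining key names are illustrative. The port is omitted deliberately to show the automatic 9093 default:

```json
{
  "name": "eventhubs-sink",
  "type": "kafka",
  "brokerList": "<namespace>.servicebus.windows.net",
  "topic": "twin-events",
  "saslMechanism": "OAUTHBEARER"
}
```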
Notes:
- For Azure Event Hubs, use `OAUTHBEARER` with Azure credentials
- Port 9093 is automatically appended if not specified in `brokerList`
- SASL/SSL is automatically configured for secure communication
Kusto Configuration
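A sketch of a Kusto sink entry; all key names are illustrative, and the two table keys stand in for the optional table names mentioned in the notes below:

```json
{
  "name": "adx-sink",
  "type": "kusto",
  "ingestionUri": "https://ingest-<cluster>.<region>.kusto.windows.net",
  "database": "DigitalTwins",
  "propertyEventsTable": "AdtPropertyEvents",
  "lifecycleEventsTable": "AdtLifecycleEvents"
}
```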
Notes:
- Uses queued ingestion for optimal performance
- Automatically creates JSON ingestion mappings for each event type
- Table names are optional and will use defaults if not specified
MQTT Configuration
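A sketch of an MQTT sink entry (key names are illustrative):

```json
{
  "name": "mqtt-sink",
  "type": "mqtt",
  "broker": "mqtt.example.com",
  "port": 8883,
  "topic": "agedt/events",
  "protocolVersion": "5.0.0"
}
```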
Notes:
- Uses structured CloudEvents format for MQTT messages
- Supports MQTT v3.1.0, v3.1.1, and v5.0.0
- Automatic reconnection on connection loss
Performance and Reliability
- Batching: Events are processed in configurable batches (default 50 events) for optimal performance
- PostgreSQL Logical Replication: Uses PostgreSQL's built-in logical replication for real-time event capture
- Replication Slots: Managed replication slots ensure no event loss during restarts or failovers
- Connection Management: Enhanced timeout settings and automatic reconnection for high-load scenarios
- Health Monitoring: Built-in health checks monitor replication connection status via the `IsHealthy` property
- Graceful Degradation: Service continues operating even if some sinks are unavailable
- Error Handling: Individual event failures don't stop batch processing
- TCP Keep-Alive: Configured for reliable long-running connections (30-second intervals)
Event Processing Architecture
The event routing system consists of two main components:
- Replication Producer: Captures changes from PostgreSQL using logical replication and queues events
- Event Consumer: Processes queued events in batches and routes them to configured sinks
Events flow through the following stages:
- Database changes trigger logical replication messages
- Changes are converted to `EventData` objects and queued
- Consumer processes events in batches and converts them to CloudEvents
- CloudEvents are routed to configured sinks based on event routes
- Each sink handles delivery with appropriate retry logic