Broadway has a group of built-in Actors that manage Pub / Sub asynchronous message handling. These Actors belong to the queue category and they are:
Message provider types supported in Broadway are:
The queue category Actors enable the Pub / Sub services functionality of the supported message providers; their input arguments correspond to the capabilities of these providers. Some input arguments are relevant only to Kafka and some only to JMS. For example:
Publisher and Subscriber applications must be defined in Fabric as an Interface and then be set in the Actor's interface input argument.
The topic, group_id and a few other input arguments have a default configuration at the interface level and can therefore be left empty in the Actor. However, when a value is defined in the Actor, it overrides the value defined in the interface for that flow.
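The override rule can be sketched as a simple merge, where an empty Actor argument falls back to the interface default. The argument values below are hypothetical, for illustration only:

```python
# Interface-level defaults vs. values set on the Actor (hypothetical names/values).
interface_defaults = {"topic": "default-topic", "group_id": "fabric-group"}
actor_args = {"topic": "customer-events", "group_id": None}  # group_id left empty

# A value defined on the Actor overrides the interface default;
# empty (None) arguments fall back to the interface configuration.
effective = {k: actor_args.get(k) or interface_defaults[k] for k in interface_defaults}
print(effective)  # {'topic': 'customer-events', 'group_id': 'fabric-group'}
```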
The Subscribe Actor should always listen to the same topic, whereas the Publish Actor can send messages to different topics; its topic argument can therefore be overridden during the flow. The Subscribe Actor can listen to multiple topics by using a regex in the topic argument.
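Regex topic subscription can be pictured as matching the pattern against the available topic names. This is a conceptual sketch, not Broadway's actual matching code; the topic names are made up:

```python
import re

def topics_matching(pattern: str, available_topics: list) -> list:
    """Return the topics a subscriber with a regex topic argument would listen to."""
    rx = re.compile(pattern)
    return [t for t in available_topics if rx.fullmatch(t)]

topics = ["orders.us", "orders.eu", "billing.us"]
print(topics_matching(r"orders\..*", topics))  # ['orders.us', 'orders.eu']
```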
Arguments not supported by the message provider can be left empty; they are ignored. For example, the batch size is set by the max_batch_records input argument. This parameter is ignored by interfaces that do not support batches (such as JMS), which treat every batch as having a size of 1.
The message type to be processed by the Broadway Pub / Sub functionality must be aligned with the Data type defined on the Interface and is limited to: String, byte[], JSON, long. The message type of an in-memory broker is not limited to any specific types.
Starting from V6.5.1, the transaction_mode input argument is added to the Publish Actor. It determines how the Publisher handles transactions on supported interfaces (currently Kafka only). In Async transaction mode (the default), messages are sent asynchronously and Kafka sends an acknowledgement of success or failure only on commit. Note that Async transaction mode applies only when the Publish Actor runs inside a transaction.
A timeout must be defined for the Subscribe Actor to indicate how long to wait for a new message. If the timeout elapses, the collection ends.
The Subscribe Actor has two timeout related settings:
When debugging the flow, the Subscribe Actor waits only 1 second to prevent the Debug run from getting stuck. In a regular run, the timeout can be set to the required wait time. It can also be set to -1, meaning an infinite wait for messages.
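The timeout semantics can be sketched with Python's standard in-memory queue: collection continues until no new message arrives within the timeout, and -1 is treated as an infinite (blocking) wait. This models the behavior described above, not Broadway's implementation:

```python
import queue

def subscribe(q: queue.Queue, timeout_sec: float) -> list:
    """Collect messages until the timeout elapses with no new message.
    A timeout of -1 means an infinite wait for the next message."""
    messages = []
    while True:
        try:
            if timeout_sec == -1:
                msg = q.get()                   # infinite wait
            else:
                msg = q.get(timeout=timeout_sec)
        except queue.Empty:
            break                               # timeout elapsed: collection ends
        messages.append(msg)
    return messages

q = queue.Queue()
for m in ("a", "b", "c"):
    q.put(m)
print(subscribe(q, timeout_sec=0.1))  # ['a', 'b', 'c']
```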
The Subscribe Actor sends an acknowledgement to the Pub / Sub service for each message received:
Check out message-broker-pubsub.flow for Pub / Sub examples. To do so, go to Actions > Examples in the Main menu.
The following section of the example flow shows how to publish messages to an in-memory queue using the Publish Actor and then read them from the queue using the Subscribe Actor.
The messages are read in batches of 100 by default. To change the batch size, set max_batch_records to the required number.
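The publish-then-read-in-batches pattern can be sketched with a plain in-memory queue, with max_batch_records modeled as a simple parameter. This is an illustrative analogy, not Broadway code:

```python
import queue

def publish(q: queue.Queue, messages) -> None:
    """Publish each message to the in-memory queue."""
    for m in messages:
        q.put(m)

def subscribe_batches(q: queue.Queue, max_batch_records: int = 100):
    """Drain the queue, yielding messages in batches of up to max_batch_records."""
    batch = []
    while not q.empty():
        batch.append(q.get())
        if len(batch) == max_batch_records:
            yield batch
            batch = []
    if batch:
        yield batch  # final, possibly partial batch

q = queue.Queue()
publish(q, range(250))
sizes = [len(b) for b in subscribe_batches(q, max_batch_records=100)]
print(sizes)  # [100, 100, 50]
```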
Example 1 - Retrieve from DB and Publish to Kafka
The following example shows a Publish flow that includes a transaction. Such a flow performs better inside a transaction because, in Async transaction mode, the acknowledgement is performed only at commit.
Example 2 - Retrieve Data in Batches and Publish to Kafka
The following example shows a flow where the data is retrieved in batches using the SubscribeBatch Actor. The flow iterates over the messages of each batch, performing the required logic and publishing the results using the Publish Actor.
Since the flow includes a transaction and uses Async transaction mode, the commit is performed after completing each batch, before moving to the next one. This behavior enables quick processing of messages: the messages are published to Kafka asynchronously and the acknowledgement is performed once for the whole batch, rather than message by message.
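The per-batch commit behavior can be modeled with a toy producer whose asynchronous sends are buffered and acknowledged only on commit. All class and method names here are hypothetical, chosen just to mirror the described flow:

```python
class AsyncProducer:
    """Toy model of Async transaction mode: sends are buffered and
    acknowledged once per commit, not message by message."""
    def __init__(self):
        self.pending = []
        self.acknowledged = []
        self.commits = 0

    def publish(self, msg):
        self.pending.append(msg)  # asynchronous send: no acknowledgement yet

    def commit(self):
        self.acknowledged.extend(self.pending)  # one ack for the whole batch
        self.pending = []
        self.commits += 1

producer = AsyncProducer()
batches = [[1, 2, 3], [4, 5]]
for batch in batches:
    for msg in batch:
        producer.publish(msg)  # per-message logic would run here
    producer.commit()          # commit once per batch, before the next one

print(producer.commits, producer.acknowledged)  # 2 [1, 2, 3, 4, 5]
```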
Example 3 - Retrieve Data in Batches and Load to DB
The behavior is similar to the previous example when, at the end of the flow, the messages are loaded into a DB. The commit to the database is performed once per batch, before moving to the next one.
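The per-batch database commit can be sketched with SQLite: each batch is inserted in full and committed once before the next batch is processed. The table and batch contents are made up for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE messages (payload TEXT)")

batches = [["a", "b", "c"], ["d", "e"]]
for batch in batches:
    # Load the whole batch, then commit once before moving to the next batch.
    conn.executemany("INSERT INTO messages (payload) VALUES (?)",
                     [(m,) for m in batch])
    conn.commit()

count = conn.execute("SELECT COUNT(*) FROM messages").fetchone()[0]
print(count)  # 5
```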