Regardless of the synchronization type (background or on-demand), Fabric provides two different options for synchronizing Reference table data:
Both the Update and Snapshot options can work in one of the following modes:
For example, if an update consists of 2500 insert commands, the 2500 inserts are divided into three bulks of 1000, 1000, and 500 rows; each bulk is then written to Cassandra.
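The bulking step above can be sketched in a few lines. This is a minimal illustration; the helper name and the fixed bulk size of 1000 are assumptions for the sketch, not Fabric API:

```python
def chunk_rows(rows, bulk_size=1000):
    """Split a list of pending insert commands into bulks of at most bulk_size rows."""
    return [rows[i:i + bulk_size] for i in range(0, len(rows), bulk_size)]

# 2500 pending inserts are divided into bulks of 1000, 1000 and 500.
bulks = chunk_rows(list(range(2500)))
print([len(b) for b in bulks])  # [1000, 1000, 500]
```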
This mode is selected by default when a row update to the reference table is required. In this mode, updates are performed as Create/Update/Delete SQL queries directly on the table itself. Each node executes the change on its local SQLite CommonDB copy as a single logical transaction. A message is then sent to Kafka with the content of the update for all other nodes to execute. Note that if the update exceeds 1000 rows, Cassandra is also involved, as described later in this article.
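As a rough sketch of the per-node step, the following applies a batch of statements to a local SQLite copy as one logical transaction. The table and function names are illustrative, and the Kafka publishing step is omitted:

```python
import sqlite3

def apply_reference_update(conn, statements):
    """Apply a batch of insert/update/delete statements to the local
    SQLite CommonDB copy as a single logical transaction (sketch)."""
    with conn:  # sqlite3 context manager: commit on success, rollback on error
        for sql, params in statements:
            conn.execute(sql, params)

# Usage: create a hypothetical reference table and apply a two-row update atomically.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ref_t (id INTEGER PRIMARY KEY, val TEXT)")
apply_reference_update(conn, [
    ("INSERT INTO ref_t VALUES (?, ?)", (1, "a")),
    ("INSERT INTO ref_t VALUES (?, ?)", (2, "b")),
])
print(conn.execute("SELECT COUNT(*) FROM ref_t").fetchone()[0])  # 2
```

Wrapping the whole batch in one transaction means a failed statement rolls back the entire update, so a node never exposes a half-applied reference table.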
When an update happens in snapshot mode, the node that requested the update takes the data from Kafka and/or Cassandra, and not directly from the node that prepared the snapshot.
The snapshot mode will only be triggered by one of the following actions:
Each node performs the following snapshot synchronization if instructed in the Kafka message.
Two types of transactions can be differentiated when updating a CommonDB reference table:
A different flow occurs for each transaction, depending on whether the update content exceeds 1000 rows.
The following illustration shows how a Synchronization Job (Sync Job 1) publishes an update notification and a short message content on the Kafka Queue dedicated to Table 1, subsequently causing all listening nodes in the cluster to write the update directly from Kafka to their own SQLite CommonDB copy.
The following illustration shows how a Synchronization Job (Sync Job 2) publishes an update message in the Kafka Queue dedicated to Table T, and how it writes the long message content in Cassandra. This, subsequently, causes any listening node within the cluster to write the update's content directly from Cassandra into its own SQLite CommonDB copy.
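The two flows above can be summarized as a routing decision. The sketch below uses illustrative field names (not Fabric's actual message schema), with the 1000-row threshold taken from the flows described above:

```python
BULK_THRESHOLD = 1000  # assumed row limit for inline Kafka payloads

def route_update(table, rows):
    """Build the Kafka notification for an update, choosing where its
    content travels (field names are illustrative, not Fabric's)."""
    if len(rows) <= BULK_THRESHOLD:
        # Short content: embed the rows directly in the Kafka message payload.
        return {"table": table, "payload": rows}
    # Long content: the rows would be written to Cassandra in bulks;
    # the Kafka message then carries only a pointer to that content.
    return {"table": table, "payload_ref": "cassandra"}

print("payload" in route_update("T", [1] * 500))       # True
print("payload_ref" in route_update("T", [1] * 1500))  # True
```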
Any transaction involving the common table is performed in asynchronous mode, meaning that the updated data cannot be seen until it has been committed and Fabric has updated the relevant CommonDB table. Moreover, each node performs the update in its own time. The transaction message is sent to Kafka, while its content is carried either within the Kafka message payload or in a Cassandra keyspace, depending on its size.
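On the consuming side, each node's handling of a sync message might look like the following sketch, where stub callbacks stand in for the real Kafka, Cassandra, and SQLite plumbing (the structure is illustrative, not Fabric's actual interfaces):

```python
def handle_sync_message(msg, fetch_from_cassandra, apply_to_sqlite):
    """Apply one reference-table update on a node, at its own pace (sketch)."""
    if "payload" in msg:
        rows = msg["payload"]  # short update: content rode inside the Kafka message
    else:
        rows = fetch_from_cassandra(msg["payload_ref"])  # long update: fetch content
    apply_to_sqlite(rows)  # write into the node's local SQLite CommonDB copy

# Usage with stubs in place of the real infrastructure:
applied = []
handle_sync_message({"payload": [(1, "a")]},
                    fetch_from_cassandra=lambda ref: [],
                    apply_to_sqlite=applied.extend)
print(applied)  # [(1, 'a')]
```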