Fabric enables storing big LUIs without size limitations by splitting data into chunks. The SQLite file's chunks are written into the System DB entity_chunks table in parallel.
When the System DB is Cassandra, the Cassandra Loader is used. The Loader's configuration for the parallel save can be set in the config.ini file by adding a section named [LU type]_cassandra_entity_storage for each LU type. The parameters under this section are the same as the Cassandra Loader definition parameters (for example, Loader execution mode).
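As a sketch, for a hypothetical LU type named CUSTOMER, such a section might look like the following. The section naming convention is taken from this page; the parameter name and value inside the section are illustrative placeholders, not confirmed Fabric settings:

```ini
; Hypothetical example: parallel-save settings for an LU type named CUSTOMER.
; The section name follows the [LU type]_cassandra_entity_storage convention.
[CUSTOMER_cassandra_entity_storage]
; Loader execution mode (illustrative parameter name and value)
LOADER_EXECUTION_MODE=ASYNC
```

Each LU type that needs its own Loader settings gets its own section of this form; LU types without a dedicated section would fall back to the default Cassandra Loader definition.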
The LUI data is first written into the entity_chunks table; only after all chunks have been successfully written is the entity table populated.
The entity table includes the following:
The entity_chunks table includes the following data:
The chunk size is set using the config.ini file parameters, defined per node:
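As an illustration of a per-node setting, a chunk-size entry in config.ini might look like the following; the parameter name and value are hypothetical placeholders, since the actual parameter names are defined per node in the Fabric configuration:

```ini
; Hypothetical example: per-node chunk size for splitting the SQLite file.
; CHUNK_SIZE_BYTES is an illustrative placeholder name, not a confirmed parameter.
CHUNK_SIZE_BYTES=1048576
```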
When dealing with a large number of entity chunks, LUI Partitioning can be enabled by setting the following parameter in the config.ini file: ENABLE_PARTITIONED_MDB=true
Note that there is no upgrade path for existing projects. You must clean all data in Fabric and bring Fabric back up. It is recommended to turn this feature on when dealing with big LUIs that are split into multiple chunks.
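To enable the feature, the parameter named above is added to config.ini:

```ini
; Enable LUI Partitioning (parameter name taken from this page).
; Note: no upgrade path exists for existing projects - all Fabric data
; must be cleaned before bringing Fabric back up with this setting.
ENABLE_PARTITIONED_MDB=true
```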
When loading the chunks of big LUIs from the System DB into Fabric as part of the GET command, there is a trade-off between load performance and the memory allocated to the process. To improve load performance, you can define the number of threads that run in parallel. When setting the number of threads, you must also define the maximum memory allowed for the parallel load.
The config.ini parameters for configuring the above are:
These parameters are applicable only when LUI Partitioning is enabled.
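A sketch of what such a configuration could look like is shown below. The parameter names are hypothetical placeholders standing in for the actual config.ini parameters, which are not named on this page:

```ini
; Hypothetical example: parallel GET load of big-LUI chunks.
; Both parameter names below are illustrative placeholders.
; Number of threads reading entity chunks in parallel.
GET_PARALLEL_LOAD_THREADS=8
; Maximum memory (MB) the parallel load may use across all threads.
GET_PARALLEL_LOAD_MAX_MEMORY_MB=512
```

The trade-off described above is visible here: raising the thread count speeds up the load but requires a matching memory budget, since more chunks are held in memory at once.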