Cassandra has a 2 GB limit on blob values, which prevents writing large compressed SQLite LUI files into a Cassandra entity table as single blobs. Fabric enables storing big LUIs without a size limitation by splitting the data into chunks. The SQLite file's chunks are written into the Cassandra entity_chunks table in parallel using the Cassandra Loader.
The Loader configuration for the parallel save can be set in config.ini by adding a section named [LU type]_cassandra_entity_storage for each LU. The parameters under this section are the same as the Cassandra Loader definition parameters (for example, the Loader execution mode).
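As an illustrative sketch, a per-LU override for an LU named Customer might look like the following. The section name follows the [LU type]_cassandra_entity_storage convention above; the parameter key shown is a placeholder for a Cassandra Loader definition parameter and may differ in your Fabric version:

```ini
; Hypothetical example: Cassandra Loader overrides for the Customer LU.
; Section naming follows the [LU type]_cassandra_entity_storage convention.
[Customer_cassandra_entity_storage]
; Placeholder key - use the actual Cassandra Loader definition
; parameter names supported by your Fabric version.
LOADER_EXECUTION_MODE=ASYNC
```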
The LUI data is first written into the entity_chunks table; only after all chunks have been written successfully is the entity table populated.
The entity table includes the following data:
The entity_chunks table includes the following data:
The chunk size is set using the config.ini file parameters, defined per node:
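For illustration, a per-node chunk size setting might look like the following. Both the section placement and the parameter name here are assumptions, not confirmed config.ini keys; consult the configuration reference for your Fabric version:

```ini
; Hypothetical example: per-node chunk size for entity_chunks writes.
; The key name below is a placeholder, not a verified config.ini parameter.
[fabric]
ENTITY_CHUNK_SIZE_MB=100
```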
When dealing with a large number of entity chunks, LUI Partitioning can be enabled by setting the following in the config.ini file: ENABLE_PARTITIONED_MDB=true
Note, however, that there is no upgrade path for existing projects: you must clean all data in Fabric and then restart Fabric. It is recommended to turn this feature on when dealing with large LUIs that are split into multiple chunks.
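The setting itself is the one quoted above; only its section placement is assumed here (config.ini layouts vary by deployment):

```ini
; Enable LUI Partitioning (per the setting described above).
; Section placement is an assumption; check your deployment's config.ini.
[fabric]
ENABLE_PARTITIONED_MDB=true
```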
When loading the chunks of a big LUI from Cassandra into Fabric as part of the GET command, there is a trade-off between load performance and the memory allocated to the process. To improve load performance, you can define the number of threads that run in parallel. When setting the number of threads, you must also define the maximum memory that the parallel load is allowed to use.
The config.ini parameters to configure the above are:
These parameters are applicable only when LUI Partitioning is enabled.
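A sketch of how the thread-count and memory-cap pair described above might appear in config.ini. Both key names are illustrative placeholders, not confirmed parameters; the point is that the two settings are defined together, since more threads imply a larger memory budget for the parallel load:

```ini
; Hypothetical example: parallel chunk load tuning for the GET command.
; Key names are placeholders - look up the actual parameters for your
; Fabric version. Applicable only when ENABLE_PARTITIONED_MDB=true.
[fabric]
GET_CHUNKS_PARALLEL_THREADS=8
GET_CHUNKS_MAX_MEMORY_MB=2048
```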