Cassandra has a 2GB limit on blob values, which prevents writing a compressed SQLite LUI file into the Cassandra entity table as a single blob. Fabric enables storing big LUIs without a size limitation by splitting the data into chunks. The SQLite file's chunks are written into the Cassandra entity_chunks table in parallel using the Cassandra Loader.
The Loader configuration for the parallel save is done in the config.ini file by adding a section named [LU type]_cassandra_entity_storage for each LU. The parameters under this section are the same as the Cassandra Loader definition parameters (for example, the Loader execution mode).
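For illustration only, such a section for a hypothetical Customer LU could look like the sketch below; the parameter names and values are assumptions, and the actual Cassandra Loader definition parameters should be taken from the Cassandra Loader documentation:

    [Customer_cassandra_entity_storage]
    # Illustrative placeholders for the Cassandra Loader definition parameters
    LOADER_EXECUTION_MODE = ASYNC
    LOADER_THREADS = 4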
The LUI data is first written into the entity_chunks table; only after all chunks have been written successfully is the entity table populated.
The entity table includes the following data:
The entity_chunks table includes the following data:
The chunk size is set using the config.ini file parameters, defined per node:
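As a rough sketch with an assumed parameter name (the actual name should be taken from the Fabric configuration reference), a per-node chunk size setting could look like:

    [fabric]
    # Assumed parameter name; maximum size (in bytes) of each SQLite chunk
    # written to the entity_chunks table
    ENTITY_STORAGE_CHUNK_SIZE = 20971520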
When loading the chunks of a Big LUI from Cassandra into Fabric as part of the GET command, there is a trade-off between the performance of the load and the memory allocated to this process. To improve the load performance, you can define the number of threads to be executed in parallel. When setting the number of threads, you must also define the maximum memory allowed for the parallel load.
The config.ini parameters to configure the above are:
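As a sketch with assumed parameter names (the real names should be taken from the Fabric configuration reference):

    [fabric]
    # Assumed names: number of threads loading chunks in parallel during GET,
    # and the maximum memory (in MB) that the parallel load may use
    ENTITY_CHUNKS_LOAD_THREADS = 4
    ENTITY_CHUNKS_LOAD_MAX_MEMORY_MB = 512

Increasing the thread count speeds up the load of a Big LUI but raises peak memory usage, so the memory limit should be sized roughly in line with the number of threads and the configured chunk size.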
Another major performance improvement for handling a large number of entity chunks was introduced in release 6.5.4.
The parameters needed to configure Fabric to handle a large number of entity chunks are the following:
They are set in the config.ini file.
However, there is no upgrade path for existing projects: you must clean all data in Fabric and reload it. It is recommended to turn this feature on only when dealing with very large LUIs that are split into many chunks.
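A sketch of enabling this behavior in config.ini, with an assumed flag name (the actual parameter names are product-specific and are not listed here):

    [fabric]
    # Assumed flag name; enables the optimized handling of LUIs that are split
    # into many entity chunks. Existing Fabric data must be cleaned and reloaded.
    LARGE_NUMBER_OF_CHUNKS_OPTIMIZATION = true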