Troubleshoot hypercore

Resolve common issues with hypercore, columnstore conversion, and compression

Temporary file size limit exceeded when converting chunks to the columnstore

Message:

ERROR: temporary file size exceeds temp_file_limit

You might get this error when you convert a chunk to the columnstore, especially if the chunk is very large. Compression writes the compressed data to a new chunk table using temporary files, and the total size of temporary files a single process can use is capped by the temp_file_limit parameter. To work around this problem, increase temp_file_limit, and consider raising maintenance_work_mem so that less data spills to temporary files.
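For example, both settings can be raised for the current session before retrying the conversion. The values below are illustrative starting points, not recommendations:

```sql
-- Allow up to 50 GB of temporary file usage per process (-1 removes the limit)
SET temp_file_limit TO '50GB';
-- Give maintenance operations more working memory before spilling to disk
SET maintenance_work_mem TO '1GB';
```

You can also set these values per role or in postgresql.conf if the conversion runs from a background policy rather than an interactive session.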

Inefficient compress_chunk_time_interval configuration

When you configure compress_chunk_time_interval without setting the primary dimension (usually the time column) as the first column in orderby, TimescaleDB must convert chunks back to the rowstore before merging them, which makes merging less efficient. To avoid this, set the primary dimension of the chunk as the first column in orderby.
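As an illustration, for a hypothetical metrics hypertable partitioned on a time column, an efficient configuration might look like this (table and column names are assumptions):

```sql
ALTER TABLE metrics SET (
  timescaledb.compress,
  -- Primary dimension first in orderby, so merged chunks need no rowstore round trip
  timescaledb.compress_orderby = 'time DESC',
  timescaledb.compress_segmentby = 'device_id',
  -- Merge adjacent compressed chunks up to this interval
  timescaledb.compress_chunk_time_interval = '24 hours'
);
```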

Tuple decompression limit exceeded by operation

Message:

ERROR: tuple decompression limit exceeded by operation

When you insert, update, or delete tuples in chunks in the columnstore, it might be necessary to convert tuples back to the rowstore. This happens when you update existing tuples, or when constraints must be verified at insert time. If a single command triggers a large amount of rowstore conversion, you can run out of storage space. For this reason, there is a limit on the number of tuples you can decompress into the rowstore in a single command.

The limit can be increased or turned off (set to 0) like so:

-- set limit to a million tuples
SET timescaledb.max_tuples_decompressed_per_dml_transaction TO 1000000;
-- disable limit by setting to 0
SET timescaledb.max_tuples_decompressed_per_dml_transaction TO 0;
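If you want to lift the limit for a single operation only, you can use SET LOCAL inside a transaction so the override is discarded at commit. This is standard PostgreSQL behavior; the table in the example is hypothetical:

```sql
BEGIN;
-- Applies only to this transaction; the session default is restored afterwards
SET LOCAL timescaledb.max_tuples_decompressed_per_dml_transaction TO 0;
UPDATE metrics SET value = 0 WHERE device_id = 42;  -- hypothetical bulk update
COMMIT;
```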

Out of memory errors after enabling the columnstore

Message:

ERROR: out of memory
DETAIL: Failed on request of size 16777216 in memory context "ErrorContext".

By default, columnstore policies convert all uncompressed chunks to the columnstore. However, before converting a large backlog of chunks from the rowstore to the columnstore, best practice is to set maxchunks_to_compress to limit the number of chunks converted in each run of the policy. For example:

SELECT alter_job(j.job_id, config => j.config || '{"maxchunks_to_compress": 10}'::jsonb)
FROM timescaledb_information.jobs j
WHERE j.proc_name = 'policy_compression'
AND j.hypertable_name = '<hypertable_name>';

When all chunks have been converted to the columnstore, reset maxchunks_to_compress to 0 (unlimited):

SELECT alter_job(j.job_id, config => j.config || '{"maxchunks_to_compress": 0}'::jsonb)
FROM timescaledb_information.jobs j
WHERE j.proc_name = 'policy_compression'
AND j.hypertable_name = '<hypertable_name>';

User permissions do not allow chunks to be converted to columnstore or rowstore

Message:

ERROR: must be owner of hypertable "HYPERTABLE_NAME"

You might get this error if you attempt to compress a chunk into the columnstore, or decompress it back into the rowstore, with a non-privileged user account. To compress or decompress a chunk, your user account must have permissions that allow it to perform CREATE INDEX on the chunk. You can check the access privileges on the table with this command at the psql command prompt:

\dp <TABLE_NAME>

To resolve this problem, grant your user account the appropriate privileges with this command:

GRANT ALL PRIVILEGES
ON TABLE <TABLE_NAME>
TO <ROLE_NAME>;

For more information about the GRANT command, see the PostgreSQL documentation.

Reindex hypertables to fix large indexes

Message:

ERROR: invalid attribute number -6 for _hyper_2_839_chunk
CONTEXT: SQL function "hypertable_local_size" statement 1
PL/pgSQL function hypertable_detailed_size(regclass) line 26 at RETURN QUERY
SQL function "hypertable_size" statement 1
SQL state: XX000

You might see this error if your hypertable indexes have become very large. To resolve the problem, reindex the affected chunks with a command like:

REINDEX TABLE _timescaledb_internal._hyper_2_1523284_chunk;

For more information, see the hypertable documentation.

Low compression rate

Low compression rates are often caused by a high-cardinality segmentby column (segment key): the column you selected for grouping rows during compression has too many unique values, so only a few rows can be grouped into each batch. To achieve better compression, choose a segmentby column with lower cardinality.
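For example, for a hypothetical conditions table, segmenting by a low-cardinality column such as device_id typically compresses far better than segmenting by a near-unique column such as a UUID. The table and column names here are illustrative:

```sql
-- device_id has relatively few distinct values, so each segment holds many rows
-- and the per-column arrays within a batch compress well
ALTER TABLE conditions SET (
  timescaledb.compress,
  timescaledb.compress_segmentby = 'device_id',
  timescaledb.compress_orderby = 'time DESC'
);
```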