Troubleshoot jobs
Solutions to common errors when working with scheduled jobs
Scheduled jobs stop running
Your scheduled jobs might stop running for various reasons. On self-hosted TimescaleDB, you can fix this by restarting background workers:

```sql
SELECT _timescaledb_functions.start_background_workers();
```

On older TimescaleDB versions, the function lives in the `_timescaledb_internal` schema instead:

```sql
SELECT _timescaledb_internal.start_background_workers();
```

On Tiger Cloud and Managed Service for TimescaleDB, restart background workers by doing one of the following:
- Run `SELECT timescaledb_pre_restore()`, followed by `SELECT timescaledb_post_restore()`.
- Power the service off and on again. This might cause a downtime of a few minutes while the service restores from backup and replays the write-ahead log.
Failed to start a background worker
You might see this error message in the logs if background workers aren’t properly configured:
```
"<TYPE_OF_BACKGROUND_JOB>": failed to start a background worker
```

To fix this error, make sure that `max_worker_processes`, `max_parallel_workers`, and `timescaledb.max_background_workers` are properly set. `timescaledb.max_background_workers` should equal the number of databases plus the number of concurrent background workers. `max_worker_processes` should equal the sum of `timescaledb.max_background_workers` and `max_parallel_workers`.
For more information, see the configuration docs.
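As an illustration, the relationship between these settings might look like the following in `postgresql.conf`. The numbers are hypothetical, for a deployment with 2 databases and 8 concurrent background jobs; size them for your own workload:

```
timescaledb.max_background_workers = 10  # databases (2) + concurrent background jobs (8)
max_parallel_workers = 8                 # workers available for parallel queries
max_worker_processes = 18                # timescaledb.max_background_workers + max_parallel_workers
```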
Background jobs failing with “out of shared memory”
You might see this error when running compression, continuous aggregate refresh, or other background jobs that touch many chunks:
```
FATAL:  out of shared memory
HINT:  You might need to increase max_locks_per_transaction.
```

Despite the wording, this error is not caused by insufficient RAM. It means the PostgreSQL lock table is full. TimescaleDB acquires a lock on every chunk involved in a query or background job. When a hypertable has many chunks, these locks can exceed the default `max_locks_per_transaction` limit (usually 64).
Diagnose the issue
- Check the current setting:

  ```sql
  SELECT name, setting
  FROM pg_settings
  WHERE name = 'max_locks_per_transaction';
  ```

- Count chunks per hypertable:

  ```sql
  SELECT hypertable_name, num_chunks
  FROM timescaledb_information.hypertables
  ORDER BY num_chunks DESC;
  ```
Calculate the right value
For most workloads, use this formula:
```
max_locks_per_transaction = (2 × max_chunks_in_any_hypertable) / max_connections
```

The factor of 2 accounts for index locks. Round up and add headroom for future growth, because changing this parameter requires a database restart.
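As a sketch, you can also evaluate the formula directly from the catalog. This combines the two diagnostic queries above into one; it assumes a database where the `timescaledb_information.hypertables` view is available:

```sql
-- Estimate a starting value for max_locks_per_transaction.
-- Round up and add headroom before applying.
SELECT ceil(
         2 * max(num_chunks)::numeric
         / (SELECT setting::numeric
            FROM pg_settings
            WHERE name = 'max_connections')
       ) AS suggested_max_locks_per_transaction
FROM timescaledb_information.hypertables;
```

Treat the result as a lower bound rather than a final answer: it reflects your current chunk counts, not future growth.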
Apply the fix
Tiger Cloud: Adjust max_locks_per_transaction from Tiger Console under Database configuration → Advanced parameters. Search for the parameter, edit the value, and click Apply changes and restart. See Advanced parameters for details.
Self-hosted: Set max_locks_per_transaction in the postgresql.conf configuration file, then restart PostgreSQL.
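On self-hosted deployments, a minimal sketch of the change uses `ALTER SYSTEM` (which writes to `postgresql.auto.conf`) instead of editing `postgresql.conf` by hand; the value 256 is a hypothetical example:

```sql
-- Takes effect only after a full PostgreSQL restart,
-- not after a plain configuration reload.
ALTER SYSTEM SET max_locks_per_transaction = 256;
```

After running this, restart the server, for example with `pg_ctl restart`.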
For more information, see Transaction lock settings and the PostgreSQL lock management documentation.