Troubleshooting
Troubleshoot common problems with self-hosted TimescaleDB
Failed to start a background worker
You might see this error message in the logs if background workers aren’t properly configured:
"<TYPE_OF_BACKGROUND_JOB>": failed to start a background workerTo fix this error, make sure that max_worker_processes,
max_parallel_workers, and timescaledb.max_background_workers are properly
set. timescaledb.max_background_workers should equal the number of databases
plus the number of concurrent background workers. max_worker_processes should
equal the sum of timescaledb.max_background_workers and
max_parallel_workers.
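As a sketch of the formula above, assuming 4 databases, 8 concurrent background workers, and 8 parallel workers (all illustrative numbers), the settings could be applied with ALTER SYSTEM and a PostgreSQL restart:

```sql
-- Illustrative values: 4 databases + 8 concurrent background workers = 12
ALTER SYSTEM SET timescaledb.max_background_workers = 12;
ALTER SYSTEM SET max_parallel_workers = 8;
-- max_worker_processes = timescaledb.max_background_workers + max_parallel_workers
ALTER SYSTEM SET max_worker_processes = 20;
```

These parameters take effect only after a restart of the PostgreSQL server.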
For more information, see the configuration docs.
Log error: could not access file “timescaledb”
If PostgreSQL fails to start with this error in the logs, double-check that
the TimescaleDB files have been installed to the correct location. The
installation methods use pg_config to get PostgreSQL's location. However, if
you have multiple versions of PostgreSQL installed on the same machine, the
location pg_config points to may not be for the version you expect. To check
which version of PostgreSQL pg_config points to:
```shell
$ pg_config --version
PostgreSQL 12.3
```
If that is the correct version, double-check that the installation path is
the one you’d expect. For example, for PostgreSQL 11.0 installed via
Homebrew on macOS it should be /usr/local/Cellar/postgresql/11.0/bin:
```shell
$ pg_config --bindir
/usr/local/Cellar/postgresql/11.0/bin
```
If either of those checks does not show the version you expect, either
uninstall the incorrect version of PostgreSQL if you can, or update your
PATH environment variable so that the correct pg_config is found first,
by prepending its full path:
```shell
export PATH=/usr/local/Cellar/postgresql/11.0/bin:$PATH
```
Then, reinstall TimescaleDB and it should find the correct installation path.
Warnings during pg_dump migration
You might see these warnings during the migration process:
```
pg_dump: warning: there are circular foreign-key constraints on this table:
pg_dump: hypertable
pg_dump: You might not be able to restore the dump without using --disable-triggers or temporarily dropping the constraints.
pg_dump: Consider using a full dump instead of a --data-only dump to avoid this problem.
pg_dump: NOTICE: hypertable data are in the chunks, no data will be copied
DETAIL: Data for hypertables are stored in the chunks of a hypertable so COPY TO of a hypertable will not copy any data.
HINT: Use "COPY (SELECT * FROM <hypertable>) TO ..." to copy all data in hypertable, or copy each chunk individually.
WARNING: skipping "<TABLE OR INDEX>" --- only superuser can analyze it
```
You can safely ignore these warnings. The migration still completes successfully.
Permission error during pg_restore migration
The pg_restore tool tries to apply the TimescaleDB extension when it
copies your schema. This can cause a permissions error:
```
pg_restore: creating EXTENSION "timescaledb"
pg_restore: creating COMMENT "EXTENSION timescaledb"
pg_restore: while PROCESSING TOC:
pg_restore: from TOC entry 6239; 0 0 COMMENT EXTENSION timescaledb
pg_restore: error: could not execute query: ERROR: must be owner of extension timescaledb
```
If you already have the TimescaleDB extension installed, you can safely ignore this error.
Permission denied for table job_errors when running pg_dump
You might see this error when running pg_dump:
```
pg_dump: error: query failed: ERROR: permission denied for table job_errors
pg_dump: detail: Query was: LOCK TABLE _timescaledb_internal.job_errors IN ACCESS SHARE MODE
```
This error occurs when pg_dump tries to acquire a lock on the job_errors
table and the user does not have the required SELECT permission.
To resolve this issue, use a superuser account to grant the necessary
permissions to the user requiring the pg_dump tool.
Use this command to grant permissions to <TEST_USER>:
```sql
GRANT SELECT ON TABLE _timescaledb_internal.job_errors TO <TEST_USER>;
```
Errors when running pg_dump
You might see this notice when running pg_dump:
```
pg_dump: NOTICE: hypertable data are in the chunks, no data will be copied
DETAIL: Data for hypertables are stored in the chunks of a hypertable so COPY TO of a hypertable will not copy any data.
HINT: Use "COPY (SELECT * FROM <hypertable>) TO ..." to copy all data in hypertable, or copy each chunk individually.
```
You can safely ignore this notice. Your hypertable data is still accurately copied.
Errors after restoring from file dump
You might see this error after restoring from a file dump:
```
org.postgresql.util.PSQLException: ERROR: invalid INSERT on the root table of hypertable "_hyper_1_10_chunk"
```
When loading from a logical dump, make sure that you set timescaledb.restoring to true before loading
the dump.
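One way to do this, sketched here, is to use TimescaleDB's timescaledb_pre_restore() and timescaledb_post_restore() helper functions, which toggle timescaledb.restoring around the load:

```sql
-- Run in the target database before loading the logical dump;
-- this puts the session into restore mode (timescaledb.restoring = on).
SELECT timescaledb_pre_restore();

-- Load the dump here, for example with pg_restore or \i.

-- Run after the load completes to return to normal operation.
SELECT timescaledb_post_restore();
```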
Version mismatch when dumping and restoring a database
The PostgreSQL pg_dump command does not allow you to specify which version of
the extension to use when backing up. This can create problems if you have a
more recent version installed: for example, if you create the backup using an
older version of TimescaleDB, the restore uses the current version, without
giving you an opportunity to upgrade first.
You can work around this problem when you are restoring from backup by making sure the new PostgreSQL instance has the same extension version as the original database before you perform the restore. After the data is restored, you can upgrade the version of TimescaleDB.
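For example, you can check the extension version in the original database and create that exact version on the new instance before restoring. The version number below is a placeholder, not a recommendation:

```sql
-- On the original database: find the installed extension version.
SELECT extversion FROM pg_extension WHERE extname = 'timescaledb';

-- On the new instance: create the same version explicitly
-- ('2.11.2' is illustrative; use the version reported above).
CREATE EXTENSION timescaledb VERSION '2.11.2';

-- After the restore completes, upgrade to the latest installed version.
ALTER EXTENSION timescaledb UPDATE;
```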
Install or upgrade of TimescaleDB Toolkit fails
In some cases, when you create the TimescaleDB Toolkit extension, or upgrade it
with the ALTER EXTENSION timescaledb_toolkit UPDATE command, it might fail
with this error:
ERROR: extension "timescaledb_toolkit" has no update path from version "1.2" to version "1.3"This occurs if the list of available extensions does not include the version you are trying to upgrade to, and it can occur if the package was not installed correctly in the first place. To correct the problem, install the upgrade package, restart PostgreSQL, verify the version, and then attempt the update again.
- Check your package repository
If you’re installing Toolkit from a package, check your package manager’s local repository list. Make sure the TimescaleDB repository is available and contains Toolkit. Update your local repository list with
apt update or yum update.
- Restart PostgreSQL
Restart your PostgreSQL service.
- Verify the available version
Check that the right version of Toolkit is among your available extensions:
```sql
SELECT * FROM pg_available_extensions
WHERE name = 'timescaledb_toolkit';
```
The result should look like this:
```
-[ RECORD 1 ]-----+--------------------------------------------------------------------------------------
name              | timescaledb_toolkit
default_version   | 1.6.0
installed_version | 1.6.0
comment           | Library of analytical hyperfunctions, time-series pipelining, and other SQL utilities
```
- Retry the command
Retry CREATE EXTENSION or ALTER EXTENSION.
Upgrade fails with “old version has already been loaded”
When you use the ALTER EXTENSION timescaledb UPDATE command to upgrade, you
might see this error:
ERROR: extension "timescaledb" cannot be updated after the old version has already been loadedThis occurs if you don’t run ALTER EXTENSION timescaledb UPDATE as the
first command after starting a new session using psql, or if you use tab
completion when running the command. Tab completion triggers metadata queries in
the background which prevents the alter extension from being the first command.
To correct the problem, execute the ALTER EXTENSION command like this:
```shell
psql -X -c 'ALTER EXTENSION timescaledb UPDATE;'
```
Could not access file “timescaledb-<version>”: No such file or directory
If this error occurs immediately after updating your version of TimescaleDB and
the file mentioned is from the previous version, it is probably due to an
incomplete update process. Within the greater PostgreSQL server instance, each
database that has TimescaleDB installed needs to be updated with the SQL command
ALTER EXTENSION timescaledb UPDATE; while connected to that database.
Otherwise, the database looks for the previous version of the timescaledb files.
See the upgrade docs for more info.
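As a sketch of that procedure, you can list the connectable databases and then run the update command while connected to each one that has TimescaleDB installed:

```sql
-- List databases you can connect to; each one with TimescaleDB
-- installed needs the update run separately.
SELECT datname FROM pg_database WHERE datallowconn;

-- While connected to each such database, as the first command in the session:
ALTER EXTENSION timescaledb UPDATE;
```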
Error loading the TimescaleDB extension on Windows
If you see a message saying that PostgreSQL cannot load the TimescaleDB library timescaledb-<version>.dll:
```
ERROR: could not load library "C:/Program Files/PostgreSQL/16/lib/timescaledb-2.14.2.dll": The specified module could not be found.
```
Start a new psql session to your self-hosted instance and create the timescaledb extension as the first command:
```shell
psql -X -d "postgres://<user>:<password>@<source_host>:<source_port>/<db_name>" -c "CREATE EXTENSION IF NOT EXISTS timescaledb;"
```
Upgrade fails with no update path
When you use the ALTER EXTENSION timescaledb UPDATE command to
upgrade, it might fail with this error:
ERROR: extension "timescaledb" has no update path from version "X.Y" to version "X.Z"This occurs if the list of available extensions does not include the version you are trying to upgrade to, and it can occur if the package was not installed correctly in the first place. To correct the problem, install the upgrade package, restart PostgreSQL, verify the version, and then attempt the upgrade again.
Background jobs failing with “out of shared memory”
You might see this error when running compression, continuous aggregate refresh, or other background jobs that touch many chunks:
```
FATAL: out of shared memory
HINT: You might need to increase max_locks_per_transaction.
```
Despite the wording, this error is not caused by insufficient RAM. It means the PostgreSQL lock table is full. TimescaleDB acquires a lock on every chunk involved in a query or background job. When a hypertable has many chunks, these locks can exceed the default max_locks_per_transaction limit (usually 64).
Diagnose the issue
- Check the current setting:
```sql
SELECT name, setting
FROM pg_settings
WHERE name = 'max_locks_per_transaction';
```
- Count chunks per hypertable:
```sql
SELECT hypertable_name, num_chunks
FROM timescaledb_information.hypertables
ORDER BY num_chunks DESC;
```
Calculate the right value
For most workloads, use this formula:
```
max_locks_per_transaction = (2 × max_chunks_in_any_hypertable) / max_connections
```
The factor of 2 accounts for index locks. Round up and add headroom for future growth, because changing this parameter requires a database restart.
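The formula can also be evaluated directly in SQL by combining the chunk counts with the server's max_connections setting. This is a sketch of the calculation only; round up and add headroom before applying the value:

```sql
-- Suggested max_locks_per_transaction per the formula:
-- ceil(2 * largest chunk count / max_connections)
SELECT ceil( 2 * max(num_chunks)::numeric
             / current_setting('max_connections')::numeric ) AS suggested_value
FROM timescaledb_information.hypertables;
```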
Apply the fix
Tiger Cloud: Adjust max_locks_per_transaction from Tiger Console under Database configuration → Advanced parameters. Search for the parameter, edit the value, and click Apply changes and restart. See Advanced parameters for details.
Self-hosted: Set max_locks_per_transaction in the postgresql.conf configuration file, then restart PostgreSQL.
For more information, see Transaction lock settings and the PostgreSQL lock management documentation.