---
title: Troubleshoot jobs | Tiger Data Docs
description: Solutions to common errors when working with scheduled jobs
---

## Scheduled jobs stop running

Scheduled jobs stop running when the TimescaleDB background workers that execute them are no longer running. On self-hosted TimescaleDB, you can fix this by restarting the background workers:

For TimescaleDB 2.12 and later:

```
SELECT _timescaledb_functions.start_background_workers();
```

For TimescaleDB versions earlier than 2.12:

```
SELECT _timescaledb_internal.start_background_workers();
```

On Tiger Cloud and Managed Service for TimescaleDB, restart background workers by doing one of the following:

- Run `SELECT timescaledb_pre_restore()`, followed by `SELECT timescaledb_post_restore()`.
- Power the service off and on again. This might cause a few minutes of downtime while the service restores from backup and replays the write-ahead log.
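
Once the workers are back, you can confirm that jobs are running again by checking the built-in job statistics view:

```
SELECT job_id, last_run_status, next_start
FROM timescaledb_information.job_stats;
```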

## Failed to start a background worker

You might see this error message in the logs if background workers aren’t properly configured:

```
"<TYPE_OF_BACKGROUND_JOB>": failed to start a background worker
```

To fix this error, make sure that `max_worker_processes`, `max_parallel_workers`, and `timescaledb.max_background_workers` are properly set. `timescaledb.max_background_workers` should equal the number of databases plus the number of concurrent background workers. `max_worker_processes` should equal the sum of `timescaledb.max_background_workers` and `max_parallel_workers`.
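
For example, a minimal sketch of this arithmetic in `postgresql.conf`, assuming one database, seven concurrent background workers, and eight parallel workers (illustrative numbers, not recommendations):

```
timescaledb.max_background_workers = 8   # 1 database + 7 concurrent background workers
max_parallel_workers = 8
max_worker_processes = 16                # 8 + 8, the sum of the two settings above
```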

For more information, see the [configuration docs](/deploy/self-hosted/configuration/about-configuration/index.md).

## Background jobs failing with “out of shared memory”

You might see this error when running compression, continuous aggregate refresh, or other background jobs that touch many chunks:

```
FATAL:  out of shared memory
HINT:  You might need to increase max_locks_per_transaction.
```

Despite the wording, this error is **not caused by insufficient RAM**. It means the PostgreSQL lock table is full. TimescaleDB acquires a lock on every chunk involved in a query or background job. When a hypertable has many chunks, these locks can exceed the default `max_locks_per_transaction` limit (usually 64).

### Diagnose the issue

1. **Check the current setting:**

   ```
   SELECT name, setting
   FROM pg_settings
   WHERE name = 'max_locks_per_transaction';
   ```

2. **Count chunks per hypertable:**

   ```
   SELECT hypertable_name, num_chunks
   FROM timescaledb_information.hypertables
   ORDER BY num_chunks DESC;
   ```

### Calculate the right value

For most workloads, use this formula:

```
max_locks_per_transaction = (2 × max_chunks_in_any_hypertable) / max_connections
```

The factor of 2 accounts for index locks. Round up and add headroom for future growth, because changing this parameter requires a database restart.
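
As a rough starting point, you can evaluate the formula directly from the catalog, combining the diagnostic queries above with the current `max_connections` setting:

```
SELECT ceil(
    2.0 * max(num_chunks)
    / current_setting('max_connections')::numeric
) AS suggested_max_locks_per_transaction
FROM timescaledb_information.hypertables;
```

For example, a hypertable with 10,000 chunks on a server with `max_connections = 100` yields (2 × 10,000) / 100 = 200; rounding up to 256 leaves room for growth.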

### Apply the fix

**Tiger Cloud:** Adjust `max_locks_per_transaction` from Tiger Console under **Database configuration → Advanced parameters**. Search for the parameter, edit the value, and click **Apply changes and restart**. See [Advanced parameters](/deploy/tiger-cloud/tiger-cloud-aws/configuration/advanced-parameters/index.md) for details.

**Self-hosted:** Set `max_locks_per_transaction` in the `postgresql.conf` configuration file, then restart PostgreSQL.
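
For example, a minimal sketch using `ALTER SYSTEM` instead of editing the file by hand (256 is an illustrative value, not a recommendation):

```
ALTER SYSTEM SET max_locks_per_transaction = 256;
-- The new value only takes effect after a restart, for example:
-- pg_ctl restart -D <data_directory>
```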

For more information, see [Transaction lock settings](/reference/timescaledb/configuration/tiger-postgres/index.md#transaction-lock-settings) and the [PostgreSQL lock management documentation](https://www.postgresql.org/docs/current/runtime-config-locks.html).
