---
title: Troubleshoot Tiger Cloud | Tiger Data Docs
description: Diagnose and resolve common issues with Tiger Cloud services
---

## Cannot create another database

```
ERROR:  tsdb_admin: database <DB_NAME> is not an allowed database name
HINT:  Contact your administrator to configure the "tsdb_admin.allowed_databases"
```

Each Tiger Cloud service hosts a single PostgreSQL database, called `tsdb`. You see this error when you try to create an additional database in a service. If you need another database, [create a new service](/get-started/quickstart/create-service/index.md).

## User permissions do not allow chunks to be converted to columnstore or rowstore

```
ERROR:  must be owner of hypertable "HYPERTABLE_NAME"
```

You might get this error if you attempt to convert a chunk to the columnstore, or back to the rowstore, using a non-privileged user account. To convert a chunk, your user account must have permissions that allow it to perform `CREATE INDEX` on the chunk. You can check the access privileges on the hypertable with this command at the `psql` prompt:

```
\dp <HYPERTABLE_NAME>
```
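Because the error message is about ownership, it can also help to check who currently owns the hypertable. A minimal sketch, assuming a hypothetical hypertable named `conditions`:

```
SELECT tableowner
FROM pg_tables
WHERE tablename = 'conditions';
```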

To resolve this problem, grant your user account the appropriate privileges with this command:

```
GRANT ALL PRIVILEGES
    ON TABLE <TABLE_NAME>
    TO <ROLE_NAME>;
```

For more information about the `GRANT` command, see the [PostgreSQL documentation](https://www.postgresql.org/docs/current/sql-grant.html).
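For example, with a hypothetical hypertable `conditions` and role `analyst`:

```
GRANT ALL PRIVILEGES ON TABLE conditions TO analyst;

-- If ownership itself is required, transfer it instead:
-- ALTER TABLE conditions OWNER TO analyst;
```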

## Background jobs failing with “out of shared memory”

You might see this error when running compression, continuous aggregate refresh, or other background jobs that touch many chunks:

```
FATAL:  out of shared memory
HINT:  You might need to increase max_locks_per_transaction.
```

Despite the wording, this error is **not caused by insufficient RAM**. It means the PostgreSQL lock table is full. TimescaleDB acquires a lock on every chunk involved in a query or background job. When a hypertable has many chunks, these locks can exceed the default `max_locks_per_transaction` limit (usually 64).

### Diagnose the issue

1. **Check the current setting:**

   ```
   SELECT name, setting
   FROM pg_settings
   WHERE name = 'max_locks_per_transaction';
   ```

2. **Count chunks per hypertable:**

   ```
   SELECT hypertable_name, num_chunks
   FROM timescaledb_information.hypertables
   ORDER BY num_chunks DESC;
   ```

### Calculate the right value

For most workloads, use this formula:

```
max_locks_per_transaction = (2 × max_chunks_in_any_hypertable) / max_connections
```

The factor of 2 accounts for the index locks taken alongside each chunk lock. The division by `max_connections` works because the lock table is shared across all sessions: PostgreSQL sizes it at roughly `max_locks_per_transaction × max_connections` entries, so a single transaction can take more than its nominal share when other sessions hold few locks. Round up and add headroom for future growth, because changing this parameter requires a database restart.
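As a sketch, you can compute a starting value directly from the views queried above; this assumes other sessions hold relatively few locks:

```
SELECT ceil(
         2 * max(num_chunks)::numeric
         / current_setting('max_connections')::numeric
       ) AS suggested_max_locks_per_transaction
FROM timescaledb_information.hypertables;
```

Treat the result as a floor rather than a target, and round up generously before applying it.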

### Apply the fix


**Tiger Cloud:** Adjust `max_locks_per_transaction` from Tiger Console under **Database configuration → Advanced parameters**. Search for the parameter, edit the value, and click **Apply changes and restart**. See [Advanced parameters](/deploy/tiger-cloud/tiger-cloud-aws/configuration/advanced-parameters/index.md) for details.

**Self-hosted:** Set `max_locks_per_transaction` in the `postgresql.conf` configuration file, then restart PostgreSQL.
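On a self-hosted instance, you can also set the parameter with `ALTER SYSTEM` instead of editing `postgresql.conf` by hand. The value `128` here is only an illustration; size it using the formula above:

```
-- Requires superuser. The new value only takes effect after a restart;
-- reloading the configuration is not enough for this parameter.
ALTER SYSTEM SET max_locks_per_transaction = 128;
```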

For more information, see [Transaction lock settings](/reference/timescaledb/configuration/tiger-postgres/index.md#transaction-lock-settings) and the [PostgreSQL lock management documentation](https://www.postgresql.org/docs/current/runtime-config-locks.html).
