# Understand capabilities
Learn how TimescaleDB and Tiger Cloud capabilities work together to power time-series and analytics workloads
TimescaleDB and Tiger Cloud extend PostgreSQL with powerful capabilities designed specifically for time-series data, real-time analytics, and event-driven workloads. These capabilities work together to provide a complete solution for ingesting, storing, querying, and analyzing massive datasets efficiently.
## Capabilities overview

TimescaleDB and Tiger Cloud capabilities fall into the following categories:
### Data storage and organization

- Hypertables: automatically partition time-series data into chunks for efficient data management at scale.
- Hypercore: compress data into columnar storage delivering 90-95% storage reduction.
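As a minimal sketch of how these two fit together, assuming a hypothetical `conditions` table of sensor readings:

```sql
-- Create a regular PostgreSQL table, then convert it to a hypertable
-- partitioned on the time column (chunking is automatic).
CREATE TABLE conditions (
    time        TIMESTAMPTZ NOT NULL,
    device_id   TEXT NOT NULL,
    temperature DOUBLE PRECISION
);

SELECT create_hypertable('conditions', 'time');

-- Enable columnar compression, segmenting by device so
-- per-device queries remain fast on compressed chunks.
ALTER TABLE conditions SET (
    timescaledb.compress,
    timescaledb.compress_segmentby = 'device_id'
);
```

The `compress_segmentby` choice matters: pick the column your queries filter on most, so compressed data stays grouped the way you read it.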
### Data processing and aggregation

- Continuous aggregates: maintain pre-computed aggregations that update incrementally as new data arrives.
- Hyperfunctions: analyze data with specialized SQL functions including statistical aggregation and percentiles.
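For example, a continuous aggregate over the hypothetical `conditions` hypertable from above might maintain hourly per-device summaries that refresh incrementally as new rows arrive:

```sql
-- Continuous aggregate: an incrementally maintained materialized view
-- bucketed by hour; queries against it avoid scanning raw data.
CREATE MATERIALIZED VIEW conditions_hourly
WITH (timescaledb.continuous) AS
SELECT
    time_bucket('1 hour', time) AS bucket,
    device_id,
    avg(temperature) AS avg_temp,
    max(temperature) AS max_temp
FROM conditions
GROUP BY bucket, device_id;
```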
### Data lifecycle management

- Data retention: automatically drop old data based on time intervals, keeping storage costs under control.
- Data tiering (Tiger Cloud-exclusive capability): move older data to low-cost object storage while keeping it queryable with standard SQL.
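A retention policy is a one-line call; this sketch assumes the hypothetical `conditions` hypertable from earlier:

```sql
-- Automatically drop chunks whose data is older than 90 days.
-- Chunk drops are metadata operations, so they are fast and
-- do not rewrite remaining data.
SELECT add_retention_policy('conditions', INTERVAL '90 days');
```

Pair this with a continuous aggregate whose own retention window is longer, so summarized history survives after the raw chunks are dropped.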
### Schema optimization and automation

- Schema optimization: use PostgreSQL features like indexes, constraints, triggers, tablespaces, and foreign data wrappers.
- Jobs: automate recurring tasks like continuous aggregate refreshes, data retention, and custom maintenance.
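A custom job is a stored procedure plus a schedule. A hedged sketch, with `refresh_reports` as a hypothetical procedure name:

```sql
-- A user-defined action: the scheduler calls the procedure with
-- its job id and an optional JSONB config.
CREATE OR REPLACE PROCEDURE refresh_reports(job_id INT, config JSONB)
LANGUAGE plpgsql AS $$
BEGIN
    RAISE NOTICE 'Job % running with config %', job_id, config;
    -- custom maintenance work goes here
END
$$;

-- Run it every hour in the background job scheduler.
SELECT add_job('refresh_reports', '1 hour');
```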
## Typical workflow

Here’s how TimescaleDB capabilities work together in a typical time-series application:
- Data ingestion
Start by creating a hypertable for your time-series data. The hypertable automatically partitions data into time-based chunks, enabling efficient inserts and queries. For high-volume ingestion, optimize your schema with appropriate indexes and constraints. Use bulk insert methods like `COPY` or multi-row `INSERT` statements for best performance. For migrating existing data or importing from external sources, see Migrate.

- Data optimization
Hypercore is automatically enabled when you create a hypertable, providing columnar storage with advanced compression. This reduces storage by 90-95% while maintaining full query capabilities and delivering 100x to 1000x performance improvements for analytical queries.
- Real-time analytics
Create continuous aggregates to automatically maintain pre-computed summaries. Use hyperfunctions in your aggregates to calculate statistics, percentiles, time-weighted averages, and other specialized metrics. Query continuous aggregates instead of raw data for instant results on dashboards and reports. Real-time aggregates ensure you see the latest data without waiting for batch processing.
- Data lifecycle
Configure retention policies to automatically drop old data when it’s no longer needed. Retention works seamlessly with hypercore, removing entire chunks efficiently without impacting performance. Retention policies can preserve aggregated data in continuous aggregates even after dropping raw data, enabling long-term trend analysis without storing every data point. On Tiger Cloud, use tiered storage to move older data to low-cost object storage while keeping it queryable.
- Automation
Schedule jobs to automate hypercore compression, continuous aggregate refreshes, retention, and custom maintenance tasks. Jobs run reliably in the background and provide execution history for monitoring and troubleshooting.
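The ingestion step above can be sketched with the two bulk methods mentioned, again assuming the hypothetical `conditions` hypertable (the CSV path is illustrative only):

```sql
-- Multi-row INSERT: batch many rows per statement instead of
-- one round trip per row.
INSERT INTO conditions (time, device_id, temperature) VALUES
    ('2025-01-01 00:00:00+00', 'dev-1', 21.4),
    ('2025-01-01 00:00:00+00', 'dev-2', 19.8),
    ('2025-01-01 00:00:01+00', 'dev-1', 21.5);

-- COPY: stream rows from a file, typically the fastest path.
-- From psql, the client-side equivalent is:
--   \copy conditions FROM 'readings.csv' WITH (FORMAT csv)
COPY conditions FROM '/path/to/readings.csv' WITH (FORMAT csv);
```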
## Capabilities by use case

### IoT and sensor data

For IoT workloads with millions of devices generating continuous metrics:
- Hypertables: partition data by time and optionally by device ID for optimal performance.
- Hypercore: compress sensor readings with 95%+ storage reduction.
- Continuous aggregates: pre-compute device statistics and fleet-wide metrics.
- Hyperfunctions: downsample with LTTB for visualization, use time-weighted averages for irregular samples.
- Data retention: drop raw sensor data automatically after a retention period.
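For irregularly sampled sensors, a time-weighted average weights each reading by how long it was current rather than treating all samples equally. A sketch, assuming the hypothetical `conditions` hypertable and the `timescaledb_toolkit` extension:

```sql
-- Time-weighted average per device over the last day.
-- 'Linear' interpolates between consecutive readings.
SELECT
    device_id,
    average(time_weight('Linear', time, temperature)) AS tw_avg_temp
FROM conditions
WHERE time > now() - INTERVAL '1 day'
GROUP BY device_id;
```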
### Financial analytics

For financial data with high-frequency trading, market data, and portfolio analytics:
- Hypertables: store tick data, OHLCV bars, and trade executions.
- Hypercore: compress historical market data for cost-effective long-term backtesting.
- Continuous aggregates: maintain pre-computed OHLCV bars, technical indicators, and portfolio valuations.
- Hyperfunctions: calculate candlestick aggregates, percentiles, and statistical measures.
- Schema optimization: use indexes for symbol lookups, constraints for data integrity.
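As a sketch of the candlestick hyperfunction, assuming a hypothetical `ticks(time, symbol, price, volume)` hypertable and the `timescaledb_toolkit` extension:

```sql
-- One-minute OHLCV bars computed from raw ticks; the accessor
-- functions extract each component from the candlestick aggregate.
SELECT
    time_bucket('1 minute', time) AS bucket,
    symbol,
    open(candlestick_agg(time, price, volume))   AS open,
    high(candlestick_agg(time, price, volume))   AS high,
    low(candlestick_agg(time, price, volume))    AS low,
    close(candlestick_agg(time, price, volume))  AS close,
    volume(candlestick_agg(time, price, volume)) AS volume
FROM ticks
GROUP BY bucket, symbol;
```

Wrapping this query in a continuous aggregate keeps the bars pre-computed as new ticks arrive.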
### Observability and monitoring

For system metrics, logs, and distributed tracing:
- Hypertables: ingest metrics, logs, and traces with automatic partitioning.
- Hypercore: compress historical metrics and traces for cost-effective long-term retention.
- Continuous aggregates: maintain service health metrics, error rates, and latency percentiles.
- Hyperfunctions: calculate uptime/downtime via heartbeat aggregation, detect anomalies, and analyze distributions.
- Data retention: drop raw data after the debugging period while keeping aggregated metrics.
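Uptime via heartbeat aggregation can be sketched as follows, assuming a hypothetical `heartbeats(service, heartbeat_time)` table and the `timescaledb_toolkit` extension:

```sql
-- A service counts as "up" for 1 minute after each heartbeat;
-- uptime() sums the covered intervals within the window.
SELECT
    service,
    uptime(heartbeat_agg(
        heartbeat_time,
        now() - INTERVAL '1 day',  -- window start
        INTERVAL '1 day',          -- window length
        INTERVAL '1 minute'        -- liveness per heartbeat
    )) AS uptime_last_day
FROM heartbeats
GROUP BY service;
```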