
TimechoDB Takes the Lead in the benchANT Time-series Ranking

Released: 2025-10-30

We have expanded the time-series part of the benchANT Database Performance Ranking with TimechoDB, an enterprise-grade time-series database system built on Apache IoTDB. The evaluated TimechoDB release is based on Apache IoTDB v1.3.4.

Apache IoTDB v1.2.1 has been listed in the time-series ranking since September 2023, and a version tuned by Timecho Limited led the xSmall and small categories until early 2025, when KaiwuDB v2.1.0 took first place in both. Now Timecho Limited adds TimechoDB to the competition in the small category and strikes: TimechoDB, based on Apache IoTDB v1.3.4, achieves the top throughput across all test dimensions and takes the lead in this category.

Benchmarking: Scenarios & Methodology

The time-series workload of our ranking is based on the DevOps workload of the Time Series Benchmark Suite (TSBS). The DevOps workload generates, inserts, and queries data from 9 hypothetical IT systems whose system metrics (e.g., CPU, memory, disk) are monitored in a DevOps scenario. The ranking distinguishes two scaling sizes for the benchmarks, xSmall and small; both contain a total of 15 data points. A full overview of both scaling sizes is available on the ranking page.
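
To give a feel for the data volume behind this workload, the following back-of-the-envelope sketch estimates how many measurement rows the DevOps scenario produces. The 10-second reporting interval and the per-host reading of the nine monitored systems are assumptions based on TSBS defaults, not figures from our runs; scale and duration are taken from the workload settings listed below.

    # Rough estimate of the rows generated by the TSBS DevOps workload.
    # Assumptions (not measured values): each simulated host reports the nine
    # monitored subsystems (CPU, memory, disk, ...) every 10 seconds, which is
    # the TSBS default log interval.
    hosts = 1000                 # TSBS scale flag (see workload settings below)
    systems_per_host = 9         # monitored subsystems per host
    duration_s = 3 * 24 * 3600   # 3 days of generated data
    log_interval_s = 10          # assumed TSBS default reporting interval

    rows = hosts * systems_per_host * (duration_s // log_interval_s)
    print(f"~{rows:,} measurement rows")  # ~233,280,000 under these assumptions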

The time-series ranking uses the following metrics to compare the competitors:

  • Write throughput: the operations per second at which a database can ingest new data during the load phase of the TSBS. TSBS uses batched ingestion with multiple, parallel clients; the batch size is left at its default for vanilla configurations but can be tuned by sponsors.
  • Read throughput: the operations per second achieved with a fixed number of concurrent clients; the actual number of clients depends on the scaling size (see below).
  • Read latency: the average latency per read operation.
  • Storage consumption: the maximum disk space used during a benchmark run.
  • Infrastructure costs: the cost of the cluster, which mostly depends on the scaling size.
  • Read throughput per dollar: the read throughput divided by the infrastructure costs (see the short sketch below).
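
As a minimal illustration of the last metric, the snippet below derives read throughput per dollar from a read-throughput figure and a cluster cost. Both numbers are placeholders, and the monthly billing basis is an assumption for this example, not a statement about how the ranking prices infrastructure.

    # Illustrative only: derive read throughput per dollar from two inputs.
    read_ops_per_s = 50_000             # placeholder read throughput (ops/s)
    cluster_cost_usd_per_month = 150.0  # placeholder monthly infrastructure cost

    read_throughput_per_dollar = read_ops_per_s / cluster_cost_usd_per_month
    print(f"{read_throughput_per_dollar:,.1f} read ops/s per USD of monthly cost")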

For the ranking, measurements are taken with the following workload and resource settings; a short sketch after the deployment settings illustrates how these parameters feed the TSBS generators.

Workload

  • Benchmark: benchANT TSBS
  • Workload scenario: DevOps
  • Scale flag: 1000
  • Query type: single-groupby-1-1-1
  • Data set size: 3 days
  • Batch insert size: 1,000
  • Runtime query phase: 100,000 queries
  • Benchmark API VM: 16 cores
  • Repetitions per benchmark: 1 execution
  • Concurrency level: depends on scaling size

Database Deployment

Scaling: xSmall

  • VM size: 2 vCPUs / 8 GiB RAM
  • Cluster size: 1
  • Replication Factor: 1
  • Workload threads: 50

Scaling: small

  • VM size: 4 vCPUs / 16 GiB RAM
  • Cluster size: 1
  • Replication Factor: 1
  • Workload threads: 100
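
The sketch below shows how the workload parameters above could be fed into the TSBS generators. It is not benchANT's actual orchestration: the Python wrapper, the timestamp window, and the "iotdb" format name for the benchANT TSBS fork are assumptions, and the subsequent load and query-run steps are omitted. The generator flags themselves (--use-case, --scale, --timestamp-start/-end, --log-interval, --queries, --query-type, --format) are standard TSBS options.

    # Hypothetical driver for the TSBS generators using the settings above.
    # The timestamp window and the "iotdb" format name are assumptions; adjust
    # them to the actual benchANT TSBS fork before use.
    import subprocess

    SCALE = 1000                      # scale flag
    START = "2025-01-01T00:00:00Z"    # hypothetical 3-day data window
    END = "2025-01-04T00:00:00Z"
    QUERIES = 100_000                 # runtime query phase
    FORMAT = "iotdb"                  # assumed target name in the benchANT fork

    with open("devops-data.txt", "wb") as data_out:
        subprocess.run([
            "tsbs_generate_data",
            "--use-case=devops", f"--scale={SCALE}",
            f"--timestamp-start={START}", f"--timestamp-end={END}",
            "--log-interval=10s", f"--format={FORMAT}",
        ], check=True, stdout=data_out)

    with open("devops-queries.txt", "wb") as query_out:
        subprocess.run([
            "tsbs_generate_queries",
            "--use-case=devops", f"--scale={SCALE}",
            f"--timestamp-start={START}", f"--timestamp-end={END}",
            f"--queries={QUERIES}", "--query-type=single-groupby-1-1-1",
            f"--format={FORMAT}",
        ], check=True, stdout=query_out)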

Key Insights: TimechoDB Takes the Lead for All Metrics

TimechoDB competes in the small scale category. The deployment of TimechoDB (based on Apache IoTDB v1.3.4) runs on a single AWS EC2 node. The database configuration was tuned by Timecho Limited, the database provider.

Figure: TimechoDB performance and the current leaderboard of the time-series ranking.
  • Write throughput: The small setup of TimechoDB reaches 8.8 million operations per second, around 2.1 million operations per second more than the previously top-ranked competitor and more than 2.4 times the write throughput achieved by Apache IoTDB v1.2.1.

  • Read throughput: TimechoDB achieves more than 86 thousand read operations per second, which is around 30 thousand reads per second more than the previous leader of the board, and more than 7.5 times what could be achieved with Apache IoTDB v1.2.1.

  • Read latency: Despite the high throughput, TimechoDB keeps the read latency very low at 1 ms, equal to what the previous leader achieved. For Apache IoTDB v1.2.1, the read latency was 3 ms.

  • Storage consumption: While running the benchmark, TimechoDB used only 2 GiB of disk storage, equal to the storage consumption measured earlier for Apache IoTDB v1.2.1. In contrast, the previous leader required more than 15 times as much disk capacity.

The raw results for this and all other configurations in the time-series ranking are available on GitHub.

Conclusions

With the performance numbers discussed above, TimechoDB takes the number one spot in the benchANT time-series ranking. The results show that it is a real alternative to many other database systems in the time-series domain. They also show that performance-demanding users of Apache IoTDB should consider upgrading to TimechoDB, which in its current version is based on Apache IoTDB v1.3.4.

About benchANT and the Database Ranking

benchANT is an independent benchmarking and consulting company focused on database and cloud performance. For vendors, we design and run unbiased benchmarks for database and cloud services, and we help enterprise customers make informed technology choices with application-specific performance evaluations.

Our benchmarking framework is automated, transparent, and reproducible, ensuring reliable insights that go beyond marketing claims.

With the openly accessible Database Ranking, we provide a standardized comparison platform for database performance results, which is continuously updated with new systems and workloads. We also offer the DBaaS Navigator, a free technical tool to explore and compare Database-as-a-Service offerings.

Disclaimer on the Database Ranking

The performance results presented in the Database Ranking are based on standardized, generic workloads (e.g., CRUD, time-series) executed under specific configurations and resource settings. These measurements are intended to provide high-level, comparable insights into database performance, cost, and scalability.

However, they should not be interpreted as definitive guidance for all use cases. Real-world applications often differ significantly in workload characteristics, data distribution, query patterns, and infrastructure requirements. As such, the ranking results should be seen as an orientation and starting point, rather than the sole foundation for technology decisions.

For organizations where performance is a critical factor, we strongly recommend the design and execution of application-specific benchmarks. benchANT provides tailored benchmarking services to ensure that database and cloud technology choices are validated against the actual needs and workloads of your application.