
The benchANT Database Ranking

This page contains our database performance ranking. It was built to support our users in comparing databases. We use a set of established and widely used database benchmarks, such as YCSB and TSBS, to evaluate various SQL, NoSQL, and NewSQL (cloud) database management systems and Database-as-a-Service (DBaaS) offerings in different setups.

The results demonstrate the performance of, and the performance differences between, the databases available on the market for the specific scenarios we evaluated. Thus, while they indicate tendencies in database performance, they should not be taken as valid for every scenario. Instead, each usage scenario needs to be evaluated specifically. Read our disclaimer on that matter.

Currently, the ranking supports three different types of workloads representing three different usage scenarios. The concrete Workload Specification is available below. All results are available as open data in our Benchmarking Data Repository on GitHub.

In case you want to discuss results and their implications, please visit our Discussion Group on LinkedIn or use our contact form. Finally, a detailed technical analysis including a price/performance comparison of DBaaS can also be found in our DBaaS Navigator.

The list of database management systems and DBaaS providers is constantly expanding. Please contact us to request additional systems.

Workload 1: "General Purpose" (balanced, 50% READ / 50% WRITE)
Workload 2: "OLTP: Mix" (transactional, 100 tables)

NOTE: The results of this ranking should not be generalized. They are only meaningful in the specified configurations and for this individual workload. Please read the DISCLAIMER.

Please do not make any decisions for your IT application based on this ranking. Please perform your own actual benchmark measurements that fit your requirements.

Benchmark measurements were performed using benchANT's automated cloud database benchmarking platform.

Database Ranking - Structure & KPIs

The ranking table is structured in the following columns:

  • Rank: Dynamic position according to filtering and sorting.
  • Database: Database technology, provider, and version, including the configuration specification, with further information in the tooltip.
    • DBMS technology (first line): the underlying DBMS technology.
    • DBMS provider (optional): for DBaaS, the provider of the DBMS product.
    • Version name and version number: product version (license) and published technical release.
    • License: Open Source, Community (free), Commercial, or Database-as-a-Service.
    • DBMS settings: Vanilla (the initial vendor configuration unchanged) or Tuned (the initial configuration changed, see tooltip).
    • Cluster size: number of DBMS nodes.
  • Cloud: Cloud provider of the IaaS resources, with detailed information in the tooltip.
    • VM Size: Size of the virtual machine used, including number of instances (see tooltip).
  • Throughput: Average number of operations per second measured for the generated workload.
  • Read Latency: Measured latency of read operations in milliseconds at the 95th percentile (95% of operations were as fast as or faster than the specified value).
  • Write Latency: Measured latency of write operations in milliseconds at the 95th percentile (95% of operations were as fast as or faster than the specified value).
  • Monthly Costs: [logged-in users only] Monthly on-demand cost of the required IaaS resources or DBaaS offering. This includes VM prices, storage costs, and the cost of the DBMS technology (if any, for "Commercial" licenses) or the list price of the DBaaS offering.
  • Throughput per Cost: [logged-in users only] Economic efficiency and decision criterion: number of operations per second per monthly cost (see above).
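
To make the two result KPIs concrete, here is a minimal Python sketch of how a 95th-percentile latency and the throughput-per-cost ratio can be computed. The function names and example numbers are hypothetical, not benchANT's platform code:

```python
# Minimal sketch of the result KPIs; names and numbers are hypothetical,
# this is not benchANT's platform code.
from statistics import quantiles

def p95_latency_ms(latencies_ms: list[float]) -> float:
    """95th percentile: 95% of operations were as fast as or faster than this."""
    # quantiles(..., n=100) returns 99 cut points; index 94 is the 95th percentile
    return quantiles(latencies_ms, n=100)[94]

def throughput_per_cost(ops_per_second: float, monthly_cost: float) -> float:
    """Economic efficiency KPI: operations per second per unit of monthly cost."""
    return ops_per_second / monthly_cost

# Hypothetical example row: 12,000 ops/s at a monthly cost of 600
print(throughput_per_cost(12_000, 600.0))  # -> 20.0 ops/s per cost unit
```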

Using the available filter options, the results can be narrowed down to the desired data sets.

The columns of the table can be sorted by clicking on the header (result columns only).

Database Ranking - Workload

The workload has a decisive influence on the performance of a database, as do the database's configuration settings. Changing a single workload parameter can have a significant impact on performance and on the database ranking; see the Disclaimer. You can use the specified command lines to independently re-measure and verify the database ranking results.

For the measurement series of each workload in the database ranking, different dimensions are scaled:

  • Workload intensity: The number of parallel threads is varied.
  • Cluster size: The number of database nodes is varied for horizontal scaling, where technically possible.
  • Instance size: The instance size of the virtual machines (VMs) is varied for vertical scaling.

The exact specifications can be found in the following workload descriptions.

Workload pattern "CRUD: General Purpose"

The “CRUD: General-Purpose” workload pattern is characterized by its balanced proportion of 50% simple CRUD READ operations and 50% WRITE operations.

CRUD workloads differ significantly in their complexity and access pattern from OLAP and OLTP workloads, but also from the sometimes even simpler time-series workloads.

Many database management systems position themselves as "general purpose" solutions for universal use. In addition, there are further specializations, such as time-series databases or systems optimized for transactional or analytical processing and access patterns.

A "CRUD: General Purpose" load pattern can be found in a wide variety of IT applications, such as CRM systems, ERP systems, and mobile apps. Importantly, no transactional operations are performed, only simple READs, e.g., via the primary key or based on a condition. No JOIN, ORDER BY, or GROUP BY commands are executed in this workload.

Measurements were taken with the following workload and resource settings:

Workload: General

  • Benchmark: YCSB v0.17.0
  • Initial data size: 2.5 GB
  • Record size: 500 bytes
  • Access distribution: Zipfian
  • Benchmark API VM: 16 cores
  • Runtime per benchmark: 30 minutes
  • Repetitions per benchmark: 1 execution
  • Parallel threads: 50 - 600 (depending on scaling size)
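
As a rough sketch of what such a YCSB run could look like for this spec, the Python snippet below assembles a plausible invocation. The database binding (mongodb), the field split (5 fields x 100 bytes = 500 bytes), and the mapping of WRITE to YCSB update operations (as in the stock workloada template) are assumptions, not benchANT's exact command line:

```python
# Hedged sketch of a YCSB run matching the spec above; the binding,
# field split, and WRITE-to-update mapping are illustrative assumptions.
import subprocess

props = {
    "recordcount": 5_000_000,        # ~2.5 GB initial data / 500 B per record
    "operationcount": 54_000_000,    # operation cap (see runtime note below)
    "maxexecutiontime": 1800,        # 30-minute cap, in seconds
    "readproportion": 0.5,           # 50% READ
    "updateproportion": 0.5,         # 50% WRITE (assumed to map to updates)
    "requestdistribution": "zipfian",
    "fieldcount": 5,                 # assumed split: 5 fields x 100 B = 500 B
    "fieldlength": 100,
}

cmd = ["bin/ycsb", "run", "mongodb",       # binding is a placeholder
       "-P", "workloads/workloada",
       "-threads", "100"]                  # 50-600 depending on scaling size
for key, value in props.items():
    cmd += ["-p", f"{key}={value}"]

subprocess.run(cmd, check=True)  # assumes the 'load' phase ran beforehand
```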

Scaling: xSmall

  • VM size: 2 vCPUs x 8 GiB RAM
  • Cluster size: 1
  • Replication Factor: 1
  • Workload threads: 50

Scaling: Small

  • VM size: 4 vCPUs x 16 GiB RAM
  • Cluster size: 1
  • Replication Factor: 1
  • Workload threads: 100

Scaling: Medium

  • VM size: 4 vCPUs x 16 GiB RAM
  • Cluster size: 3
  • Replication Factor: 3
  • Workload threads: 100

Scaling: Large

  • VM size: 8 vCPUs x 32 GiB RAM
  • Cluster size: 3
  • Replication Factor: 3
  • Workload threads: 200

Scaling: xLarge

  • VM size: 8 vCPUs x 32 GiB RAM
  • Cluster size: 9
  • Replication Factor: 3
  • Workload threads: 600

Each benchmark runs for a maximum of 30 minutes or 54,000,000 operations, whichever is reached first.

Workload pattern "OLTP: Mix"

The workload pattern "OLTP: Mix" characterizes a common transactional workload consisting of non-trivial WRITE, READ, UPDATE, and DELETE operations grouped into transactions. The database operations also include more complex ordering and grouping functions. This workload is significantly more costly than the CRUD workload described above.

OLTP workloads reflect the real-time transaction processing of business processes, such as those found in eCommerce stores, ERP systems, and logistics software. The complexity of the OLTP workload is defined by the transactions and the complexity of the individual queries they contain. In particular, JOINs and grouping functions are complex operations at the database management system level. This specific workload contains no JOIN operations.

Unlike OLAP workloads, for example, this OLTP workload performs no batch processing.

The database management systems typically used for OLTP workloads are the classical relational SQL databases. Recently, many NewSQL databases have also positioned themselves as alternatives in the OLTP area. However, it is also possible to use NoSQL databases for OLTP workloads.

Measurements were taken with the following workload and resource settings:

Workload: General

  • Benchmark: sysbench 1.0.20
  • Initial data size: 25 GB
  • Tables: 100
  • Table entries: 1,000,000
  • Benchmark API VM: 16 cores
  • Runtime per benchmark: 30 minutes
  • Repetitions per benchmark: 1 execution
  • Parallel threads: 50 - 100 (depending on scaling size)
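
As a comparable sketch for this OLTP spec, the snippet below assembles a plausible sysbench 1.0.20 invocation. The choice of the bundled oltp_read_write script (which matches the "no JOINs, but ordering and grouping" description above) and the connection parameters are illustrative assumptions, not benchANT's exact command line:

```python
# Hedged sketch of a sysbench 1.0.20 OLTP run matching the spec above;
# the oltp_read_write script and connection flags are illustrative assumptions.
import subprocess

common = [
    "sysbench", "oltp_read_write",
    "--tables=100",                   # 100 tables
    "--table-size=1000000",           # 1,000,000 rows per table (~25 GB total)
    "--threads=50",                   # 50-100 depending on scaling size
    "--mysql-host=db.example.org",    # placeholder endpoint
    "--mysql-user=bench",
    "--mysql-password=secret",        # placeholder credentials
]

subprocess.run(common + ["prepare"], check=True)             # load initial data
subprocess.run(common + ["--time=1800", "run"], check=True)  # 30-minute run
subprocess.run(common + ["cleanup"], check=True)             # drop test tables
```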

Scaling: xSmall

  • VM size: 2 vCPUs x 8 GiB RAM
  • Cluster size: 1
  • Replication Factor: 1
  • Workload threads: 50

Scaling: Small

  • VM size: 4 vCPUs x 16 GiB RAM
  • Cluster size: 1
  • Replication Factor: 1
  • Workload threads: 100

Each benchmark runs for a maximum of 30 minutes or 54,000,000 operations, whichever is reached first.

Workload pattern "Time-Series: DevOps"

The workload pattern "Time-Series: DevOps" is based on the TSBS benchmarking suite and the "DevOps" workload scenario defined therein.

The "DevOps" workload generates, inserts, and measures data from 9 "systems" that could be monitored in a real-world DevOps scenario (e.g., CPU, memory, disk, etc.).

In addition to metric readings, 'tags' (including the location of the host, its operating system, etc.) are generated for each host with readings in the dataset. Each unique set of tags identifies one host in the dataset, and the number of different hosts generated is defined by the scale flag.

Measurements were taken with the following workload and resource settings:

Workload: General

  • Benchmark: TSBS benchANT fork (https://github.com/benchANT/tsbs)
  • Workload scenario: DevOps
  • Scale flag: 1000
  • Query type: single-groupby-1-1-1
  • Data set size: 3 days
  • Batch insert size: 1,000
  • Runtime query phase: 100,000 queries
  • Benchmark API VM: 16 cores
  • Repetitions per benchmark: 1 execution
  • Parallel threads: 50 - 100 (depending on scaling size)
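
TSBS splits a run into data generation, data loading, query generation, and query execution. The sketch below shows plausible upstream-TSBS invocations for the parameters above; the target database (InfluxDB), the concrete time window, and the worker counts are illustrative assumptions, and the flags of the benchANT fork may differ:

```python
# Hedged sketch of a TSBS "DevOps" run matching the spec above, using
# upstream TSBS tool names; target DB, time window, and worker counts
# are illustrative assumptions (the benchANT fork may differ).
import subprocess

def sh(cmd: str) -> None:
    subprocess.run(cmd, shell=True, check=True)

# 3 days of data; the concrete start date is an assumption
window = ('--timestamp-start="2016-01-01T00:00:00Z" '
          '--timestamp-end="2016-01-04T00:00:00Z"')

# 1) Generate 3 days of metrics for 1000 hosts (scale flag)
sh(f'tsbs_generate_data --use-case=devops --scale=1000 {window} '
   f'--format=influx > /tmp/data.txt')
# 2) Load with batches of 1,000 inserts
sh('tsbs_load_influx --file=/tmp/data.txt --batch-size=1000 --workers=50')
# 3) Generate 100,000 queries of type single-groupby-1-1-1
sh(f'tsbs_generate_queries --use-case=devops --scale=1000 {window} '
   f'--queries=100000 --query-type=single-groupby-1-1-1 '
   f'--format=influx > /tmp/queries.txt')
# 4) Execute the query phase
sh('tsbs_run_queries_influx --file=/tmp/queries.txt --workers=50')
```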

Scaling: xSmall

  • VM size: 2 vCPUs x 8 GiB RAM
  • Cluster size: 1
  • Replication factor: 1
  • Workload threads: 50

Scaling: Small

  • VM size: 4 vCPUs x 16 GiB RAM
  • Cluster size: 1
  • Replication factor: 1
  • Workload threads: 100

Database Ranking - Disclaimer

The database ranking provides independent performance data on popular database management systems and Database-as-a-Service (DBaaS) offerings on public cloud resources. Measurements were performed using the benchANT cloud database benchmarking platform. The measurements are independent and, to the best of our knowledge, follow a scientifically grounded benchmarking process.

The measured performance data for the different database offerings are only meaningful for:

  1. the specific selected database configuration in the specified version,
  2. the specified cloud resources (VM + storage) and instances,
  3. the exact workload described above, generated as load on the database.

Since the workload does not reflect a real-world application, the database ranking data alone is not suitable for informed decision-making; application-specific benchmark measurements are required.

  • Do not make decisions based on this database ranking!
  • Do not generalize/extrapolate data for an assumption/decision!
  • Measure yourself for your individual use case!
  • Get database performance experts to help you!

benchANT does not warrant the accuracy or completeness of the performance ranking. This also applies to any claims made based on this data.

The database ranking was carried out completely independently by benchANT, without any influence from the providers. However, the given links "To the provider" are affiliate links for which benchANT receives a commission. By clicking on this link you support this Open-Data database ranking project!

This work is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International license (CC BY-NC-SA 4.0). You are thus free to copy and redistribute ("share") or remix, transform, and build upon ("adapt") the material, subject to the following conditions:

  • Attribution: credit the source ("name + link").
  • ShareAlike: license modified data under the same license ("link").
  • NonCommercial: non-commercial use only.

For commercial use of the data, please contact us!

Thank you very much!

Database Ranking - Motivation

"Data are the new Oil". Data and data processing is one of the central IT topics of the 2010s/2020s. Applications in the areas of IoT, Industry 4.0, machine learning, AI, eCommerce, social media, etc. generate and process huge amounts of data.

For this reason, there are now over 600 different database management systems with a wide variety of data structures and operating modes. Each database management system has its own specialization and area of suitability.

It is not about which database is the best or the most popular. It is about which database can provide the best performance in which scenario.

In our opinion, there is no best SQL database, or best NoSQL database. At most, there is a best-fit database for the specific use case at the chosen scale.

This database ranking is intended to show this differentiation and to raise awareness of this topic. Decisions should not be made solely on the basis of popularity, features, or (especially not) data structures, but above all by including reliable data on performance and scalability.

Database performance problems, inefficient solutions, and oversized computing power ("Kill it with iron!") must become a thing of the past. In the future, IT applications must be properly scaled and equipped with the best-fitting technology in order to be efficient and competitive.

Dr. Daniel Seybold, the CTO of benchANT, developed a multi-database, multi-cloud, and multi-workload benchmarking tool during his PhD. This enables benchANT to automate the performance measurements for this database ranking, making them fast, efficient, and reliable.

Typically, this tool is used to individually measure cloud database setups for customer-specific applications. This is the ideal approach to make informed decisions for cloud resources and database technologies.

This database ranking serves as a first orientation!

"Measure everything, assume nothing!"