
The Feature You’ve Been Looking For – Speed Up Your Queries with TiDB Cached Tables!

Boost database performance with TiDB cached tables—reduce latency and accelerate read-heavy queries.

Abishek S March 24, 2025

In the world of high-performance databases, optimizing read operations is essential for applications that rely on fast data retrieval. TiDB, a distributed SQL database, offers a powerful feature called cached tables, designed to speed up read-heavy workloads.

Cached tables are especially beneficial for tables that are frequently read but rarely updated. By keeping the data in memory, TiDB can answer queries against these tables faster, reducing latency and improving overall system performance.

In this blog post, we'll look at how TiDB cached tables work, the ideal use cases for them, and the steps for enabling the feature to boost your application's performance.

What is a cached table?

A cached table in TiDB loads the entire table's data into the memory of the TiDB server. This is particularly beneficial for tables that become hotspots due to frequent reads: TiDB can access the table data directly from memory, bypassing TiKV, which significantly improves read performance.

Usage Scenarios

Small size: Data volume is small, typically less than 64 MiB.

Rare updates: The table is mostly read-only, with updates happening fewer than 10 times per minute, e.g., configuration tables.

Frequent reads: The table is accessed often and you want better read performance, especially for small tables that experience hotspots when read directly from TiKV.
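
A quick way to check whether a table fits under the size limit is to estimate it from information_schema (a rough estimate only; `your_database` is a placeholder, in the same spirit as the `your_table_name` placeholder used later in this post):

```sql
-- Rough size estimate; a cached table must stay under 64 MiB.
SELECT TABLE_NAME,
       (DATA_LENGTH + INDEX_LENGTH) / 1024 / 1024 AS size_mib
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = 'your_database'
  AND TABLE_NAME = 'your_table_name';
```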


Caching Mechanism

Data Loading: When a table is set as a cached table, its data is loaded into the memory of the TiDB server. This is particularly beneficial for small, frequently accessed tables, typically under 64 MiB.

Direct Memory Access: Once the data is cached, TiDB reads the table directly from memory without involving TiKV. This avoids the overhead of network communication and disk I/O, leading to faster data retrieval.

Write Operations: Cached tables do support writes, but a write may incur latency on the order of seconds. The latency comes from the need to keep data consistent across multiple TiDB instances, which is managed through a lease mechanism.

Data Reloading: If the cached data is modified, the cache becomes invalid and the data must be reloaded into memory before the table benefits from caching again.

Lease Mechanism

The lease mechanism sets a lease time for each cache. The lease time is controlled by the system variable tidb_table_cache_lease, which determines how long a cached table can hold data before the data must be refreshed. The default lease time is 3 seconds, and it can be adjusted to match the application's requirements.

When a write operation occurs on a cached table, the lease mechanism ensures that the data is not modified until the current lease expires. This prevents one TiDB instance from reading stale data while another instance has already updated the underlying table.

When the lease expires, the cached data expires with it. During this window, reads fall back to TiKV and writes can be performed.
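
For example, assuming a session with sufficient privileges, you can inspect and adjust the lease duration through this system variable. A longer lease favors read performance at the cost of slower writes:

```sql
-- Check the current lease duration (default: 3 seconds)
SHOW VARIABLES LIKE 'tidb_table_cache_lease';

-- Increase the lease to 10 seconds for a more read-heavy workload
SET GLOBAL tidb_table_cache_lease = 10;
```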

 

Working with cached tables

To convert a normal table into a cached table, use the following SQL statement:

ALTER TABLE your_table_name CACHE;

This command loads the table data into memory, enabling the cached table feature.

To revert a cached table back to a normal table, use:

ALTER TABLE your_table_name NOCACHE;

After executing NOCACHE, reads go through TiKV again; run the CACHE statement once more if you want to re-enable caching later.

Example

Consider a financial application dealing with constantly fluctuating exchange rates. By caching the exchange_rates table, which is updated only periodically, the application can retrieve the latest rates with minimal latency, ensuring accurate and timely financial calculations.
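
For reference, the exchange_rates table used below can be created and populated along these lines (the schema matches the SHOW CREATE TABLE output later in this post; the sample rows are purely illustrative):

```sql
CREATE TABLE exchange_rates (
  currency_pair varchar(10) NOT NULL,
  exchange_rate decimal(10,4) DEFAULT NULL,
  last_updated timestamp DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  PRIMARY KEY (currency_pair)
);

-- Illustrative sample data
INSERT INTO exchange_rates (currency_pair, exchange_rate) VALUES
  ('USD/EUR', 0.9217),
  ('USD/INR', 83.1250),
  ('EUR/GBP', 0.8544);
```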

 

Set the table as a cached table

Use the ALTER TABLE statement to set the table as a cached table:

mysql> Alter table exchange_rates cache;
Query OK, 0 rows affected (2.49 sec)

 

Verify the cached table

Using the SHOW CREATE TABLE statement

Use SHOW CREATE TABLE to check whether a table is cached. If the table is cached, the output includes the CACHED ON attribute.

mysql> show create table exchange_rates\G
*************************** 1. row ***************************
       Table: exchange_rates
Create Table: CREATE TABLE `exchange_rates` (
  `currency_pair` varchar(10) NOT NULL,
  `exchange_rate` decimal(10,4) DEFAULT NULL,
  `last_updated` timestamp DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  PRIMARY KEY (`currency_pair`) /*T![clustered_index] CLUSTERED */
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_bin /* CACHED ON */
1 row in set (0.00 sec)

 

Using the TRACE statement

Use the TRACE statement to check whether the data has been loaded into memory. While the cache is not yet loaded, the result contains the regionRequest.SendReqCtx operation, which indicates that TiDB is reading the data from TiKV.

mysql> Trace select * from exchange_rates;
+-------------------------------------------+-----------------+------------+
| operation                                 | startTS         | duration   |
+-------------------------------------------+-----------------+------------+
| trace                                     | 11:45:47.971581 | 6.187729ms |
|   ├─session.ExecuteStmt                   | 11:45:47.971596 | 1.603214ms |
|   │ ├─executor.Compile                    | 11:45:47.971638 | 393.902µs  |
|   │ └─session.runStmt                     | 11:45:47.972071 | 1.077769ms |
|   │   └─UnionScanExec.Open                | 11:45:47.972839 | 220.466µs  |
|   │     ├─TableReaderExecutor.Open        | 11:45:47.972844 | 156.434µs  |
|   │     │ └─distsql.Select                | 11:45:47.972890 | 75.069µs   |
|   │     │   └─regionRequest.SendReqCtx    | 11:45:47.973895 | 3.077972ms |
|   │     │       └─tikv.RPC                | 11:45:47.973966 | 542.167µs  |
|   │     │         ├─tikv.Wait             | 11:45:47.973966 | 100.206µs  |
|   │     │         │ └─tikv.GetSnapshot    | 11:45:47.973966 | 43.276µs   |
|   │     │         └─tikv.Process          | 11:45:47.974066 | 162.639µs  |
|   │     └─buildMemTableReader             | 11:45:47.973022 | 9.334µs    |
|   └─*executor.UnionScanExec.Next          | 11:45:47.973227 | 3.947526ms |
|     └─*executor.TableReaderExecutor.Next  | 11:45:47.973238 | 3.884727ms |
+-------------------------------------------+-----------------+------------+
16 rows in set (0.01 sec)

 

After executing TRACE again, the result no longer contains the regionRequest.SendReqCtx operation, which indicates that TiDB now reads the data from memory instead of from TiKV.

mysql> Trace select * from exchange_rates;
+------------------------------------------+-----------------+------------+
| operation                                | startTS         | duration   |
+------------------------------------------+-----------------+------------+
| trace                                    | 11:44:49.620587 | 4.028872ms |
|   ├─session.ExecuteStmt                  | 11:44:49.620599 | 3.586808ms |
|   │ ├─executor.Compile                   | 11:44:49.620640 | 400.671µs  |
|   │ └─session.runStmt                    | 11:44:49.621184 | 2.959394ms |
|   │   └─UnionScanExec.Open               | 11:44:49.623882 | 172.453µs  |
|   │     ├─TableReaderExecutor.Open       | 11:44:49.623892 | 43.267µs   |
|   │     └─buildMemTableReader            | 11:44:49.623998 | 16.886µs   |
|   └─*executor.UnionScanExec.Next         | 11:44:49.624213 | 19.765µs   |
+------------------------------------------+-----------------+------------+
8 rows in set (0.00 sec)

 

Using the EXPLAIN statement

The execution plan of a cached table includes a UnionScan operator:

mysql> Explain select * from exchange_rates;
+-------------------------+---------+-----------+----------------------+--------------------------------+
| id                      | estRows | task      | access object        | operator info                  |
+-------------------------+---------+-----------+----------------------+--------------------------------+
| UnionScan_5             | 6.00    | root      |                      |                                |
| └─TableReader_7         | 6.00    | root      |                      | data:TableFullScan_6           |
|   └─TableFullScan_6     | 6.00    | cop[tikv] | table:exchange_rates | keep order:false, stats:pseudo |
+-------------------------+---------+-----------+----------------------+--------------------------------+
3 rows in set (0.00 sec)

 

Limitations

  • Currently, the size limit of a cached table is 64 MiB in TiDB. If the table data exceeds 64 MiB, executing ALTER TABLE t CACHE will fail.
  • ALTER TABLE t ADD PARTITION operation on cached tables is not supported.
  • ALTER TABLE t CACHE operation on temporary tables is not supported.
  • ALTER TABLE t CACHE operation on views is not supported.
  • Direct DDL operations on a cached table are not supported.
  • Stale Read is not supported.
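
Because direct DDL on a cached table is not supported, the usual workaround is to disable caching first, run the DDL, and then re-enable caching. A sketch (the added column is a hypothetical example):

```sql
-- Remove the cache attribute before any DDL
ALTER TABLE exchange_rates NOCACHE;

-- Hypothetical schema change for illustration
ALTER TABLE exchange_rates ADD COLUMN source varchar(20) DEFAULT NULL;

-- Re-enable caching afterwards
ALTER TABLE exchange_rates CACHE;
```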


Conclusion

TiDB's cached tables improve database performance by accelerating read-heavy queries on frequently accessed but rarely updated tables. By caching data in memory, they reduce latency and improve application responsiveness, making them ideal for high-read, low-update workloads such as configuration management and real-time lookups. Understanding the caching mechanism, the lease-based consistency model, and the limitations above helps developers get the most out of TiDB for high-performance applications.

 

Author Bio

Abishek S

Abishek S is a MongoDB and TiDB Certified DBA at Mafiree with strong expertise in distributed databases, TiDB architecture, and cross-database consistency tools. He writes technical content focused on practical database solutions, data consistency verification, replication strategies, and performance optimization for modern data platforms. His work helps engineers and DBAs improve reliability and efficiency in real-world database operations.
