
Automate Expired Data Deletion in TiDB with TTL

Say goodbye to manual cleanup! Learn how TiDB’s TTL feature automates expired data deletion and keeps your database lean and efficient.

Abishek S · September 26, 2025


In modern applications such as IoT platforms, massive volumes of data are generated and stored continuously. Over time, this can quickly consume available storage and degrade database performance if not managed effectively. Simply deleting large amounts of historical data manually or in bulk can cause spikes in resource usage and slow down critical operations.

TiDB’s TTL (Time to Live) feature helps by automatically removing expired data at regular intervals, keeping your database lean and efficient without manual cleanup.

In this guide, we'll explore how to use TiDB's TTL feature to manage data retention policies.

 

TiDB’s TTL

Time to Live (TTL) in TiDB is a feature that lets you manage the lifespan of data at the row level. When TTL is enabled on a table, TiDB automatically monitors the age of each row and deletes it once it expires. This helps reduce storage usage and can improve performance in certain workloads.

Common use cases for TTL include:

  • Automatically removing expired verification codes and temporary short URLs.
  • Cleaning up outdated or unnecessary historical order records.
  • Deleting intermediate or temporary data generated during calculations.
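As a quick sketch of the first use case, a verification-code table could look like the following (the table and column names here are hypothetical, not taken from the examples later in this post):

```sql
-- Hypothetical example: one-time verification codes that TiDB
-- deletes automatically about 10 minutes after they are created.
CREATE TABLE verification_codes (
    id BIGINT PRIMARY KEY AUTO_INCREMENT,
    code VARCHAR(8) NOT NULL,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
) TTL = `created_at` + INTERVAL 10 MINUTE;
```

Actual deletion still happens on the TTL job schedule, so expired codes may survive slightly past the 10-minute mark.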

 

TTL Syntax and Configuration

Enable TiDB’s TTL

Create a table with TTL that deletes rows older than two months.

CREATE TABLE tbl1 (id int PRIMARY KEY, created_at TIMESTAMP) TTL = `created_at` + INTERVAL 2 MONTH;

Modify the TTL for Tables

Modify the retention period of a TTL table.

ALTER TABLE tbl1 TTL = `created_at` + INTERVAL 1 MONTH;

Modify the TTL_ENABLE value of a TTL table.

ALTER TABLE tbl1 TTL_ENABLE = 'OFF';

Remove all TTL attributes from a TTL table.

ALTER TABLE tbl1 REMOVE TTL;

TTL with default values

Use a column with DEFAULT CURRENT_TIMESTAMP to record each row's creation time, and use that column as the TTL timestamp column.

CREATE TABLE tbl1 ( id int PRIMARY KEY, status varchar(255), created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP) TTL = `created_at` + INTERVAL 3 MONTH;

TTL Jobs

TTL jobs are background jobs that are scheduled to clean up expired data periodically. The TTL_JOB_INTERVAL table option controls how often these jobs run for a table.

ALTER TABLE orders TTL_JOB_INTERVAL = '24h';

The default value of this option is 1h.

To disable TTL job execution, set the table option TTL_ENABLE = 'OFF'. Additionally, you can disable TTL jobs for the entire cluster by setting the global variable tidb_ttl_job_enable to OFF.

SET GLOBAL tidb_ttl_job_enable = OFF;

You can also schedule TTL jobs to run only within a particular time window.

SET GLOBAL tidb_ttl_job_schedule_window_start_time = '01:00 +0000';
SET GLOBAL tidb_ttl_job_schedule_window_end_time = '05:00 +0000';
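To confirm how these settings are currently configured on a cluster, one option is to list the TTL-related system variables (the exact set of variables may vary by TiDB version):

```sql
-- Show the cluster-wide TTL settings, including the job switch
-- and the scheduling window configured above.
SHOW GLOBAL VARIABLES LIKE 'tidb_ttl%';
```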

 

Working samples

Create a table with a TTL period of 2 minutes.

CREATE TABLE tbl ( id int PRIMARY KEY, created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP) TTL = `created_at` + INTERVAL 2 MINUTE;

Insert the records into the table.

mysql> insert into tbl(id) values(1),(2),(3),(4);
Query OK, 4 rows affected (0.01 sec)
Records: 4 Duplicates: 0 Warnings: 0

Set the TTL job interval to 1 minute.

mysql> alter table tbl TTL_JOB_INTERVAL = '1m';
Query OK, 0 rows affected (0.10 sec)

Checking the table data

mysql> select * from tbl;
+----+---------------------+
| id | created_at          |
+----+---------------------+
| 1  | 2025-05-30 05:40:53 |
| 2  | 2025-05-30 05:40:53 |
| 3  | 2025-05-30 05:40:53 |
| 4  | 2025-05-30 05:40:53 |
+----+---------------------+
4 rows in set (0.00 sec)

Checking again after 2 minutes

mysql> select * from tbl;
Empty set (0.00 sec)

 

TTL for Partial Data

We can create a table whose TTL column is a generated column, so that different expiry rules apply based on a condition.

For instance: delete rows after 2 minutes if status = 1, and delete all other rows after 5 minutes.

mysql> CREATE TABLE message (
    -> id INT PRIMARY KEY,
    -> created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    -> status INT,
    -> expire_at TIMESTAMP GENERATED ALWAYS AS (
    ->     IF(status = 1,
    ->        created_at + INTERVAL 2 MINUTE,
    ->        created_at + INTERVAL 5 MINUTE)
    -> ) STORED
    -> ) TTL = `expire_at` + INTERVAL 0 MINUTE;
Query OK, 0 rows affected (1.30 sec)

Checking the table data

mysql> select * from message;
+----+---------------------+--------+---------------------+
| id | created_at          | status | expire_at           |
+----+---------------------+--------+---------------------+
|  1 | 2025-06-26 10:26:31 |      1 | 2025-06-26 10:28:31 |
|  2 | 2025-06-26 10:26:31 |      0 | 2025-06-26 10:31:31 |
|  3 | 2025-06-26 10:26:31 |      1 | 2025-06-26 10:28:31 |
|  4 | 2025-06-26 10:26:31 |      2 | 2025-06-26 10:31:31 |
+----+---------------------+--------+---------------------+

When checking again after 2 minutes, we can see that the rows with status = 1 have been deleted.

mysql> select * from message;
+----+---------------------+--------+---------------------+
| id | created_at          | status | expire_at           |
+----+---------------------+--------+---------------------+
|  2 | 2025-06-26 10:26:31 |      0 | 2025-06-26 10:31:31 |
|  4 | 2025-06-26 10:26:31 |      2 | 2025-06-26 10:31:31 |
+----+---------------------+--------+---------------------+
2 rows in set (0.01 sec)

After 3 more minutes, the remaining rows are deleted as well.

mysql> select * from message;
Empty set (0.01 sec)

This way, we can apply TTL with conditional retention rules.

Note: Use this approach in scenarios where, for example, you need to delete only PRIME user data while keeping NORMAL user data intact.
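A sketch of that PRIME/NORMAL scenario, assuming a hypothetical user_activity table with a user_type column:

```sql
-- Hypothetical example: expire PRIME user rows after 30 days,
-- while giving NORMAL user rows a far-future expiry so they are
-- effectively kept intact.
CREATE TABLE user_activity (
    id BIGINT PRIMARY KEY,
    user_type VARCHAR(10),
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    expire_at TIMESTAMP GENERATED ALWAYS AS (
        IF(user_type = 'PRIME',
           created_at + INTERVAL 30 DAY,
           created_at + INTERVAL 100 YEAR)
    ) STORED
) TTL = `expire_at` + INTERVAL 0 DAY;
```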

 

Checking the TTL Jobs

We can check executed TTL jobs through the mysql.tidb_ttl_table_status and mysql.tidb_ttl_job_history tables, while the mysql.tidb_ttl_task table records the TTL subtasks that are currently being executed.

The mysql.tidb_ttl_table_status table records, for each TTL table, the last finished TTL job and any currently running job.

mysql> TABLE mysql.tidb_ttl_table_status LIMIT 1\G
*************************** 1. row ***************************
table_id: 104
parent_table_id: 104
table_statistics: NULL
last_job_id: 792ca729a9854b75949d5842e0eac0ac
last_job_start_time: 2025-05-30 05:01:08
last_job_finish_time: 2025-05-30 05:01:16
last_job_ttl_expire: 2025-05-30 04:59:08
last_job_summary: {"total_rows":4,"success_rows":4,"error_rows":0,"total_scan_task":1,"scheduled_scan_task":1,"finished_scan_task":1}
current_job_id: NULL
current_job_owner_id: NULL
current_job_owner_addr: NULL
current_job_owner_hb_time: NULL
current_job_start_time: NULL
current_job_ttl_expire: NULL
current_job_state: NULL
current_job_status: NULL
current_job_status_update_time: NULL

The mysql.tidb_ttl_job_history table contains information about the TTL jobs that have been executed. Job history records are kept for 90 days.

mysql> TABLE mysql.tidb_ttl_job_history LIMIT 1\G
*************************** 1. row ***************************
job_id: b671a3166924406999ba9995febc85d5
table_id: 104
parent_table_id: 104
table_schema: test
table_name: tbl
partition_name: NULL
create_time: 2025-05-30 04:51:07
finish_time: 2025-05-30 04:51:16
ttl_expire: 2025-05-30 04:49:07
summary_text: {"total_rows":4,"success_rows":4,"error_rows":0,"total_scan_task":1,"scheduled_scan_task":1,"finished_scan_task":1}
expired_rows: 4
deleted_rows: 4
error_delete_rows: 0
status: finished
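For routine checks, the history table can also be queried directly; for example, a summary of the jobs finished in the last 7 days (column names as shown in the record above):

```sql
-- Summarize recent TTL job runs per table.
SELECT table_schema, table_name, expired_rows, deleted_rows, status, finish_time
FROM mysql.tidb_ttl_job_history
WHERE finish_time > NOW() - INTERVAL 7 DAY
ORDER BY finish_time DESC;
```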

 

Monitor TTL Behaviors

We can monitor TTL activity through monitoring charts (for example, in TiDB's Grafana dashboards), such as:

  • Inserted Rows per second
  • Processed Rows per second
  • Insert / Delete per day

Limitations

  • The TTL attribute cannot be set on local temporary tables and global temporary tables.
  • A table with the TTL attribute does not support being referenced by other tables as the primary table in a foreign key constraint.
  • It is not guaranteed that all expired data is deleted immediately. The time when expired data is deleted depends on the scheduling interval and scheduling window of the background cleanup job.
  • For the tables that use clustered indexes, if the primary key is neither an integer nor a binary string type, the TTL job cannot be split into multiple tasks. If the table contains a large amount of data, the execution of the TTL job might become slow.
  • TTL is not available for TiDB Serverless.
  • If the first column of a table’s primary key uses the utf8 or utf8mb4 character set, TTL job subtasks are split based only on the range of visible ASCII characters. When many primary key values share the same ASCII prefix, this can lead to uneven task distribution.
  • For tables that do not support splitting TTL jobs into multiple subtasks, the job is executed sequentially on a single TiDB node. In such cases, if the table holds a large volume of data, the TTL job may run slowly.

Author Bio

Abishek S

Abishek S is a MongoDB and TiDB Certified DBA at Mafiree with strong expertise in distributed databases, TiDB architecture, and cross-database consistency tools. He writes technical content focused on practical database solutions, data consistency verification, replication strategies, and performance optimization for modern data platforms. His work helps engineers and DBAs improve reliability and efficiency in real-world database operations.


