
MongoDB Query Optimization: How Mafiree Reduced E-Commerce Latency by 73%

Slow MongoDB queries silently erode user experience and revenue. This case study walks through how Mafiree’s MongoDB consulting team audited and optimized the query layer of a high-traffic e-commerce platform in India, reducing average API response times from 340ms to 92ms—a 73% improvement. It covers the diagnostic process using explain plans and profiler data, the indexing strategy overhaul, aggregation pipeline refactoring, and the monitoring framework that keeps performance on track.

Abishek S · April 16, 2026


Results at a glance:
  • 73% latency reduction
  • $1.6K monthly AWS savings
  • 23 → 9 indexes optimized

A MongoDB database that is fast at 10,000 documents can become painfully slow at 10 million. The queries that worked during development start timing out in production. Collection scans replace index scans, aggregation pipelines choke on unfiltered stages, and end users see spinning loaders instead of search results. The root cause is almost never MongoDB itself — it is how the queries and indexes are structured against the data.

This is the story of how Mafiree's MongoDB consulting team reduced average API latency by 73% for a high-traffic e-commerce platform without adding a single server. If your MongoDB queries are slow and you need a systematic approach to MongoDB query optimization, this guide shares the exact diagnostic steps, fixes, and monitoring practices we used.

Why MongoDB Queries Slow Down

MongoDB query optimization challenges typically emerge when datasets outgrow the assumptions made during initial development. The symptoms are predictable, but the root causes are often buried deep in the interaction between query shapes, index coverage, and data modeling decisions.

Collection Scans (COLLSCAN)

The most common and most damaging issue. Without a matching index, MongoDB scans every document in the collection. On a collection with 50 million products, a single COLLSCAN can take seconds instead of milliseconds.
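To build intuition for the gap, here is a small, purely illustrative Node.js sketch (not MongoDB code) comparing a linear scan over in-memory records with a keyed lookup, a rough stand-in for COLLSCAN versus IXSCAN:

```javascript
// Rough stand-in for COLLSCAN vs IXSCAN: linear search over every record
// versus a single keyed probe. MongoDB's B-tree indexes behave like the
// Map here, only on disk and at far larger scale.
const docs = Array.from({ length: 1_000_000 }, (_, i) => ({ _id: i, sku: `SKU-${i}` }));
const bySku = new Map(docs.map(d => [d.sku, d])); // stand-in for an index

console.time("collscan");
const viaScan = docs.find(d => d.sku === "SKU-999999"); // examines up to 1M docs
console.timeEnd("collscan");

console.time("ixscan");
const viaIndex = bySku.get("SKU-999999");               // single probe
console.timeEnd("ixscan");

console.log(viaScan === viaIndex); // true: same document, very different cost
```

Even in memory, the scan is orders of magnitude slower than the probe; on disk, with 50 million documents, the difference is what separates milliseconds from seconds.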

Inefficient Index Usage

Having indexes is not enough — the wrong indexes can be worse than none. A compound index on {category: 1, price: 1} cannot serve a query that filters on brand and price, because brand is not a leading field of that index. MongoDB's query planner selects the best available plan, but it can only choose among the indexes that actually exist.
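The mismatch comes down to MongoDB's index-prefix rule: a compound index can serve an equality filter only if the filtered fields cover a leading prefix of the index's key order. Here is a deliberately simplified sketch of that rule in plain JavaScript (not a MongoDB API; it ignores sort and range subtleties):

```javascript
// Simplified index-prefix check: walk the index's fields in order and
// count how many are used by the filter before the prefix breaks.
function canUsePrefix(indexFields, filterFields) {
  const filter = new Set(filterFields);
  let matched = 0;
  for (const field of indexFields) {
    if (filter.has(field)) matched++;
    else break; // prefix broken: later index fields cannot be used
  }
  return matched === filter.size;
}

// Index { category: 1, price: 1 } serves a { category, price } filter...
console.log(canUsePrefix(["category", "price"], ["category", "price"])); // true
// ...but not a { brand, price } filter: "brand" is not in the prefix
console.log(canUsePrefix(["category", "price"], ["brand", "price"]));    // false
```

This is why an ever-growing pile of single-purpose indexes still leaves new query shapes unserved.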

Unbounded Aggregation Pipelines

Aggregation stages like $lookup and $unwind that run without an early $match stage force MongoDB to process the entire collection before filtering. This turns what should be a 5ms operation into a 5-second one.

Over-Fetching Documents

Queries that return entire documents when only two or three fields are needed waste network bandwidth, memory, and CPU cycles on deserialization.

For a deeper dive into query plan analysis, refer to MongoDB's official query optimization documentation.

The Client Challenge: An E-Commerce Platform Under Pressure

Our client operated a popular e-commerce marketplace built on MongoDB 6.0, running a 3-node replica set on AWS. The platform served 2 million daily active users, with a product catalog of 12 million documents and an orders collection exceeding 80 million documents.

Performance had degraded gradually over 18 months as the catalog grew. The symptoms were clear: product search API responses averaged 340ms (target: under 100ms), the checkout flow experienced intermittent timeouts during flash sales, and the analytics dashboard for sellers took over 8 seconds to load. The engineering team had added indexes reactively over time, resulting in 23 indexes on the products collection alone — many of which were redundant or unused.

Key Metrics Before & After Optimization

Metric | Before | After
Avg. Product Search Latency | 340ms | 92ms
Checkout API (p99) | 1,200ms | 280ms
Seller Dashboard Load | 8.2s | 1.8s
Collection Scans / Hour | 4,200 | 12
Active Indexes (Products) | 23 | 9
Monthly AWS Spend | $4,800 | $3,200

Experiencing similar latency issues? Request a free MongoDB performance audit.

Our 3-Step Diagnostic Process

Effective MongoDB query optimization starts with diagnosis, not guesswork. We follow a systematic three-step process that identifies the highest-impact fixes first.

Step 1: Profiler Analysis — Finding the Slow Queries

MongoDB's built-in database profiler captures queries that exceed a configurable threshold. We enabled profiling at level 1 (slow operations only) with a 100ms threshold to capture the worst offenders without generating excessive overhead.

profiler-setup.js

```javascript
// Enable profiling for operations slower than 100ms
db.setProfilingLevel(1, { slowms: 100 })

// Query the profiler for the top 10 slowest operations
db.system.profile.find({
  op: { $in: ["query", "command"] },
  millis: { $gt: 100 }
}).sort({ millis: -1 }).limit(10)
```

Within 24 hours of profiling, we identified 14 distinct query shapes responsible for 87% of all slow operations. The top three offenders were the product search query, the order history aggregation, and the inventory availability check.
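Grouping profiler entries into "query shapes" means treating filters that differ only in their literal values as the same query. A minimal sketch of that normalization (illustrative JavaScript, not the profiler's internal representation):

```javascript
// Collapse literal values to placeholders so that filters differing only
// in their constants normalize to the same shape and can be grouped.
function queryShape(filter) {
  if (Array.isArray(filter)) return filter.map(queryShape);
  if (filter !== null && typeof filter === "object") {
    const shape = {};
    for (const [key, value] of Object.entries(filter)) {
      shape[key] = queryShape(value); // operator keys ($gte, $lte, ...) survive
    }
    return shape;
  }
  return "?"; // any literal becomes a placeholder
}

const a = queryShape({ category: "electronics", price: { $gte: 100, $lte: 500 } });
const b = queryShape({ category: "books", price: { $gte: 5, $lte: 20 } });
console.log(JSON.stringify(a) === JSON.stringify(b)); // true: one query shape
```

Counting profiler entries per normalized shape is what lets a handful of shapes (14 here) explain the bulk of slow operations.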

Step 2: Explain Plan Analysis — Understanding Why They Are Slow

For each slow query shape, we ran explain("executionStats") to understand exactly how MongoDB was executing the query. The explain output reveals whether the query used an index (IXSCAN) or performed a full collection scan (COLLSCAN), how many documents were examined versus returned, and the execution time breakdown.

explain-analysis.js

```javascript
// The product search query that was averaging 340ms
db.products.find({
  category: "electronics",
  brand: "Samsung",
  price: { $gte: 100, $lte: 500 },
  inStock: true
}).sort({ rating: -1 }).limit(20).explain("executionStats")

// Key findings from the explain output:
//   stage: COLLSCAN          (no matching index!)
//   totalDocsExamined: 3,847,291
//   totalKeysExamined: 0
//   executionTimeMillis: 347
```

The product search query was scanning nearly 4 million documents to return just 20 results. Despite having 23 indexes on the collection, none matched this specific query shape. This is a textbook example of why reactive indexing fails — you end up with many indexes that serve old query patterns while new, critical queries go unindexed.

Step 3: Index Usage Audit — Eliminating the Waste

We used the $indexStats aggregation to audit every index on the high-traffic collections. This revealed that 14 of the 23 indexes on the products collection had zero or near-zero usage over the past 30 days.

index-audit.js

```javascript
// Audit index usage across the products collection.
// $indexStats emits one document per index with an accesses.ops counter.
db.products.aggregate([
  { $indexStats: {} },
  { $project: { name: 1, "accesses.ops": 1, "accesses.since": 1 } },
  { $sort: { "accesses.ops": 1 } }
])
```

Expert Tip: Unused indexes are not harmless. Each index on a collection adds overhead to every insert, update, and delete operation. On a write-heavy collection, dropping 14 unused indexes can reduce write latency by 15–25%. Always audit before you optimize.

The Optimization Playbook: What We Fixed and How

Fix 1: Compound Indexes Using the ESR Rule

MongoDB's ESR (Equality, Sort, Range) rule is the foundation of effective compound index design. Fields used in equality filters come first, followed by sort fields, then range filters. For the product search query, we designed:

esr-index.js

```javascript
// Optimized compound index following the ESR rule
db.products.createIndex({
  category: 1,  // Equality
  brand: 1,     // Equality
  inStock: 1,   // Equality
  rating: -1,   // Sort
  price: 1      // Range
})
```
Product search latency dropped from 340ms to 92ms. Documents examined: 3.8M → 847.

For a complete reference on compound index strategy, see MongoDB's indexing best practices guide.

Fix 2: Aggregation Pipeline Refactoring

The seller analytics dashboard used a complex aggregation pipeline that processed the entire orders collection before filtering. The fix was straightforward but impactful: move $match and $sort stages to the beginning of the pipeline so MongoDB can leverage indexes before processing downstream stages.

pipeline-optimization.js

```javascript
// BEFORE: Unoptimized pipeline (8.2s)
db.orders.aggregate([
  { $lookup: { from: "products", ... } },
  { $unwind: "$items" },
  { $match: { sellerId: ObjectId("..."), status: "completed" } },
  { $group: { _id: "$items.category", revenue: { $sum: "$total" } } }
])

// AFTER: Optimized pipeline (1.8s)
db.orders.aggregate([
  { $match: { sellerId: ObjectId("..."), status: "completed" } },
  { $sort: { orderDate: -1 } },
  { $lookup: { from: "products", ... } },
  { $unwind: "$items" },
  { $group: { _id: "$items.category", revenue: { $sum: "$total" } } }
])
```
Dashboard load time: 8.2s → 1.8s. Working set reduced from 80M to ~45K documents before $lookup runs.

Fix 3: Projections and Covered Queries

Several API endpoints were fetching entire product documents (averaging 4KB each) when only the name, price, and thumbnail URL were needed for listing pages. By adding projections and designing indexes that covered the projected fields, we eliminated document fetches entirely.

covered-query.js

```javascript
// Covered query: all returned fields are in the index
db.products.find(
  { category: "electronics", inStock: true },
  { name: 1, price: 1, thumbnailUrl: 1, _id: 0 }
)

// Supporting covering index
db.products.createIndex({ category: 1, inStock: 1, name: 1, price: 1, thumbnailUrl: 1 })
```
Listing page API latency reduced by 60%. Network bandwidth dropped by 45%.

Fix 4: Dropping Unused Indexes

After verifying with $indexStats and correlating with the profiler data, we safely dropped 14 unused indexes across the three primary collections. This freed approximately 2.8GB of RAM and noticeably improved write performance across the platform.

Write latency improved by 18%. Monthly AWS spend decreased by $1,600.

Best Practices for Production

Based on this engagement and hundreds of similar audits, here are the best practices we recommend for maintaining optimal MongoDB query performance at scale.

1. Follow the ESR Rule for Every Compound Index

Equality fields first, then Sort fields, then Range fields. This ordering ensures MongoDB can use the index most efficiently, creating tight bounds on the scan and avoiding in-memory sorts. Review your existing compound indexes against this rule — many teams discover their field order is suboptimal.
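As a sanity check on field order, the rule can be expressed mechanically. A hypothetical helper sketch (plain JavaScript; the equality/sort/range classification must come from your own query analysis):

```javascript
// Order compound-index fields per the ESR rule: Equality, then Sort,
// then Range. Relies on Array.prototype.sort being stable (ES2019+),
// so ties keep their original relative order.
const ESR_ORDER = { equality: 0, sort: 1, range: 2 };

function esrIndex(fields) {
  return fields
    .slice()
    .sort((a, b) => ESR_ORDER[a.use] - ESR_ORDER[b.use])
    .map(f => [f.name, f.direction ?? 1]);
}

console.log(esrIndex([
  { name: "price",    use: "range" },
  { name: "rating",   use: "sort", direction: -1 },
  { name: "category", use: "equality" },
  { name: "brand",    use: "equality" },
]));
// → [["category",1],["brand",1],["rating",-1],["price",1]]
```

The output matches the index built in Fix 1 for the product search query.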

2. Profile Continuously, Not Just During Incidents

Enable the slow query profiler permanently at a reasonable threshold (100–200ms). Review profiler output weekly. Query patterns shift as features evolve, and a query that was fast last month may be slow today due to data growth or new access patterns. Mafiree's managed MongoDB monitoring includes automated profiler analysis that flags new slow query shapes within minutes of their first appearance.

3. Audit Index Usage Quarterly

Run $indexStats on every collection with more than 5 indexes. Drop indexes with zero operations over 30 days. Consolidate overlapping indexes into broader compound indexes that serve multiple query shapes. Fewer, smarter indexes always outperform many redundant ones.

4. Put $match First in Every Aggregation Pipeline

This single rule delivers the largest performance improvement in aggregation workloads. A $match stage at the beginning of the pipeline leverages indexes and dramatically reduces the document count flowing into expensive stages like $lookup, $unwind, and $group.
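The effect is easy to reproduce outside MongoDB. In this toy Node.js sketch, the same filter applied before versus after an expensive per-document step (a stand-in for $lookup) produces identical results while doing the heavy work on ~100 documents instead of 100,000:

```javascript
// Same result either way; the early filter runs the expensive step on
// ~100 documents instead of all 100,000 -- the array analogue of putting
// $match before $lookup/$unwind/$group.
const orders = Array.from({ length: 100_000 }, (_, i) => ({
  sellerId: i % 1000,
  total: (i % 50) + 1,
}));
const enrich = o => ({ ...o, enriched: true }); // stand-in for $lookup

const lateFiltered  = orders.map(enrich).filter(o => o.sellerId === 42);
const earlyFiltered = orders.filter(o => o.sellerId === 42).map(enrich);

console.log(lateFiltered.length, earlyFiltered.length); // 100 100
```

In the aggregation framework the payoff is even larger, because an early $match can use an index, while a late one cannot.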

5. Use Projections in Every Query

Never fetch entire documents unless you genuinely need every field. Projections reduce network transfer, memory consumption, and deserialization time. When combined with a covering index, projections allow MongoDB to serve the query entirely from the index without touching the documents at all.

Conclusion: Optimization is a Discipline, Not a One-Time Fix

MongoDB query optimization is not something you do once and forget. Data grows, query patterns shift, and application features evolve. What made our client's optimization successful was not just the initial fixes but the ongoing monitoring framework we established: continuous profiling, automated index audits, and real-time alerting on latency regressions.

  • 73% reduction in API latency achieved without adding hardware
  • 33% reduction in infrastructure costs from smarter indexing
  • Platform now performs better at 12M products than it did at 8M
  • Write performance improved as a side-effect of removing unused indexes

If your MongoDB queries are slowing down your application, Mafiree is here to help.

Need help transitioning from one MongoDB cluster to another? Learn how in our guide on seamless data transition with MongoDB Cluster to Cluster Sync.

Ready to Optimize Your MongoDB Performance?

Request a free performance audit and discover how our experts can transform your database from a bottleneck into a competitive advantage.

Request a Free Performance Audit →

FAQ

Q: How do I find slow queries in MongoDB?
Enable MongoDB’s database profiler with db.setProfilingLevel(1, { slowms: 100 }) to capture queries exceeding 100ms. Then query the system.profile collection to see the slowest operations, their execution plans, and the collections they target. Combine this with the explain("executionStats") method on individual queries to see exactly how many documents were scanned and whether an index was used.

Q: What is the ESR rule for compound indexes?
The ESR (Equality, Sort, Range) rule defines the optimal field ordering in compound indexes. Place fields used in equality comparisons ($eq) first, followed by fields used in sort operations, and finally fields used in range queries ($gt, $lt, $gte, $lte). This ordering allows MongoDB to narrow the scan efficiently and avoid costly in-memory sorts.

Q: How many indexes should a collection have?
There is no universal number, but a well-designed collection typically needs 5–10 indexes to cover all major query shapes. More than 15 indexes on a single collection is a warning sign of redundancy or poor index design. Each index adds write overhead, consumes RAM, and increases backup sizes. Audit with $indexStats regularly and drop what you do not use.

Q: Can query optimization reduce infrastructure costs?
Absolutely. In this case study, dropping 14 unused indexes freed 2.8GB of RAM, and the overall optimization reduced our client’s monthly AWS spend by $1,600. Efficient queries consume less CPU, less memory, and less I/O—which directly translates to smaller instance sizes or deferred scaling.

Q: How often should I review query performance?
At minimum, review profiler data and index usage monthly. For high-traffic applications, weekly reviews catch regressions early. The best practice is to integrate query performance monitoring into your CI/CD pipeline so that new features are reviewed for database impact before reaching production. Mafiree’s managed MongoDB services include continuous monitoring with 5-minute metric collection and automated alerting on performance anomalies.

Author Bio

Abishek S

Abishek S is a MongoDB and TiDB Certified DBA at Mafiree with strong expertise in distributed databases, TiDB architecture, and cross-database consistency tools. He writes technical content focused on practical database solutions, data consistency verification, replication strategies, and performance optimization for modern data platforms. His work helps engineers and DBAs improve reliability and efficiency in real-world database operations.


