Ditch SQL ‘LIKE’: Building 50ms Search-as-you-type with Meilisearch

Database tutorial - IT technology blog

The 2 AM Pager: When ‘LIKE %query%’ Kills Your Production

My phone started screaming at 2 AM. Our e-commerce client’s conversion rates were tanking, and the logs pointed to one culprit: the search bar. Users were typing product names, but the database couldn’t keep up. Average response time? 3.5 seconds. On the modern web, three seconds is a lifetime. Users expect results the moment their fingers hit the keys—a standard known as ‘Search-as-you-type.’

The backend revealed the problem. A previous developer had implemented search using standard SQL LIKE operators across three tables. This worked fine with 1,000 products. However, the catalog had grown to 500,000 items with multiple variants. The database was performing full table scans for every single keystroke. Adding ILIKE for case-insensitivity only pushed CPU usage higher. We hadn’t just hit a scalability wall; we had slammed into it at full speed.

Why Relational Databases Fail at Search

Raw row counts aren’t the only villain here. Relational databases like PostgreSQL or MySQL prioritize ACID compliance and complex relationships. They aren’t built for high-speed text ranking. When you force a modern search experience into SQL, you hit three major bottlenecks:

  • Zero Typo Tolerance: If a user types “ipone” instead of “iphone”, LIKE returns nothing. You lose a sale over a single missed letter.
  • Poor Ranking Relevance: SQL doesn’t natively understand which result is “better.” It only knows if a string exists. It can’t easily prioritize a title match over a description match.
  • Faceted Search Overhead: Calculating counts for categories, brands, and price ranges (facets) requires complex GROUP BY queries. These slow down sharply as the dataset and the number of filter combinations grow.
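To see why a single missed letter defeats LIKE while a typo-tolerant engine survives it, here is a minimal sketch of Levenshtein edit distance, the kind of metric engines like Meilisearch use internally (the helper name is my own, not part of any SDK):

```javascript
// Minimal Levenshtein edit distance: the number of single-character
// insertions, deletions, or substitutions needed to turn `a` into `b`.
function levenshtein(a, b) {
  const dp = Array.from({ length: a.length + 1 }, (_, i) => [i]);
  for (let j = 1; j <= b.length; j++) dp[0][j] = j;
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      dp[i][j] = Math.min(
        dp[i - 1][j] + 1,                                   // deletion
        dp[i][j - 1] + 1,                                   // insertion
        dp[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1)  // substitution
      );
    }
  }
  return dp[a.length][b.length];
}

// SQL LIKE '%ipone%' matches nothing, but the edit distance is just 1:
console.log(levenshtein('ipone', 'iphone')); // 1
```

A substring match is binary: hit or miss. An edit-distance match lets the engine say "this is one typo away from a real product" and still return the result.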

The Contenders: Elasticsearch vs. Algolia vs. Meilisearch

I evaluated three options to resolve the incident:

  1. Elasticsearch: The heavy hitter. It is incredibly powerful but a resource hog. It requires significant RAM—often needing a 4GB heap just to breathe—and complex JVM tuning. For this project, it was overkill.
  2. Algolia: A premium SaaS product. It is blazing fast and requires zero maintenance. However, costs scale aggressively with search volume. The client also needed to keep data on-premise for strict privacy compliance.
  3. Meilisearch: An open-source, Rust-based engine designed specifically for end-user search. It is lightweight, typo-tolerant by default, and handles faceted search natively.

I chose Meilisearch. It offered the best performance-to-config ratio while keeping our infrastructure lean.

Implementation: Moving from SQL to Meilisearch

1. Firing up the Engine

I deployed Meilisearch via Docker. This is the most reliable way to get a production instance running without worrying about local library conflicts.

docker run -it --rm \
  -p 7700:7700 \
  -v $(pwd)/meili_data:/meili_data \
  getmeili/meilisearch:latest \
  meilisearch --master-key="MY_SECRET_MASTER_KEY"

2. Data Transformation and Import

Meilisearch relies on structured JSON. My client’s data was trapped in a massive legacy CSV export. To avoid writing a one-off Python script, I used toolcraft.app/en/tools/data/csv-to-json to convert the file. Since it runs in the browser, the data never left my machine. This saved about 20 minutes of regex cleanup.
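If you'd rather keep the conversion in code, the same step can be sketched in a few lines of Node. This assumes a simple comma-only CSV with a header row and no quoted fields; for anything messier, use a real CSV parser:

```javascript
// Naive CSV-to-JSON conversion: the header row becomes object keys.
// Assumes no quoted fields or embedded commas.
function csvToDocuments(csv) {
  const [header, ...rows] = csv.trim().split('\n');
  const keys = header.split(',').map((k) => k.trim());
  return rows.map((row) => {
    const values = row.split(',');
    return Object.fromEntries(keys.map((k, i) => [k, values[i]?.trim()]));
  });
}

const csv = 'id,title,brand\n1,Galaxy S24,Samsung\n2,iPhone 15,Apple';
console.log(csvToDocuments(csv));
// [ { id: '1', title: 'Galaxy S24', brand: 'Samsung' }, ... ]
```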

With products.json ready, I pushed the data to the index using cURL:

curl -X POST 'http://localhost:7700/indexes/products/documents' \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer MY_SECRET_MASTER_KEY' \
  --data-binary @products.json
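For a 500,000-item catalog, pushing one giant payload can hit request-size or timeout limits, so it is safer to import in batches. A rough sketch of a chunking helper (the 10,000 batch size is my own choice, not a Meilisearch requirement):

```javascript
// Split a large document array into fixed-size batches so each
// Meilisearch import request stays a manageable size.
function chunkDocuments(documents, batchSize = 10000) {
  const batches = [];
  for (let i = 0; i < documents.length; i += batchSize) {
    batches.push(documents.slice(i, i + batchSize));
  }
  return batches;
}

// With meilisearch-js, each batch would then be pushed with something like:
//   for (const batch of chunkDocuments(products)) {
//     await client.index('products').addDocuments(batch);
//   }
```

Note that document additions in Meilisearch are asynchronous tasks: the request returns immediately and indexing happens in the background.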

3. Configuring Facets and Sorts

Faceted search lets users filter by brand or rating. In Meilisearch, you must explicitly define these attributes to keep the index optimized. This is a mandatory step for high-performance filtering.

curl -X PATCH 'http://localhost:7700/indexes/products/settings' \
  -H 'Authorization: Bearer MY_SECRET_MASTER_KEY' \
  -H 'Content-Type: application/json' \
  --data-binary '{
    "filterableAttributes": ["category", "brand", "rating"],
    "sortableAttributes": ["price", "rating"]
  }'
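Once those attributes are filterable, search requests can pass filter expressions such as `brand = "Samsung" AND rating >= 4`. A small sketch of building that string from UI state (this helper is my own, not part of the SDK):

```javascript
// Build a Meilisearch filter string from selected facet values.
// Values within one attribute are OR'd; attributes are AND'd together.
function buildFilter(selected) {
  return Object.entries(selected)
    .filter(([, values]) => values.length > 0)
    .map(([attr, values]) =>
      '(' + values.map((v) => `${attr} = "${v}"`).join(' OR ') + ')'
    )
    .join(' AND ');
}

console.log(buildFilter({ brand: ['Samsung', 'Apple'], category: ['phones'] }));
// (brand = "Samsung" OR brand = "Apple") AND (category = "phones")
```

The resulting string goes into the `filter` search parameter alongside the query text.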

4. Frontend: The Instant Feedback Loop

To achieve that “instant” feel, I used the meilisearch-js SDK. The frontend now bypasses the main API and queries Meilisearch directly. Because the engine is optimized for read-heavy workloads, it handles hundreds of concurrent requests with sub-50ms latency.

import { MeiliSearch } from 'meilisearch'

const client = new MeiliSearch({
  host: 'http://localhost:7700',
  apiKey: 'SEARCH_ONLY_PUBLIC_KEY', // Always use a restricted key for the client side!
})

const index = client.index('products')

async function searchProducts(query) {
  const results = await index.search(query, {
    facets: ['category', 'brand'],
    attributesToHighlight: ['title'],
  });
  return results;
}
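Even at low latencies, firing a request on literally every keystroke is wasteful. A short debounce keeps the instant feel while cutting request volume; here is a minimal sketch (the ~150ms delay is a judgment call, not a Meilisearch recommendation):

```javascript
// Delay invocation until the user pauses typing for `delay` ms,
// so only the last keystroke in a burst triggers a search.
function debounce(fn, delay = 150) {
  let timer;
  return (...args) => {
    clearTimeout(timer);
    timer = setTimeout(() => fn(...args), delay);
  };
}

// Usage: wire the input field to the debounced search.
// const onInput = debounce((q) => searchProducts(q), 150);
// inputEl.addEventListener('input', (e) => onInput(e.target.value));
```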

The Payoff: 12ms Response Times

The results were immediate. After deployment, average search latency dropped from 3,500ms to just 12ms. The search felt fluid. Even when users typed “samung,” the engine correctly identified “Samsung” thanks to its built-in Levenshtein distance algorithm.

The sidebar filters also saw a massive boost. Previously, clicking a category required heavy SQL joins. Now, those facets update instantly. By offloading search from the primary database, we saw a 40% reduction in overall DB CPU usage. This stabilized the entire application environment.

The Takeaway

Stop forcing your primary database to do everything. SQL is excellent for transactions and data integrity, but specialized search engines are built for speed and relevance. If your search feels sluggish, don’t just add another index to your SQL table. Move to a dedicated tool like Meilisearch. It completely transformed our user experience and, more importantly, kept the pager quiet at 2 AM.
