Monthly Archives: January 2026

Scoped Vector Search with the MyVector Plugin for MySQL — Part III

From Concepts to Production: Real-World Patterns, Query Plans, and What’s Next

In Part I, we introduced scoped vector search in MySQL using the MyVector plugin, focusing on how semantic similarity and SQL filtering work together.

In Part II, we explored schema design, embedding strategies, HNSW indexing, hybrid queries, and tuning — and closed with a promise to show real-world usage and execution behavior.

This final part completes the series.


Semantic Search with Explicit Scope

In real systems, semantic search is almost never global. Results must be filtered by tenant, user, or domain before ranking by similarity.

SELECT id, title
FROM knowledge_base
WHERE tenant_id = 42
ORDER BY
myvector_distance(embedding, ?, 'COSINE')
LIMIT 10;

This follows the same pattern introduced earlier in the series:

  • SQL predicates define scope
  • Vector distance defines relevance
  • MySQL remains in control of execution

Real-Time Document Recall (Chunk-Based Retrieval)

Document-level embeddings are often too coarse. Most AI workflows retrieve chunks.

SQL
SELECT chunk_text
FROM document_chunks
WHERE document_id = ?
ORDER BY
myvector_distance(chunk_embedding, ?, 'L2')
LIMIT 6;

This query pattern is commonly used for:

  • Knowledge-base lookups
  • Assistant context retrieval
  • Pre-RAG recall stages

Chat Message Memory and Re-Ranking

Chronological chat history is rarely useful on its own. Semantic re-ranking allows systems to recall relevant prior messages.

SQL
SELECT message
FROM chat_history
WHERE session_id = ?
ORDER BY
myvector_distance(message_embedding, ?, 'COSINE')
LIMIT 8;

The result set can be fed directly into an LLM prompt as conversational memory.


Using MyVector in RAG Pipelines

MyVector integrates naturally into Retrieval-Augmented Generation workflows by acting as the retrieval layer.

SQL
SELECT id, content
FROM documents
WHERE MYVECTOR_IS_ANN(
'mydb.documents.embedding',
'id',
?
)
LIMIT 12;

At this point:

  • Embeddings are generated externally
  • Retrieval happens inside MySQL
  • Generation happens downstream

No additional vector database is required.


Query Execution and Fallback Behavior

ANN Execution Path (HNSW Enabled)

Once an HNSW index is created and loaded, MySQL uses the ANN execution path provided by the plugin.
Candidate IDs are retrieved first, followed by row lookups.

This behavior is visible via EXPLAIN.
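As a rough sketch (the schema and column names here are illustrative, and the exact plan output depends on the MySQL and plugin versions), you can inspect the execution path like this:

SQL
EXPLAIN
SELECT id, title
FROM knowledge_base
WHERE MYVECTOR_IS_ANN(
'mydb.knowledge_base.embedding',
'id',
?
)
LIMIT 10;

When the HNSW index is loaded, the plan should show the plugin's ANN candidate retrieval feeding ordinary row lookups rather than a full table scan.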


Brute-Force Fallback (No HNSW Index)

When no ANN index is available, MyVector falls back to deterministic KNN evaluation.

SQL
SELECT id
FROM documents
ORDER BY
myvector_distance(embedding, ?, 'L2')
LIMIT 20;

This results in a full scan and sort — slower, but correct and predictable.

Understanding this fallback is critical for production sizing and diagnostics.


Project Update: MyVector v1.26.1

The project continues to move quickly.

MyVector v1.26.1 is now available, introducing enhanced Docker support for:

  • MySQL 8.4 LTS
  • MySQL 9.0

This release significantly improves the Docker-based setup experience for both versions.


Stop Moving Data — Start Searching It Where It Lives

Across all three parts, the conclusion is consistent:

Vector search does not require a separate database.

With MyVector, you can:

  • Keep data in MySQL
  • Apply strict SQL scoping
  • Use ANN when available
  • Fall back safely when it isn’t

All with observable execution plans and predictable behavior.


Join the Community

Development happens in the open.

Feedback and contributions are welcome.


Next Up: Powering AI-Ready MySQL — When MyVector Meets ProxySQL

The next step is production architecture.

In the next post, we’ll explore:

  • Integrated MCP Server
  • Improved Full Text Search operations
  • Routing vector-heavy queries with ProxySQL
  • Isolating ANN workloads from OLTP traffic
  • Designing AI-ready MySQL deployments that scale safely

MyVector brings semantic search into MySQL.
ProxySQL helps it run at scale.

Stay tuned…

Fiddling with MySQL MCP Server During Holidays

I got the MySQL MCP Server up and running, and it was time to try it for the first time against a sample data set outside of its test suite.

Let’s start with Stack Overflow and the value of its data for modern LLMs and AI systems. As most of you already know, the volume of questions in forums has dropped significantly since the launch of OpenAI’s ChatGPT.

Source : https://x.com/alojoh/status/2009366134513127462

This is probably true for traditional search engines like Google as well, although they’re already adopting built-in AI solutions. Other sources, such as Reddit, have also long provided quick, easily accessible information to the masses.

I wanted to spin up a subset of StackExchange questions, load it into a local MySQL server, and put the MCP server to the test to find out what’s going on.

The results aren’t just interesting, they’re intriguing. The reason is the questions at these forums and their answers: if those questions and answers stop being written, how will LLMs train themselves to give precise answers? They may end up relying on outdated sources and start hallucinating.


So, first thing, we spin up the latest version of the mysql-mcp-server and configure it to work with Claude Desktop per the instructions in the documentation. Then we grab the sample dataset from the StackExchange data dump site. You can ask Cursor, Gemini, or Claude for help; they can set up this sample in your local MySQL database server. There is also a GitHub repository for this data set.

My prompt: Explore Stack Exchange database


🚁 Stack Exchange: Drones & Model Aircraft – Complete Analysis

📊 Community Overview

Site: Drones and Model Aircraft Stack Exchange
Period: April 2020 – March 2024 (4 years)
Status: 🚨 Declining but quality community

Metric                  Value
Total Questions         908
Total Answers           1,277
Total Users             5,251
Total Votes             9,983
Avg Answers/Question    1.41
Acceptance Rate         49.45%
Avg Question Views      757

I will not copy and paste the rest of the conversation, as it’s a long one. However, the results on the subject of drones are amazing.

As a drone pilot for almost a decade, the following questions are seriously intriguing.

🔥 Most Viral Questions (5K+ Views)

Title                                             Views    Score  Answers
At what current should I charge a LiPo battery?   24,278   9      4
Why do brushless motors have 3 wires?             17,834   8      2
What do ESC beeps mean?                           16,444   5      2
How come DJI Fly App is NOT in Play Store?        15,826   12     4
How to configure servo in Betaflight?             12,052   5      2

Prompt: Why do multirotors usually have four propellers?

ACCEPTED ANSWER (27 upvotes) – by Kenn Sebesta

TL;DR: Four is the mathematical minimum for simple, robust control


The Physics: 4 Degrees of Freedom

For hovering aircraft, we need to control 4 degrees of freedom (DoF):

  1. Roll (tilt left/right)
  2. Pitch (tilt forward/backward)
  3. Yaw (rotate clockwise/counterclockwise)
  4. Thrust (up/down)

(We can control horizontal movement (x-y) by combining roll/pitch with thrust)

These are some samples. If you run it yourself, there are amazing insights into the data set. Not only that, but you can question the results, trends, and answers.

Going back to MySQL: as with any other database used out of the box, query optimization was terribly wrong. We dug in a bit further and discovered that most of the queries it ran for the above results were inefficient.

1. Row Count Query – 100x Faster ⚡

Problem: 9 separate full table scans
Solution: Use information_schema.TABLES metadata
Speedup: 2000ms → 20ms
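As a sketch of the rewrite (the schema name is illustrative), the nine per-table COUNT(*) scans collapse into a single metadata lookup. Note that TABLE_ROWS is an estimate for InnoDB, not an exact count, which is usually acceptable for exploratory overviews:

SQL
SELECT TABLE_NAME, TABLE_ROWS
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = 'stackexchange';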

2. Top Users with Posts – 50x Faster ⚡⚡

Problem: Joins ALL posts for ALL users before filtering
Solution: Filter to top 15 users FIRST, then join
Speedup: 800ms → 15ms
✅ Tested successfully – Works perfectly!
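A minimal sketch of the filter-first pattern, assuming StackExchange-style column names (users.Id, users.DisplayName, users.Reputation, posts.OwnerUserId): the derived table restricts the join to 15 rows before any posts are touched.

SQL
SELECT u.DisplayName, COUNT(p.Id) AS post_count
FROM (
  SELECT Id, DisplayName
  FROM users
  ORDER BY Reputation DESC
  LIMIT 15
) AS u
JOIN posts p ON p.OwnerUserId = u.Id
GROUP BY u.Id, u.DisplayName;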

3. Monthly Activity – 10x Faster ⚡

Problem: YEAR() and MONTH() functions prevent index usage
Solution: Use DATE_FORMAT() and index-friendly WHERE
Speedup: 500ms → 50ms
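The idea can be sketched as follows (column names are illustrative): the sargable range predicate on CreationDate lets an index do the filtering, and DATE_FORMAT() is only applied to the surviving rows for labeling.

SQL
SELECT DATE_FORMAT(CreationDate, '%Y-%m') AS month,
       COUNT(*) AS questions
FROM posts
WHERE CreationDate >= '2020-04-01'
  AND CreationDate <  '2024-04-01'
GROUP BY month
ORDER BY month;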

4. Vote Percentages – 10x Faster ⚡

Problem: Subquery recalculates total count for each group
Solution: Calculate once with CTE or window function
Speedup: 400ms → 40ms
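A window-function version might look like this (MySQL 8.0+, column names are illustrative): the grand total is computed once over the grouped result instead of via a correlated subquery per group.

SQL
SELECT VoteTypeId,
       COUNT(*) AS votes,
       100.0 * COUNT(*) / SUM(COUNT(*)) OVER () AS pct
FROM votes
GROUP BY VoteTypeId;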

5. Answer Distribution – 3x Faster ⚡

Problem: CASE expression in GROUP BY
Solution: Group by integer bucket IDs, then label
Speedup: 300ms → 100ms
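As a sketch (assuming StackExchange-style columns, where PostTypeId = 1 marks questions): grouping happens on a cheap integer bucket, and the human-readable labels are attached only afterward.

SQL
SELECT
  CASE bucket
    WHEN 0 THEN '0 answers'
    WHEN 1 THEN '1 answer'
    ELSE '2+ answers'
  END AS label,
  cnt
FROM (
  SELECT LEAST(AnswerCount, 2) AS bucket, COUNT(*) AS cnt
  FROM posts
  WHERE PostTypeId = 1
  GROUP BY bucket
) AS t
ORDER BY bucket;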

This led me to create an optimization guide for the Claude agent.

In conclusion, MCP servers are great resources for exploring data sets. With some experimentation and guidance, they can reveal highly valuable analytics use cases, including marketing and sales analyses that would normally take too much time and effort to cover.

Next up is token usage. If you are also wondering, “Where have all my tokens gone using these AI tools?” I have some thoughts on that topic, too.