Tag Archives: mysql

MySQL MCP Server v1.7.0 is out

April 19, 2026

It took three release candidates and more CI tweaks than I’d like to admit, but v1.7.0 is finally tagged GA. Here’s what actually changed and why it matters.


The thing I kept getting asked about: add_connection

Almost every multi-database user hits the same wall: you configure your connections at startup, and that’s it. Want to point Claude at a different instance mid-session? Restart the server. Not great.

add_connection fixes that. Enable it with MYSQL_MCP_EXTENDED=1 and MYSQL_MCP_ENABLE_ADD_CONNECTION=1, and Claude can register a new named connection on the fly — DSN validation, duplicate-name rejection, and a hard block on the root MySQL user all happen before the connection is accepted. Once it’s in, use_connection works as usual.
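Enabling it is just the two environment flags from the release notes; nothing else is required:

```shell
# Both flags are required; neither is on by default
export MYSQL_MCP_EXTENDED=1
export MYSQL_MCP_ENABLE_ADD_CONNECTION=1
```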

It’s intentionally opt-in behind two flags. Allowing an AI client to register arbitrary database connections at runtime warrants an explicit “yes, I want this” from the operator.


Finding stuff across a big schema: search_schema and schema_diff

Two tools whose absence I personally felt every time I was debugging a large schema.

search_schema does what it sounds like — pattern-match against table and column names across all accessible databases. Before this, you’d either write the query yourself or ask Claude to guess where a column lived. Now you just ask.

schema_diff is the one I’m more excited about. Point it at two databases, and it tells you what’s structurally different. Columns that exist in staging but not prod, type mismatches, missing indexes — all surface immediately. We’ve already caught more than a few “oh, that migration never ran” moments with it.


Pagination, retries, and the unglamorous stuff

run_query now supports an offset parameter for SELECT and UNION queries, returning has_more and next_offset in the response. Big result sets no longer mean hitting row caps and wondering what you missed.
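Client-side, paging with these fields is a simple loop. A sketch, assuming a `run_query` callable that wraps the MCP tool call — its exact signature is hypothetical, but the `rows`, `has_more`, and `next_offset` response fields are the ones described above:

```python
def fetch_all(run_query, sql, page_size=500):
    """Page through a large SELECT using offset / has_more / next_offset."""
    rows, offset = [], 0
    while True:
        resp = run_query(sql, limit=page_size, offset=offset)
        rows.extend(resp["rows"])
        if not resp.get("has_more"):
            return rows
        # pick up exactly where the previous page ended
        offset = resp["next_offset"]
```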

Retries got a proper implementation too. Transient errors — bad pooled connections, deadlocks, lock wait timeouts — now trigger exponential backoff instead of just failing. After a driver.ErrBadConn the pool is re-pinged, which cuts recovery time noticeably after a MySQL restart.
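The pattern is standard exponential backoff. A minimal Python sketch of the idea — not the server’s actual implementation; the error classification and delay constants here are stand-ins:

```python
import random
import time

# Stand-ins for the transient error classes described above
TRANSIENT = {"bad_conn", "deadlock", "lock_wait_timeout"}

def with_retries(op, attempts=4, base_delay=0.05, sleep=time.sleep):
    """Retry a flaky operation with exponential backoff and jitter."""
    for attempt in range(attempts):
        try:
            return op()
        except RuntimeError as err:
            if str(err) not in TRANSIENT or attempt == attempts - 1:
                raise  # non-transient, or out of attempts: give up
            # delay grows as base * 2^attempt, with random jitter
            sleep(base_delay * (2 ** attempt) * (1 + random.random()))
```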

Neither of these is flashy, but they’re the kind of thing that makes the tool feel solid rather than fragile.


Column masking

Set MYSQL_MCP_MASK_COLUMNS=email,password,ssn and those columns are redacted in every run_query response. Nothing leaves the server. No query rewrites, no application changes. It’s a small feature that a few teams have been asking for since before v1.6.


One breaking change worth knowing about: SSH host key verification

This one could bite you on upgrade if you’re using SSH tunnels. Host key verification is now on by default. The tunnel checks ~/.ssh/known_hosts (or MYSQL_SSH_KNOWN_HOSTS, or a pinned MYSQL_SSH_HOST_KEY_FINGERPRINT) before allowing the connection.

If you were running without strict host key checking, your tunnel will fail after upgrading until you either add the host key to known_hosts or explicitly opt out with MYSQL_SSH_STRICT_HOST_KEY_CHECKING=false. The opt-out exists, but it’s a MITM risk — the default is the right behavior.
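If you’d rather fix forward than opt out, seeding known_hosts before the upgrade is usually enough (the hostname below is a placeholder — and verify the fingerprint out of band before trusting it):

```shell
# Record the tunnel host's current key in hashed known_hosts format
ssh-keyscan -H bastion.example.com >> ~/.ssh/known_hosts
```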


Upgrading

# Homebrew
brew update && brew upgrade mysql-mcp-server
# Docker
docker pull ghcr.io/askdba/mysql-mcp-server:latest

Full changelog: github.com/askdba/mysql-mcp-server/releases/tag/v1.7.0

Questions and issues are welcome on GitHub.


MyVector v1.26.3: Maintenance, CI, and Readiness for MySQL 9.7


In my recent series on Scoped Vector Search, we looked at the query patterns that make vector search a first-class citizen in MySQL. While the logic for those searches is now established, the infrastructure supporting them requires constant attention as the MySQL ecosystem moves toward its new release model.

Today, I’m announcing MyVector v1.26.3. This is a foundational release focused on environment compatibility and CI/CD robustness.

What’s in v1.26.3?

This release ensures that MyVector remains stable and buildable across the shifting landscape of MySQL Innovation and LTS releases.

  • MySQL 8.4 & 9.6 Compatibility: We’ve updated the component sources and build logic to align with the headers and requirements for MySQL 8.4 (LTS) and the 9.6 Innovation release.
  • Ready for 9.7: The build system has been adapted to handle the upcoming 9.7 release, ensuring that users can transition to the next Innovation branch without delay.
  • Modernized Release Workflow: We’ve bumped our GitHub Actions (softprops/action-gh-release) from v1 to v2. While invisible to the user, this ensures our release pipeline remains secure and compatible with the latest GitHub runner environments.

Think of v1.26.3 as the “maintenance and readiness” layer that ensures the high-performance HNSW search you rely on continues to compile and run perfectly on the newest versions of MySQL.

Looking Ahead: The Architecture Pivot (PR #76)

While v1.26.3 keeps us current, the real excitement is happening in the lab.

There is a fundamental architecture change currently in development in PR #76, “Component migration (8.4–9.6) and release workflow update.”

Unlike the compatibility fixes in today’s release, PR #76 is a structural overhaul. We are re-engineering how the plugin interacts with the MySQL core. This shift is designed to move MyVector closer to a full Component Architecture, which will eventually offer better lifecycle management and even deeper integration with MySQL’s internal services.

This is a significant pivot in how MyVector is built, and it will set the stage for the next generation of vector performance and observability.

Summary

v1.26.3 is the stable, verified update you need for today’s MySQL 8.4/9.6 environments and tomorrow’s 9.7 upgrade. Meanwhile, work continues on the architectural evolution that will define the future of the project.

Scoped Vector Search with the MyVector Plugin for MySQL — Part III

From Concepts to Production: Real-World Patterns, Query Plans, and What’s Next

In Part I, we introduced scoped vector search in MySQL using the MyVector plugin, focusing on how semantic similarity and SQL filtering work together.

In Part II, we explored schema design, embedding strategies, HNSW indexing, hybrid queries, and tuning — and closed with a promise to show real-world usage and execution behavior.

This final part completes the series.


Semantic Search with Explicit Scope

In real systems, semantic search is almost never global. Results must be filtered by tenant, user, or domain before ranking by similarity.

SELECT id, title
FROM knowledge_base
WHERE tenant_id = 42
ORDER BY
myvector_distance(embedding, ?, 'COSINE')
LIMIT 10;

This follows the same pattern introduced earlier in the series:

  • SQL predicates define scope
  • Vector distance defines relevance
  • MySQL remains in control of execution

Real-Time Document Recall (Chunk-Based Retrieval)

Document-level embeddings are often too coarse. Most AI workflows retrieve chunks.

SELECT chunk_text
FROM document_chunks
WHERE document_id = ?
ORDER BY
myvector_distance(chunk_embedding, ?, 'L2')
LIMIT 6;

This query pattern is commonly used for:

  • Knowledge-base lookups
  • Assistant context retrieval
  • Pre-RAG recall stages

Chat Message Memory and Re-Ranking

Chronological chat history is rarely useful on its own. Semantic re-ranking allows systems to recall relevant prior messages.

SELECT message
FROM chat_history
WHERE session_id = ?
ORDER BY
myvector_distance(message_embedding, ?, 'COSINE')
LIMIT 8;

The result set can be fed directly into an LLM prompt as conversational memory.


Using MyVector in RAG Pipelines

MyVector integrates naturally into Retrieval-Augmented Generation workflows by acting as the retrieval layer.

SELECT id, content
FROM documents
WHERE MYVECTOR_IS_ANN(
'mydb.documents.embedding',
'id',
?
)
LIMIT 12;

At this point:

  • Embeddings are generated externally
  • Retrieval happens inside MySQL
  • Generation happens downstream

No additional vector database is required.
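The division of labor can be sketched in a few lines of Python. Everything here is a stand-in: `embed` and `generate` are your external services, and `retrieve` would run the MYVECTOR_IS_ANN query above:

```python
def rag_answer(question, embed, retrieve, generate, k=12):
    """Minimal RAG pipeline shape: MySQL is only the retrieval layer."""
    qvec = embed(question)            # external embedding model
    docs = retrieve(qvec, k)          # the SQL retrieval step above
    context = "\n\n".join(d["content"] for d in docs)
    return generate(question, context)  # downstream LLM call
```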


Query Execution and Fallback Behavior

ANN Execution Path (HNSW Enabled)

Once an HNSW index is created and loaded, MySQL uses the ANN execution path provided by the plugin.
Candidate IDs are retrieved first, followed by row lookups.

This behavior is visible via EXPLAIN.


Brute-Force Fallback (No HNSW Index)

When no ANN index is available, MyVector falls back to deterministic KNN evaluation.

SELECT id
FROM documents
ORDER BY
myvector_distance(embedding, ?, 'L2')
LIMIT 20;

This results in a full scan and sort — slower, but correct and predictable.

Understanding this fallback is critical for production sizing and diagnostics.
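For intuition, the fallback is equivalent to this plain-Python scan-and-sort (illustrative only; the real evaluation happens inside the plugin):

```python
import math

def knn_bruteforce(rows, query, k=20):
    """Deterministic KNN: score every row on L2 distance, sort, take top-k."""
    def l2(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    # full scan + sort: O(n log n), but exact and predictable
    return sorted(rows, key=lambda r: l2(r["embedding"], query))[:k]
```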


Project Update: MyVector v1.26.1

The project continues to move quickly.

MyVector v1.26.1 is now available, introducing enhanced Docker support for:

  • MySQL 8.4 LTS
  • MySQL 9.0

This release significantly improves the Docker experience on both versions.


Stop Moving Data — Start Searching It Where It Lives

Across all three parts, the conclusion is consistent:

Vector search does not require a separate database.

With MyVector, you can:

  • Keep data in MySQL
  • Apply strict SQL scoping
  • Use ANN when available
  • Fall back safely when it isn’t

All with observable execution plans and predictable behavior.


Join the Community

Development happens in the open:

Feedback and contributions are welcome.


Next Up: Powering AI-Ready MySQL — When MyVector Meets ProxySQL

The next step is production architecture.

In the next post, we’ll explore:

  • Integrated MCP Server
  • Improved Full Text Search operations
  • Routing vector-heavy queries with ProxySQL
  • Isolating ANN workloads from OLTP traffic
  • Designing AI-ready MySQL deployments that scale safely

MyVector brings semantic search into MySQL.
ProxySQL helps it run at scale.

Stay tuned…

2025 Rewind and Thank You

I’m grateful to all my professional and personal networks for this year. It has been full of tears, sweat, and blood all over my face once again. Let’s not worry about that. I want to start with a big Thank You to all of you who made this year possible.

Here’s what stood out in 2025, just before we hit 2026.

Oracle ACE Pro 

I was thrilled to be nominated to the Oracle ACE Program as an ACE Pro in April. This recognition opened doors to launch a technical blog series on vector search and AI integration with MySQL.

Project Antalya at Altinity, Inc. 

We announced native Iceberg catalog and Parquet support on S3 for ClickHouse. This pushes the boundaries of what’s possible with open lakehouse analytics.

MySQL MCP Server 

Introduced a lightweight, secure MySQL MCP server bridging relational databases and LLMs. Practical AI integration starts with safety and observability.

FOSDEM & MySQL’s 30th Birthday 

I’m heading into one of my busiest agendas in ten years: the MySQL Devroom Committee, a talk, and an O’Reilly book signing for #mysqlcookbook4e, plus six talks from Altinity.

O’Reilly Recognition 

After 50+ hours of flights for conferences, I came home to O’Reilly’s all-time recognition for the MySQL Cookbook. It was a moment I won’t forget.

Sailing While Working 

Once again, months at sea with salt, humidity, and wind were challenging. We handled tickets, RCAs, and meetings. We even recorded a podcast on ferry maneuvering. Born to sail, forced to work, making it work anyway.

I am immensely grateful to the #MySQL, #ClickHouse, and #opensource communities. Thank you to my co-authors Sveta Smirnova and Ibrar Ahmed. I also thank my nominator, Vinicius Grippa. I appreciate the Altinity team and every conference organizer who gave me a stage this year.

Recognition is an invitation to contribute more, not a finish line. Looking forward to more open-source collaboration in 2026.

If you’re passionate about open-source databases, MySQL, ClickHouse, or AI integration, or just want to connect, reach out.

#opensource #mysql #clickhouse #oracleacepro #ai #vectorsearch #sailing #LinkedInRewind #Coauthor #2025wrapped

Introducing Lightweight MySQL MCP Server: Secure AI Database Access


A lightweight, secure, and extensible MCP (Model Context Protocol) server for MySQL designed to bridge the gap between relational databases and large language models (LLMs).

I’m releasing a new open-source project: mysql-mcp-server, a lightweight server that connects MySQL to AI tools via the Model Context Protocol (MCP). It’s designed to make MySQL safely accessible to language models: structured, read-only, and fully auditable.

This project started out of a practical need: as LLMs become part of everyday development workflows, there’s growing interest in using them to explore database schemas, write queries, or inspect real data. But exposing production databases directly to AI tools is a risk, especially without guardrails.

mysql-mcp-server offers a simple, secure solution. It provides a minimal but powerful MCP server that speaks directly to MySQL, while enforcing safety, observability, and structure.

What it does

mysql-mcp-server allows tools that speak MCP, such as Claude Desktop, to interact with MySQL in a controlled, read-only environment. It currently supports:

  • Listing databases, tables, and columns
  • Describing table schemas
  • Running parameterized SELECT queries with row limits
  • Introspecting indexes, views, triggers (optional tools)
  • Handling multiple connections through DSNs
  • Optional vector search support if using MyVector
  • Running as either a local MCP-compatible binary or a remote REST API server

By default, it rejects any unsafe operations such as INSERT, UPDATE, or DROP. The goal is to make the server safe enough to be used locally or in shared environments without unintended side effects.
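Conceptually the guard is a statement allowlist. A naive sketch of the idea (the server’s actual validation is stricter, and the keyword set here is an assumption):

```python
import re

# Assumed set of read-only leading keywords
ALLOWED = {"select", "show", "describe", "explain", "with"}

def is_read_only(sql: str) -> bool:
    """Reject any statement whose leading keyword isn't on the allowlist."""
    match = re.match(r"\s*([A-Za-z]+)", sql)
    return bool(match) and match.group(1).lower() in ALLOWED
```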

Why this matters

As more developers, analysts, and teams adopt LLMs for querying and documentation, there’s a gap between conversational interfaces and real database systems. Model Context Protocol helps bridge that gap by defining a set of safe, predictable tools that LLMs can use.

mysql-mcp-server brings that model to MySQL in a way that respects production safety while enabling exploration, inspection, and prototyping. It’s helpful in local development, devops workflows, support diagnostics, and even hybrid RAG scenarios when paired with a vector index.

Getting started

You can run it with Docker:

docker run -e MYSQL_DSN='user:pass@tcp(mysql-host:3306)/' \
  -p 7788:7788 ghcr.io/askdba/mysql-mcp-server:latest

Or install via Homebrew:

brew install askdba/tap/mysql-mcp-server
mysql-mcp-server

Once running, you can connect any MCP-compatible client (like Claude Desktop) to the server and begin issuing structured queries.
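For Claude Desktop specifically, that usually means an entry in claude_desktop_config.json along these lines (the server name and env handling are assumptions; check the project README for the exact entry):

```json
{
  "mcpServers": {
    "mysql": {
      "command": "mysql-mcp-server",
      "env": {
        "MYSQL_DSN": "user:pass@tcp(mysql-host:3306)/"
      }
    }
  }
}
```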

Use cases

  • Developers inspecting unfamiliar databases during onboarding
  • Data teams writing and validating SQL queries with AI assistance
  • Local RAG applications using MySQL and vector search with MyVector
  • Support and SRE teams needing read-only access for troubleshooting

Roadmap and contributions

This is an early release and still evolving. Planned additions include:

  • More granular introspection tools (e.g., constraints, stored procedures)
  • Connection pooling and config profiles
  • Structured logging and tracing
  • More examples for integrating with LLM environments

If you’re working on anything related to MySQL, open-source AI tooling, or database accessibility, I’d be glad to collaborate.

Learn more

If you have feedback, ideas, or want to contribute, the project is open and active. Pull requests, bug reports, and discussions are all welcome.