How Can AI Talk to My PostgreSQL Database?

I admittedly have some work to do to catch up with the AI “trend”. It’s been around (as in, easily accessible) for a few years now, but I can probably still count on my fingers the number of times I’ve used a prompt to ask it anything. That is, discounting the mostly frustrating and usually unrequested interactions with a service provider chatbot of some form… How come most of them still do such a lousy job? I was told part of the problem is related more to a lack of proper domain knowledge than to the current state of AI: if the bots are fed with bad documentation and incomplete or inaccurate data, they can’t possibly provide (or, rather, fabricate) good advice. Thus, the ambition is to give AI access to more than a pile of PDFs that were put together a few months ago.

MCP is an open protocol that standardizes how applications provide context to LLMs. Think of MCP like a USB-C port for AI applications. Just as USB-C provides a standardized way to connect your devices to various peripherals and accessories, MCP provides a standardized way to connect AI models to different data sources and tools.

In short, while we cannot (for now, at least) simply give AI the coordinates to access our e-mail and database accounts and expect it to find its way to them, we can put it in contact with someone who can. Actually, not someone, but another service, a middleman of sorts, which can both access data (and services, too) that are not publicly available and communicate with AI using a common language.

The use of such intermediary agents is deliberate: it gives us control over which portions of our data and services we want AI to have access to, and lets us define in which contexts we want this to happen. Context is a catchy word in the AI world, but I take it as follows: it is not because we provide AI access to our e-mail accounts (considering we are brave enough to do that) that we want it to take into consideration the data it will find there for each and every question we ask it.

Thus, we can be specific and selective: we may provide AI with access to that information only when we find it relevant. The intermediary agents that make this possible are called MCP servers; if you haven’t heard the name before, you, too, have some catching up to do.

What You Need to Know About MCP

You can find a collection of Model Context Protocol servers in the MCP GitHub repository. When I first looked at this page a couple of months ago, the list of reference servers was quite long. Now it is limited to half a dozen reference implementations, while most other servers that were once part of that list have been archived.

I suppose the reason for that is simple: things are moving (and changing) very fast in this area. Thanks to the availability of standard frameworks, it became ridiculously easy to create a new MCP server from scratch.

Requirements for Creating an MCP Server

If you are creating an MCP server to connect with AI, there are two requirements to keep in mind:

1) The MCP server needs to be accessible through a public URL. Maybe it’s just my inexperience, but I thought that, since we use the AI provider’s own API locally, it could interact with a service that also sits locally. It cannot; the AI “brain” lives abroad. This requirement is properly advertised in some places, less so in others.

2) AI can only connect to an MCP server over HTTPS, and self-signed certificates are not allowed (you can make it work with some hacking, though, depending on how you do it, at the expense of security).

Creating a PostgreSQL MCP Server

The following is my simple procedure to get a PostgreSQL MCP server up and running. Note the database connection details; I’ll talk about them in a moment.
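The original snippet didn’t survive here, but a minimal sketch of such a server might look like the following, assuming FastMCP 2.x and psycopg2. The tool name (`query`), the connection details, and the database name are placeholders, not the exact values from my setup:

```python
# Hypothetical PostgreSQL MCP server sketch using FastMCP and psycopg2.
# The DSN credentials and the "query" tool name are placeholders.
import psycopg2
from fastmcp import FastMCP

mcp = FastMCP("postgres")

DSN = "host=localhost dbname=test user=postgres password=secret"  # placeholder

@mcp.tool()
def query(sql: str) -> list:
    """Run a SQL query against the test database and return the rows."""
    with psycopg2.connect(DSN) as conn:
        with conn.cursor() as cur:
            cur.execute(sql)
            return cur.fetchall()

if __name__ == "__main__":
    # SSE transport on port 8000, matching the setup described in the text
    mcp.run(transport="sse", host="0.0.0.0", port=8000)
```

Exposing a raw `query(sql)` tool is convenient for a demo, though in practice you would want to restrict it to read-only statements.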

You should see it running on localhost, port 8000, using the Server-Sent Events (SSE) transport; most AI APIs can communicate over SSE or streamable HTTP, but not using the default Stdio method.

A simple database to test with

My PostgreSQL test database is deliberately simple; one table with two rows. But in order to test that the server works, we need to try connecting to it. To do this, we can create a second uv virtual environment on the test server, install fastmcp, and start an ipython3 session:
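The session itself is not shown here, but a test client along these lines should work, assuming the server sketch above exposes a tool named `query` and the test table is the `actors` table mentioned later in the article (both names are assumptions):

```python
# Hypothetical FastMCP test client; run with fastmcp installed and the
# MCP server from the previous step listening on localhost:8000.
import asyncio
from fastmcp import Client

async def main():
    # The /sse suffix tells the client to use the SSE transport
    async with Client("http://localhost:8000/sse") as client:
        tools = await client.list_tools()
        print("Available tools:", [t.name for t in tools])
        result = await client.call_tool("query", {"sql": "SELECT * FROM actors"})
        print(result)

asyncio.run(main())
```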

Providing the API key as an environment variable avoids the need to provide it as an argument when creating the Anthropic client.

Connecting to an MCP Server Using Anthropic’s API

In order for (most) AI APIs to be able to connect to an MCP server, it must be reachable through a public URL and served over HTTPS. One way to achieve this is to run the Python code of the MCP server on a web server that can do HTTPS. Another one is to use Nginx as a reverse proxy in front of it.

In both cases, you still need to configure valid SSL certificates or relax security constraints for those in FastMCP.
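For illustration, a minimal Nginx reverse proxy stanza for the SSE endpoint might look like this; the server name and certificate paths are placeholders, and `proxy_buffering off` matters because SSE is a long-lived streaming connection:

```nginx
server {
    listen 443 ssl;
    server_name mcp.example.com;                     # placeholder

    ssl_certificate     /etc/ssl/certs/mcp.pem;      # placeholder
    ssl_certificate_key /etc/ssl/private/mcp.key;    # placeholder

    location /sse {
        proxy_pass http://127.0.0.1:8000;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_buffering off;       # required for Server-Sent Events
        proxy_read_timeout 3600s;  # keep long-lived streams open
    }
}
```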

Using ngrok and Anthropic’s API

I resigned myself to just doing what their documentation recommended for testing: I created an account on ngrok and then followed the simple instructions there to get an authentication token. Then, it took three steps to expose my MCP server to the Internet using their service:
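Those three steps look roughly like this; the token is a placeholder obtained from ngrok’s dashboard, and the install method varies per platform:

```shell
# 1) Install the ngrok agent (one of several install methods)
sudo snap install ngrok
# 2) Register the authentication token from the ngrok dashboard
ngrok config add-authtoken <YOUR_TOKEN>
# 3) Expose the local MCP server, which is listening on port 8000
ngrok http 8000
```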

It creates a public URL and forwards requests to the local server. It’s this endpoint that we should provide to the AI API to connect to our MCP server, with one important detail to add: the service is available under the /sse location. Thus, the target URL is actually https://&lt;your-subdomain&gt;.ngrok-free.app/sse/.

AI, here’s how you can access my data

I tried two AI products, Claude and ChatGPT. In order to access the respective APIs from Anthropic and OpenAI, you need to create an account and generate an API key.

But, unless you have some credits with them (or know something I don’t), they won’t provide free service for API access. I decided to give Anthropic a go instead and purchased some credits with them. FastMCP has good documentation on how to integrate your MCP server with Anthropic’s API (as well as OpenAI’s and Gemini’s).

I did more or less what they suggested there, while also checking Anthropic’s own documentation on MCP server integration.

Trying it out with Claude

From my laptop, which is also running Ubuntu, I started a uv virtual environment, installed the Anthropic Python module, configured my API key as an environment variable, and started an ipython3 session:
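Roughly, those steps were as follows (the key value is a placeholder; the real one comes from the Anthropic console):

```shell
uv venv
source .venv/bin/activate
uv pip install anthropic ipython
export ANTHROPIC_API_KEY="sk-ant-..."   # placeholder
ipython3
```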

We can start the chat now.

In hindsight, that was a lousy example; I should have come up with a better test database idea and prompt. But Claude’s response provided an interesting perspective on how it both interpreted my request and worked around my imperfect English.
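For reference, the request might have looked roughly like this. The `mcp_servers` parameter and the `betas` flag come from Anthropic’s MCP connector, which was still in beta at the time of writing; the model name, prompt, and URL are placeholders:

```python
import anthropic

client = anthropic.Anthropic()  # picks up ANTHROPIC_API_KEY from the environment

response = client.beta.messages.create(
    model="claude-sonnet-4-20250514",   # placeholder model name
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": "What can you tell me about the actors in my database?",
    }],
    mcp_servers=[{
        "type": "url",
        "url": "https://<your-subdomain>.ngrok-free.app/sse",  # placeholder
        "name": "postgres",
    }],
    betas=["mcp-client-2025-04-04"],
)
print(response.content)
```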

Here’s a more human-readable excerpt from the response.content block:
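I can’t reproduce the original excerpt here, but extracting the readable part programmatically can be done along these lines. The content-block shape is mocked for illustration; in the real SDK, text blocks have `type == "text"` and a `.text` attribute:

```python
from dataclasses import dataclass

# Mock of an Anthropic content block, for illustration only;
# response.content is a list of such blocks.
@dataclass
class TextBlock:
    type: str
    text: str

def readable(blocks):
    """Join the text of all text-type blocks into one string."""
    return "\n".join(b.text for b in blocks if b.type == "text")

content = [TextBlock("text", "The actors table has two rows.")]
print(readable(content))
```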

It’s interesting to see the “chain of thought” (if we are allowed to call it this way) that Claude employed to answer my question, including recovering from a failed assumption it made initially.

[ME]: Are there any indexes that could be added to the table actors to improve that query?

This time, I formatted AI’s response like this:

Which resulted in the following output:

Although it’s a simple example, it helps illustrate both the potential AI has in retrieving and making sense of data in a private database and the rough edges that require further development.

Conclusion

How about you, what would you ask AI if it could access your database? Also, check out part two, How Can AI Talk to My Database Part Two: MySQL and Gemini.