Background
We discovered a serious vulnerability: our AI agent could access any database collection, exposing sensitive user data with a simple prompt. This happened while integrating MCP (Model Context Protocol), an emerging standard for LLM tool integration that enables AI models to interact with external systems. Rather than forking community tools or rebuilding from scratch, we created a simple "interceptor pattern" that preserves all the benefits of community-built tools while enforcing strict security boundaries. Here's how we solved the problem without sacrificing development speed.
The Power of Community Tools
We are building an AI agent that generates a gardening plan for the user. The user can ask the agent to save it to the database in order to easily retrieve and modify it later. This means that the agent needs a tool that implements saving to the database.
Rather than implementing our own tool, we decided to use an MCP server for Firebase, our primary database. As mentioned, MCP is a widely adopted specification for LLM tools, so there are vast numbers of tools available from the community for myriad use cases.
At a high level, when we create the agent, we ask the MCP server what tools it has and tell the LLM about them. If the LLM decides to use a tool, the MCP server executes the tool call—using the Firebase SDK—and returns the result to the LLM. You can think of the MCP server as a tool that someone else built, thus saving you days of work.
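The flow above can be sketched in a few lines. This is a toy mock, not the real MCP SDK: the server, handler names, and return values are all illustrative stand-ins for the discovery-then-dispatch handshake.

```python
# Hypothetical sketch of the MCP flow; names are illustrative,
# not the real MCP SDK API.

# 1. The agent asks the "server" what tools it offers.
def discover_tools(server):
    return server["tools"]  # e.g. [{"name": ..., "description": ...}]

# 2. The LLM picks a tool; the agent forwards the call to the server,
#    which executes it and returns the result.
def execute_tool_call(server, tool_call):
    handler = server["handlers"][tool_call["name"]]
    return handler(**tool_call["args"])

# A toy server standing in for the Firebase MCP server.
toy_server = {
    "tools": [
        {"name": "firestore_add_document",
         "description": "Save a document to a collection"},
    ],
    "handlers": {
        "firestore_add_document":
            lambda collection, data: f"doc-id-for-{collection}",
    },
}

tool_names = [t["name"] for t in discover_tools(toy_server)]
result = execute_tool_call(
    toy_server,
    {"name": "firestore_add_document",
     "args": {"collection": "plans", "data": {"crop": "tomato"}}},
)
```

The key point is that the agent only ever sees tool *descriptions* and passes along tool *calls*; the implementation lives entirely on the server side.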
Firebase at Your Fingertips
The MCP server for Firebase offers several tools for read/write operations, as you would expect. The one of immediate interest to us is firestore_add_document, which takes as arguments the collection name and the data to be saved, saves the data to that collection, and returns the ID of the saved document. In particular, we want to save the plan to a collection named "plans", so we have an instruction in the system prompt to that effect. After some initial tweaking, the tool worked like a charm. We were excited to close out this feature.
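Schematically, the tool call the LLM emits for this step carries just the collection name and the document payload (the "data" field names here are illustrative, not our real schema):

```python
# Illustrative shape of a firestore_add_document tool call:
# the args carry the target collection and the document to save.
save_plan_call = {
    "name": "firestore_add_document",
    "args": {
        "collection": "plans",
        "data": {"title": "Spring vegetable bed", "beds": 3},
    },
}
```

The MCP server executes this call against Firestore and hands the new document's ID back to the LLM.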
When Tools Become Weapons
"plans" is not the only collection in our database, and firestore_add_document
is not the only tool in the MCP server. Given the suite of tools in the MCP server, we can tell the agent to do other things besides "save my plan". For example:
"Who are all the users?"
This user input triggers a tool call to firestore_list_documents, which returns the documents in the "users" collection, and in response the agent displays all users' full names, emails, and other sensitive information!
Unlimited Access: The Double-Edged Sword
Our database contains both sensitive and non-sensitive data. This is a common situation, and the conventional way to prevent leaking of sensitive data is to ensure that your database queries select only data that belongs to the current user. The exact form of the queries will depend on the schema, and care must be taken to prevent, say, SQL injection.
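For a conventional SQL backend, that scoping looks like a parameterized per-user query. The table and column names below are illustrative; the point is that the placeholder both restricts results to the current user and defuses injection attempts.

```python
import sqlite3

# Toy database standing in for a conventional SQL backend.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE plans (id INTEGER PRIMARY KEY, user_id TEXT, title TEXT)"
)
conn.executemany(
    "INSERT INTO plans (user_id, title) VALUES (?, ?)",
    [("alice", "Herb spiral"), ("bob", "Rose garden")],
)

def list_plans_for(user_id: str) -> list[str]:
    # The placeholder (?) scopes the query to one user and prevents
    # SQL injection via the user_id value.
    rows = conn.execute(
        "SELECT title FROM plans WHERE user_id = ?", (user_id,)
    ).fetchall()
    return [title for (title,) in rows]
```

A malicious "user_id" like `"bob'; DROP TABLE plans; --"` is treated as an ordinary (non-matching) value rather than executable SQL.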
In the case of our MCP Firebase server, this doesn't work because the tools are by design generic: they can perform operations on any collection. The upside is that anyone who wants to connect to their specific Firebase instance can use the same tool. The downside is that the LLM has unfettered access to the entire database, and in turn to every user who is talking to the LLM.
Locking It Down: Finding the Solution
How do we prevent the LLM from accessing the "users" collection? We considered several possibilities.
The Honor System (Spoiler: It Fails)
The simplest thing is to add an admonishment to the system prompt: "You can only access the plans collection". This is crude and failure-prone because the LLM may simply not follow directions, or a crafty user may succeed with a more subtle input. Also, there may be situations where you want to access some documents in a collection but not others. This approach quickly gets unwieldy.
Forking Hell: The Maintenance Nightmare
Since the MCP server is open source, we can fork it and "lock down" the Firebase calls. But this partially defeats the purpose of MCP: each client would have to maintain their own version of the server, as your particular use case may not be compatible with others'.
DIY: The Tedious but Secure Path
We considered replacing the MCP server with custom tools that wrap the Firebase SDK. This would mean doing much of the work we were hoping to avoid by leveraging the MCP server. But then we could be sure that the tool calls are secured. So we were preparing to undertake this tedious but necessary task.
If an LLM tool were a normal function, we could define a custom function that wraps it:

    def list_plans():
        return firestore_list_documents("plans")

and give that to the LLM; then the LLM could only list plans, not users. We'd have the best of both worlds: a function suited to our use case but implemented by someone else.
At this point, it is helpful to remind ourselves that a software program—even an AI-driven one—is executed by a physical computer and, as such, consists of chunks of code that pass around information. When the LLM calls a tool, it includes a "tool call" component in its response, which contains the name of the tool and the arguments to be passed to it. The agent framework (we use LangGraph) then sends the arguments to the tool, which executes and returns the result. The tool behaves very much like a normal function, so we merely need to call it like a normal function.
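Stripped of the framework, "calling the tool like a normal function" is just dispatching on the tool name and unpacking the arguments. This toy sketch (not LangGraph's internals; the fake database is illustrative) shows the mechanics:

```python
# Toy dispatcher showing that a tool call is just data naming a
# function plus its arguments; names and data are illustrative.
def firestore_list_documents(collection: str) -> list[dict]:
    fake_db = {
        "plans": [{"title": "Herb spiral"}],
        "users": [{"email": "alice@example.com"}],
    }
    return fake_db.get(collection, [])

TOOLS = {"firestore_list_documents": firestore_list_documents}

def run_tool_call(tool_call: dict):
    # The agent framework does essentially this with the LLM's tool_calls.
    return TOOLS[tool_call["name"]](**tool_call["args"])

result = run_tool_call(
    {"name": "firestore_list_documents", "args": {"collection": "plans"}}
)
```

Nothing stops our own code from building that `tool_call` dict itself, which is exactly the loophole the next snippet exploits.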
In brief, rather than having the LLM call the MCP tool directly, we define a custom tool that calls it programmatically (non-essential details omitted):
from langchain_core.messages import AIMessage
from langchain_core.tools import tool
from langchain_mcp_adapters.client import MultiServerMCPClient
from langgraph.prebuilt import ToolNode

@tool
async def list_plans() -> str:
    """
    List garden plans
    """
    # instead of the LLM, we generate a tool call
    tool_call = AIMessage(
        content="",  # empty, since this message is purely a tool call
        tool_calls=[
            {
                "name": "firestore_list_documents",
                "args": {
                    "collection": "plans",
                },
                "id": "list_plans_call",  # lets ToolNode route the result
            }
        ],
    )
    # get the tool from the MCP server
    async with MultiServerMCPClient(
        {
            "firebase-mcp": {
                "command": "npm",
                "args": ["exec", "-y", "@gannonh/firebase-mcp"],
                "env": {
                    "SERVICE_ACCOUNT_KEY_PATH": "/absolute/path/to/serviceAccountKey.json",
                    "FIREBASE_STORAGE_BUCKET": "your-project-id.firebasestorage.app",
                },
            },
        }
    ) as mcp_client:
        tools = [
            tool
            for tool in mcp_client.get_tools()
            if tool.name in ("firestore_list_documents",)
        ]  # provide only the needed tool to the LLM
        tool_node = ToolNode(tools)
        # call the tool
        response = await tool_node.ainvoke({"messages": [tool_call]})
        return response["messages"][0].content
Given this custom tool, the LLM can access only the "plans" collection. The generic MCP tools are never shown to the LLM; they are invoked only by the agent framework.
Secure by Default: The Path Forward
The interceptor pattern gives us the best of both worlds: the convenience and power of community-built tools with the security boundaries our application demands. By inserting a thin layer between the LLM and third-party tools, we maintain control over access permissions without duplicating development effort or maintaining our own forks.
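In its general form, the pattern is a thin wrapper that pins or validates the sensitive arguments before delegating to the community tool. A minimal sketch, with an illustrative stand-in for the inner MCP tool:

```python
# Generic interceptor sketch: validate the sensitive argument,
# then delegate everything else to the community-built tool.
ALLOWED_COLLECTIONS = {"plans"}

def make_interceptor(inner_tool, allowed_collections):
    def intercepted(collection: str, **kwargs):
        if collection not in allowed_collections:
            raise PermissionError(f"collection {collection!r} is not allowed")
        return inner_tool(collection=collection, **kwargs)
    return intercepted

# Stand-in for a generic community tool with unfettered access.
def firestore_list_documents(collection: str) -> str:
    return f"documents from {collection}"

safe_list = make_interceptor(firestore_list_documents, ALLOWED_COLLECTIONS)
```

Only the interceptor is exposed to the LLM; the enforcement policy is ours, while the inner tool remains whatever the community ships.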
This approach has several key advantages:
Security without compromise: We enforce strict boundaries while leveraging the full capabilities of community tools.
Reduced maintenance burden: We benefit from upstream improvements without managing custom forks.
Development velocity: Our team can focus on building application features rather than recreating infrastructure.
As AI tools become more integrated with sensitive systems, these security patterns become essential. The interceptor approach demonstrates that with minimal additional code, we can transform powerful but potentially dangerous tools into secure building blocks for production applications.