How to query a name in a database using an MCP Server and an Agent Component
Learn how to build a small end-to-end integration where an Agent receives a name, calls an MCP tool, and retrieves the matching database record.
In this use case, you’ll configure an Agent to trigger an MCP tool and return a summary based on database records.
You will set up:
A search-people tool exposed by an MCP Server pipeline.
An Agent pipeline that triggers this tool automatically.
Step-by-step
Expose the database as an MCP tool
The first step is to create an MCP Server pipeline that makes your database query available as a tool. This is what the Agent will call later.
Create the MCP Server pipeline
Create a pipeline named mcp-search-name-db (the name must be unique in your realm; if it already exists, choose a different one). Then open the trigger configuration and select MCP Server as the trigger type.
Inside the trigger settings:
Click Add tool to create a new tool.
Fill in the following fields:
Name: search-people
Description: “Search for people in a database”
You can leave the schema fields blank for now.
Enable the security options:
External API: Makes the pipeline accessible through a public URL.
API Key: Ensures the endpoint requires authentication.
After saving, the pipeline automatically creates two branches connected to Block Execution connectors:
search-people: Runs when the Agent calls the search-people tool.
Tool Not Found: Runs when the Agent calls a tool name that isn’t configured.
We’ll configure both.
Implement the database query logic
The search-people branch is where the actual work happens. Here, we connect to the database and execute the query using values passed by the Agent.
Configure the search-people path
Hover over the Block Execution connector for this branch and click OnProcess to define what happens when the tool runs.
You will see a default JSON Generator. Remove it and add a DB V2 connector configured as follows:
General tab
Account: Select the database account you previously registered on the Platform.
Operation tab
Type: Query
Database URL: Example: jdbc:mysql://34.224.165.98/db-training
SQL Statement: Example: select * from clientes where name = {{ message.arguments.name }}
This dynamic value comes from the tool call. When the Agent invokes search-people, it passes an argument object like:
{
"arguments": {
"name": "João Souza"
}
}
The {{ message.arguments.name }} expression extracts that value and uses it directly in the SQL.
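To make the path resolution concrete, here is a minimal Python sketch of how a dotted Double Braces path such as `message.arguments.name` is walked against the incoming message. This is illustration only: the Platform resolves Double Braces internally, and the DB V2 connector binds the resolved value into the SQL statement for you.

```python
def resolve(expression: str, message: dict):
    """Walk a dotted path like 'message.arguments.name' through nested dicts."""
    value = message
    for key in expression.split("."):
        value = value[key]
    return value

# The payload shape the MCP tool call delivers to the pipeline
payload = {"message": {"arguments": {"name": "João Souza"}}}
resolve("message.arguments.name", payload)
# → 'João Souza'
```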
Provide a clear message when the tool name is incorrect
Open the Tool Not Found branch, hover the Block Execution connector, and click OnProcess. Inside this flow, adjust the Throw Error connector.
You can customize the error message, but if you leave it unchanged, the MCP Server automatically returns:
HTTP 404
Message: Tool not found
This is useful for debugging if an Agent configuration references the wrong tool name.
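Conceptually, the two branches behave like a dispatch table: a known tool name runs its handler, and anything else falls through to the Tool Not Found path. The sketch below models this in Python; it is not the Platform's internal implementation, just an illustration of the routing logic.

```python
# Conceptual sketch of the MCP Server's branch dispatch: a configured tool
# name runs its handler; an unknown name falls through to Tool Not Found.
def dispatch(tool: str, arguments: dict, handlers: dict):
    handler = handlers.get(tool)
    if handler is None:
        # Mirrors the default Throw Error response: HTTP 404, "Tool not found"
        return {"status": 404, "message": "Tool not found"}
    return handler(arguments)

handlers = {"search-people": lambda args: {"status": 200, "name": args["name"]}}
dispatch("search-person", {"name": "João Souza"}, handlers)  # misspelled tool name
# → {'status': 404, 'message': 'Tool not found'}
```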
Test the MCP pipeline
Before connecting it to the Agent, run a quick test from the Execution Panel.
Use:
{
"tool": "search-people",
"arguments": {
"name": "João Souza"
}
}
A successful result will contain the queried record.
If you see data, the database connection, tool wiring, and SQL statement are working correctly. Make sure to save the pipeline.
Secure and deploy the MCP endpoint
Create an API Key
Open the Platform Settings and go to the Consumers (API Keys) page. You can either create a new consumer or add the mcp-search-name-db pipeline to an existing one.
Copy the generated API Key. The Agent will need it later.
Deploy the pipeline
Open the Run page, search for the pipeline, then create and confirm the deployment.
After deployment, open the pipeline details and copy the public MCP endpoint URL (it ends with /mcp). You’ll paste this URL into the Agent tool configuration.
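If you want to exercise the deployed endpoint outside the Platform, the sketch below builds the HTTP request with Python's standard library. The URL and API Key are placeholders from this tutorial, and the payload shape assumes the endpoint accepts the same JSON used in the Execution Panel test; check the Platform documentation for the exact wire format before relying on it.

```python
import json
import urllib.request

# Placeholder values -- replace with the endpoint URL and API Key
# you copied from your own deployed pipeline.
MCP_URL = "https://test.godigibee.io/pipeline/enablement/v1/mcp-search-name-db/mcp"
API_KEY = "7f3c1e9b4d2a47c8a1f0e6b29c5d803f"

payload = {"tool": "search-people", "arguments": {"name": "João Souza"}}
request = urllib.request.Request(
    MCP_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json", "apikey": API_KEY},
    method="POST",
)
# response = urllib.request.urlopen(request)  # uncomment to call the live endpoint
```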
Register your LLM provider account
Before configuring your Agent, you must configure the account that will be used to access the LLM provider. To do so:
Create an API key on your LLM provider (OpenAI is used in this example).
In the Platform, register it as an account of type Secret Key.
Build the Agent pipeline
Now we create the part of the flow that communicates with the user. The Agent will receive a name, call the MCP tool, retrieve the data, and summarize it.
Set up the pipeline
Create a pipeline named api-search-people-mcp (the name must be unique in your realm; if it already exists, choose a different one). Then configure the trigger as REST and apply the following settings:
Keep only the GET method enabled.
Enable these security options:
External API: Exposes the pipeline through a public URL.
API Key: Requires authentication to access the endpoint.
Right after the trigger, add a JSON Generator and insert this JSON:
{
"arguments": {{ message.queryAndPath }}
}
This ensures that all query parameters and path parameters sent in the GET request are mapped correctly into a JSON object.
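The reshaping the JSON Generator performs can be sketched in a few lines of Python: the REST trigger collects query and path parameters into `queryAndPath`, and the generator nests that object under `arguments` so the Agent receives the same shape the MCP tool expects. The function name below is illustrative, not part of the Platform.

```python
# Conceptual sketch: how the JSON Generator reshapes the REST trigger payload.
# "queryAndPath" holds the query and path parameters of the incoming GET request.
def build_agent_input(query_and_path: dict) -> dict:
    return {"arguments": query_and_path}

incoming = {"name": "João Souza"}
build_agent_input(incoming)
# → {'arguments': {'name': 'João Souza'}}
```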
Then, add an Agent Component.
Configure the Agent Component
Inside the Agent Component configuration:
Model: OpenAI – GPT-4.1 Mini (good cost–quality balance for this example)
System Message:
You are a helpful assistant that can look up information about people in a database.
You have access to a tool named "search-people" that retrieves records based on a person's name.
When a user asks about someone, call the "search-people" tool with the correct name as the input.
If the tool returns data, summarize the main details clearly.
User Message:
Find information about {{ message.arguments.name }} in the database.
This prompt structure means the user sends only the name, and the Agent handles the rest.
Add the tool to the Agent
In the Agent Component, create a new tool with:
Name: search-people (must match exactly the tool name created in the MCP Server)
Server URL: Paste the deployed MCP endpoint URL copied in step 5. Example:
https://test.godigibee.io/pipeline/enablement/v1/mcp-search-name-db/mcp
Headers: Add a key-value pair and paste the API Key copied in step 5. Example:
apikey: 7f3c1e9b4d2a47c8a1f0e6b29c5d803f
With this, the Agent can now reach the MCP endpoint securely. Save the pipeline.
Run the use case end-to-end
Inside the Execution Panel
To test the flow inside Digibee, open the Execution Panel of the api-search-people-mcp pipeline and send:
{
"queryAndPath": {
"name": "João Souza"
}
}
Here’s what happens behind the scenes:
The Agent reads the name.
It decides to call the search-people tool.
The MCP pipeline receives the tool call and executes the SQL query.
The result is returned to the Agent.
The Agent writes a simplified, natural-language summary.
Example:
{
"body": {
"text": "João Souza is from Rio de Janeiro, RJ, Brazil. He lives on Rua Visconde de Pirajá, with the postal code 22410-003. His email address is [email protected]. His record was created on the database at a timestamp of 1718717592000 and has a due date of 1724112000000."
},
"tokenUsage": {
"inputTokenCount": 353,
"outputTokenCount": 98,
"totalTokenCount": 451
}
}
In an API testing tool
To test the use case outside Digibee, deploy the api-search-people-mcp pipeline (just like you did with mcp-search-name-db) and copy the endpoint URL.
Then follow these steps:
Open your API testing tool (for example, Postman).
Select the GET method.
Paste the endpoint URL (for example: https://test.godigibee.io/pipeline/enablement/v1/api-search-people-mcp).
In Params, add the following key–value pair:
name: João Souza (replace with a name that exists in your database)
In Headers, add the following key–value pair:
apikey: 7f3c1e9b4d2a47c8a1f0e6b29c5d803f (replace with the API Key you created for the api-search-people-mcp pipeline)
Click Send.
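The same request can be scripted instead of issued from an API testing tool. The sketch below builds the GET request with Python's standard library; the URL and API Key are the placeholder values from this tutorial, so substitute your own before sending.

```python
import urllib.parse
import urllib.request

# Placeholder values -- replace with your deployed endpoint URL and the
# API Key you created for the api-search-people-mcp pipeline.
BASE_URL = "https://test.godigibee.io/pipeline/enablement/v1/api-search-people-mcp"
API_KEY = "7f3c1e9b4d2a47c8a1f0e6b29c5d803f"

# URL-encode the name so accents and spaces travel safely as a query parameter
params = urllib.parse.urlencode({"name": "João Souza"})
request = urllib.request.Request(
    f"{BASE_URL}?{params}",
    headers={"apikey": API_KEY},
    method="GET",
)
# response = urllib.request.urlopen(request)  # uncomment to call the live endpoint
```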
Example response:
{
"body": {
"text": "I found the following information about João Souza:\n\n- He lives in Rio de Janeiro, RJ, at Rua Visconde de Pirajá, CEP: 22410-003.\n- His email address is [email protected].\n- His record was created on the database at a timestamp of 1718717592000.\n- He has a due date recorded as 1724112000000.\n- His code in the database is 7676."
},
"tokenUsage": {
"inputTokenCount": 353,
"outputTokenCount": 118,
"totalTokenCount": 471
}
}
Additional Agent configurations to improve accuracy
After your Agent is working, you can improve consistency by fine-tuning parameters such as temperature, top-p, top-k, and optional JSON Schema validation. Lower temperatures help reduce hallucinations, while schemas ensure that the output always matches the expected structure. These configurations are optional but recommended for production scenarios.
For full explanations and parameter details, see the complete documentation of the Agent Component.