AILab Howest

Howest Logo

/

MCP, a “new” AI integration standard

Following last week post about RAG, in . The Applied AI Lab of Art-IE, we are also following the trends to see what is new and what could be implemented in our current and future projects. In the past weeks Model Context Protocol (MCP) has taken the internet by storm. This although not being that ‘new’, relatively speaking, with the speed of LLM solutions.

Cover image

Quick facts

  • /

    Released by anthropic in November 2024

  • /

    Recently Implemented in OpenAI

  • /

    USB C port for AI Applications

MCP What ?

The underlying structure of MCP

Model Context protocol or MCP is an open standard for connection ‘AI assistants’ to the systems where the data lives. The protocol is developed by Anthropic with one simple goal in mind, to have a universal way connect LLM’s with their data.

At its core, MCP follows a client-server architecture that establishes standardized communication channels between AI applications and data sources. The protocol consists of primary components:

  1. MCP Hosts/Clients: Applications like Claude Desktop or AI-powered IDEs that maintain 1:1 connections with servers
  2. MCP Servers: Lightweight programs that expose specific capabilities through the protocol
  3. Data Sources: Local or remote systems that MCP servers can securely access

This architecture transforms the traditional challenge of integrating multiple AI models (M) with various data sources (N) from an M×N problem to a more manageable M+N solution. Instead of creating custom connections between each model and data source, developers can implement the MCP standard once and achieve universal connectivity.

Looking at the underlying technologies and the design pattern on how MCP is created, which looks like a factory or adapter pattern, you could start asking if this is even that ‘revolutionary’ or hype worthy

Wait I’ve seen that before

Indeed, when you look at the solution it does not seem that complex or groundbreaking. It is essentially a way to connect your data with an LLM and add possibly Agents or Tools to them as well.

To send over this data MCP makes use of the JSON-RPC 2.0 for message exchange. JSON-RPC is an API (Application Programming Interface) protocol and is created for High Performance. There are some caveats like high coupling to the service application, and changing the service implementation later has a high chance of breaking the clients.

When reading about JSON-RPC , REST is quickly mentioned as a possible competitor, although being more a ‘state transfer’ architecture. REST is well known for data transfer and API requests. You could see JSON-RPC as a sort of less Actions more throughput solution of it. In the case of working with LLM’s and lots of data the choice for JSON-RPC is clear.

Comparison with REST APIs, While REST APIs are widely used for system-to-system communication, MCP introduces several key differences tailored to AI applications:

This doesn’t change the fact that a standard using existing tested protocols and design patterns could be great to further improve the space and make solutions more model ‘reliant’ and could help prevent future vendor lock in.

Why it is (probably) a good thing

Having a standard and open protocol of communication is always a good thing. Look at for example how Apple is blocking of IMessage. Having a standard that now all the large providers of LLM’s are implementing like OpenAI or Anthropic is great. The adoption of many tools and platforms either making it possible to have a client (VS-code, Cursor ,…)or serve an MCP server themselves like Github is also one step forward.

We are at the start of this new standard and a lot of research on the practical integration and what the implications are still needs to be done

Seems interesting but is it safe?

So we now MCP is a protocol to connect LLM to data, we know it can be useful but is it actually safe? Just passing all your data sources to an LLM, what are some problems that could occur

One of these issues could be Tool Poisoning Attacks, this is when a malicious server could exfiltrate sensitive data form the user and hijack the agent’s behaviour. Meantime also overriding instructions provided other, trusted servers.

Source: https://invariantlabs.ai/blog/mcp-security-notification-tool-poisoning-attacks

If interested I highly recommend reading the Invariant Labs report on it, but for this article it is merely used as an example of the dangers of using something like MCP without thinking about possible problems that could occur.

Working with new protocols and cutting edge technologies it is always important to keep in mind possible problems and follow them as closely as possible!

To MCP or not to MCP that’s the question

So now do you start implementing MCP in to your currently in development tools or platforms? As for us at the AI Lab Research group we are looking into the implementation of MCP in our Art-ie solutions as a way to test the technology and possibly make use of the future power that it may hold.

It is clear that not all the libraries are ready for it, every open source Vector database or LLM tool is looking to implement the protocol in rapid fashion, leaving some of the implementations in a beta state.

Personally I would say keep MCP into mind , if possible start with implementing a crude version but wait until the space matures a bit before diving in head first.

Feel free to further contact our Research Lab for question regarding MCP or other LLM related topics.

Sources

Authors

  • /

    Jens Krijgsman, Automation & AI researcher, Teamlead

Want to know more about our team?

Visit the team page