Introduction to AI Agents: Functioning, Terminology, and Developments
AI Agents: a term that comes up more and more whenever automation is discussed. But what are these Agents, what do they entail, and how can they be used effectively? That is the first part of this blog post. We will also take a look inside an Agent and highlight some terms that come up in the context of Agents (that is a bit of a bad pun; if you don't get it yet, it will become clear later!)
After this explanation of the technologies behind Agents, we reflect on how Agents evolved from 'smart chatbot' to 'usable Agent' in 2025. To conclude, we look at what Agents may mean in the future and what we will focus on next.
Quick facts
- Agents make decisions, workflows follow scripts.
- LLM + Tools + Context = Autonomous Agent
- MCP is the USB-C for AI connectivity
- SLMs now run locally on your laptop.
So what exactly are Agents?
The first point we will look at is the definition of the word 'Agent'. For many, an Agent is a system that takes action on its own. Of course, there is always some kind of starting point or trigger: receiving an email, a sensor value that gets too high, or simply a start button. You can then ask yourself the following question.
What is the difference between Agents and workflows?

I like to use an analogy for this. A workflow can be seen as a kind of 'train track'. These are fixed paths that follow set routes. This does not mean that there cannot be an AI or language model involved!
Take, for example, automatically summarizing an email and placing the summary in a document. This is a workflow where one of the stops is an LLM that does the summarizing.
An AI Agent can be seen as a taxi with a GPS that adapts to the situation. There is no single fixed route; here the language model decides which route will be taken.
Here you can think of a solution that, instead of summarizing an email, analyzes whether it is a positive or negative email and then decides what action to take (we only provide the options or destinations, not the paths or tracks), as sketched below.
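To make the contrast concrete, here is a minimal sketch. The helper functions (summarize, classify) are hypothetical stand-ins for LLM calls, not part of any real framework:

```python
# Hypothetical stand-ins for LLM calls, for illustration only.
def summarize(email: str) -> str:
    return email[:50] + "..."

def classify(email: str) -> str:
    return "negative" if "complaint" in email.lower() else "positive"

# Workflow: a fixed track; every email passes the same stops in the same order.
def email_workflow(email: str) -> str:
    summary = summarize(email)
    return f"Saved to document: {summary}"

# Agent: the model's judgement decides which stop comes next.
def email_agent(email: str) -> str:
    if classify(email) == "negative":
        return "Escalated to support"  # the model chose this route, not a script
    return "Archived"

print(email_workflow("Complaint about a late delivery."))
print(email_agent("Complaint about a late delivery."))
```

The workflow always ends in the same place; the Agent only gets a set of possible destinations and picks one itself.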
Hopefully, this makes the difference between an Agent and a workflow clearer. An Agent will observe (collect and process information), plan (determine through reasoning what happens next), act (execute with the available tools), and learn (adjust based on results). This brings me directly to
How an Agent works behind the scenes

Here you see an example of how an Agent can be pictured: a kind of manager who knows which people are on his team. Upon receiving a task (in this case, writing an email and performing calculations), he starts by making a plan and then executes it.
Now, what if we take a technical look at such a solution: what is going on underneath?
The engine of an Agent is an LLM, or large language model. The power of an LLM lies in predicting words: it tries to answer your question as well as possible based on the text you provide. You may wonder what this has to do with an Agent; well, you give the language model your question along with a list of tools that it can use.
These tools can be seen as functionalities. In the example: the calculator, the (other) language model, and the email client.
The language model will then estimate as accurately as possible which tools are needed, and in which order, to answer your question, and this plan is executed (see the sketch below).
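Below is a minimal sketch of that select-and-execute loop. The ask_llm function is a hypothetical stand-in for a real language model call (here it simply follows a scripted plan), and the tools match the example above:

```python
# Toy tools matching the example above: a calculator and an email client.
def calculator(expression: str) -> str:
    return str(eval(expression))  # fine for a demo, never do this with real input

def email_client(body: str) -> str:
    return f"email sent: {body}"

TOOLS = {"calculator": calculator, "email_client": email_client}

def ask_llm(context: list[str]) -> tuple[str, str]:
    """Hypothetical stand-in for an LLM call; here it just follows a fixed plan."""
    if not any(line.startswith("calculator(") for line in context):
        return "calculator", "17 * 23"
    if not any(line.startswith("email_client(") for line in context):
        return "email_client", "The result of 17 * 23 is 391."
    return "done", "Email with the calculation has been sent."

def run_agent(task: str, max_steps: int = 5) -> str:
    context = [f"Task: {task}", f"Tools: {list(TOOLS)}"]
    for _ in range(max_steps):
        tool_name, tool_input = ask_llm(context)       # the model picks the next step
        if tool_name == "done":
            return tool_input                          # final answer
        result = TOOLS[tool_name](tool_input)          # execute the chosen tool
        context.append(f"{tool_name}({tool_input}) -> {result}")  # feed the result back
    return "stopped: step limit reached"

print(run_agent("Calculate 17 * 23 and email the result."))
```

The important part is the loop: the model sees the task, the available tools, and the results so far, and decides the next step each time.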
For simplicity, I have left some of the more technical terms out of this explanation of an Agent, but there are a few I would still like to highlight.
Terms and directions that Agents have taken over the past year
Context engineering
A first term is context engineering. Perhaps you are familiar with prompt engineering, where you try to build an optimal prompt. Over the past year, the focus has shifted more towards context. The context of a model can be seen as the limit of its memory. What is becoming increasingly important for Agents is to keep this context small: the fewer words used, the lower the cost and the more 'to the point' the answer remains. (Hence the play on words with context in the intro 😅) Compare it to people: it is easier to remember details from 1 book than from 20 at the same time without mixing them up.
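One simple context-engineering tactic is capping the history an Agent drags along. The sketch below is illustrative only: the threshold is arbitrary and the summary step is a placeholder for a real summarization call:

```python
# Cap the conversation history an agent carries; older messages are collapsed
# into one short note. Threshold and summary text are illustrative placeholders.
def trim_context(messages: list[str], max_messages: int = 6) -> list[str]:
    if len(messages) <= max_messages:
        return messages
    older, recent = messages[:-max_messages], messages[-max_messages:]
    summary = f"(summary of {len(older)} earlier messages)"
    return [summary] + recent

history = [f"message {i}" for i in range(1, 11)]
print(trim_context(history))  # ['(summary of 4 earlier messages)', 'message 5', ..., 'message 10']
```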

RAG
A next term is RAG, or retrieval-augmented generation. I don't want to go into too much technical detail here; you can see it as a way of using a language model with the extra restriction that it can only draw its answer from a given set of sources. Tools such as Google NotebookLM are a good example of this. You can think of it as a selection being made from documents and data, which is then provided to the language model.
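A very stripped-down sketch of that retrieve-then-generate idea is shown below. The keyword-overlap scoring is a toy stand-in for the embeddings and vector stores real RAG systems typically use, and the resulting prompt would normally be sent to a language model:

```python
# Toy RAG: pick the snippets most relevant to the question, then build a prompt
# that restricts the model to exactly those snippets.
DOCUMENTS = [
    "Our office is open from 9:00 to 17:00 on weekdays.",
    "Support tickets are answered within two business days.",
    "The cafeteria serves lunch between 12:00 and 14:00.",
]

def retrieve(question: str, k: int = 2) -> list[str]:
    words = set(question.lower().split())
    ranked = sorted(DOCUMENTS, key=lambda d: -len(words & set(d.lower().split())))
    return ranked[:k]

def build_prompt(question: str) -> str:
    context = "\n".join(retrieve(question))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

print(build_prompt("When is the office open?"))  # this prompt would go to the LLM
```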

MCP
MCP, or Model Context Protocol, is also a term that has come up a lot in the past year. MCP is essentially the API or standard 'connector' between tools and language models. It was originally developed by Anthropic (the creators of Claude) and has since been adopted by all the other providers (Google, Grok, OpenAI, etc.). You can see it as the USB-C for connecting functionalities to language models.
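As an impression of what that 'connector' looks like in practice, here is a minimal sketch of exposing one tool over MCP, assuming the official Python SDK (the mcp package) and its FastMCP helper; the server name and tool are made up for illustration:

```python
# Minimal MCP server sketch (assumes the official Python SDK: `pip install mcp`).
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-tools")

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two numbers."""
    return a + b

if __name__ == "__main__":
    mcp.run()  # an MCP-capable client can now discover and call the `add` tool
```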

SLM
A term that came up somewhat less often is SLMs, or small language models. These have not received as much attention as their larger counterparts but have quietly been improving a lot. It has even reached the point where you can easily run a language model on a laptop without a GPU (graphics card) and still get understandable output.
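Running such a model locally can be as simple as the sketch below; it assumes an Ollama server on its default port with a small model already pulled, so the model name and endpoint are assumptions about your own setup:

```python
# Query a small language model running locally (assumes Ollama is installed and
# a small model such as "phi3" has been pulled; adjust the name to your setup).
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "phi3",
        "prompt": "Explain in one sentence what an AI agent is.",
        "stream": False,
    },
    timeout=120,
)
print(resp.json()["response"])  # the generated text
```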

These techniques indicate some changes in the field of Agents and will also serve as the basis for further improvements next year.
What is coming?
Now the question is what comes next, and what to focus on. As a research group working on this topic, we do not have a crystal ball. If you search the internet you will find predictions of every kind; it has become a real hype. What I can share is what we are focusing on next.
The first is deploying local LLMs (as well as SLMs); we have also purchased an AI server that will be very useful for our research projects. This is interesting both to avoid dependence on cloud providers and to deploy open-source models quickly and easily.
The next focus is on managing context and further exploring how we can build better Agent structures and connect them with each other. This will be done using available open-source software as well as our own structures, depending on the project. Further examining these connections, structures, and context will help Agents move from 'smart chatbot' to 'usable Agent'.
Finally, we will also continue to focus on closely monitoring trends and further documenting and incorporating them into our projects.
Hopefully, this article has taught you more about what AI Agents are and provided more context about what is technically involved. If anything is unclear, or if you have questions about what AI Agents can mean for you, feel free to get in touch!
Contributors
Authors
Jens Krijgsman, Automation & AI researcher, Teamlead
