AILab Howest

Physical AI on your desk? Meet Reachy Mini

Dec 2, 2025

A few weeks ago, Howest was asked, as one of the first in the world, to collaborate on the beta test of Reachy Mini. Reachy Mini is a compact, affordable, and fully open-source robot that makes artificial intelligence more accessible than ever. It builds on the vision of its larger predecessor: a robot platform that is not closed and mysterious, but transparent, extensible, and designed for learning, experimenting, and innovating. Reachy's big brother has been prominently featured in our research office for several years and is part of our Computer Vision and Natural Language research.

What makes Reachy Mini so special is the unique combination of low-threshold availability and advanced AI integration. While robotics often remains reserved for specialized labs or expensive industrial applications, this robot brings the power of vision models, natural language processing, and autonomous interaction within reach of students, makers, and researchers. The result is a playground where creativity is central: from smart assistants and educational demos to proofs of concept for future human-robot interaction.

Quick facts

  • Reachy Mini was designed by Pollen Robotics in France.

  • Consumer model available for €300 (wired) to €500 (wireless).

  • Fully open source, with simulation included.

Reachy Mini

Reachy Mini is an open-source robot that fits perfectly on your desk and immediately charms you with its playful appearance. Unlike industrial robots, it is not designed for automation, but to enhance the interaction between humans and robots. You can easily program it to help you with small tasks or interactive experiences.

Due to the growing community around Reachy and Reachy Mini, new applications are emerging rapidly. For example, the robot can help teach a language to children, provide information about galaxies, or even serve as a radio or alarm clock.

The simple programmability of Reachy (Mini) makes it possible to quickly build complex applications. Thanks to the integration with specialized AI models, which we have discussed in more detail in previous blog posts, the robot today has remarkably many capabilities. That power comes not only from the hardware but especially from the intelligence of the underlying AI technology.

VLM, VLA and Edge AI

When ChatGPT was launched three years ago, few expected things to move at such a rapid pace. Today a whole range of language models is available, a large part of which can also combine vision with performing actions. Those models are referred to as Vision Language Models (VLMs). Their integration on robots has led to a new term: Vision-Language-Action (VLA) models. These models are specifically aimed at directing robots to perform tasks using both text and images.
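Conceptually, a VLA model can be seen as a function that maps a camera image and a text instruction to a low-level action. The sketch below mimics that interface with a toy stand-in policy; `toy_policy` and its action format are purely illustrative, not a real model or the Reachy Mini API.

```python
# A VLA model maps (image, instruction) -> action. This stub mimics
# that interface; a real VLA checkpoint would replace `toy_policy`.
def toy_policy(image, instruction):
    """Hypothetical stand-in: derive an action from the instruction text."""
    if "wave" in instruction:
        return {"joint": "antenna_left", "target": 1.0}
    return {"joint": "head_yaw", "target": 0.0}

image = [[0] * 4 for _ in range(4)]  # placeholder 4x4 "camera frame"
action = toy_policy(image, "wave at the visitor")
print(action["joint"])  # -> antenna_left
```

A real VLA would of course learn this mapping from data instead of matching keywords, but the calling convention stays the same.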

These models are available on powerful AI infrastructure for more complex reasoning, but often also have smaller-scale alternatives that can run directly on the robot. This form of Edge AI is gaining popularity: the hardware for robots keeps improving while remaining compact, and the optimizations for the models themselves are advancing as well.

Reachy Mini, too, can easily be connected to these models. Do you want the robot to perform a dance? That is possible! The specialized AI models are connected to the robot's motors using MCP tools and control them directly. This way, the robot can use its motors to move its head and even make the antennas on its head dance along.
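The wiring behind this can be sketched as an MCP-style tool: a plain named function the language model can invoke with arguments. Everything below is a minimal sketch under assumptions; `FakeMotors`, `set_head`, and `set_antennas` are hypothetical stand-ins, not the actual Reachy Mini SDK.

```python
import math

# Hypothetical stand-in for the robot's motor interface; the real
# SDK exposes head and antenna joints, but these names are
# illustrative only.
class FakeMotors:
    def __init__(self):
        self.log = []  # record every command for inspection

    def set_head(self, pitch, yaw):
        self.log.append(("head", round(pitch, 2), round(yaw, 2)))

    def set_antennas(self, left, right):
        self.log.append(("antennas", round(left, 2), round(right, 2)))

# MCP-style tool: a plain function the language model can invoke
# by name with structured arguments.
def dance(motors, steps=8, amplitude=0.3):
    """Sway the head and wiggle the antennas in a simple sine pattern."""
    for i in range(steps):
        phase = 2 * math.pi * i / steps
        motors.set_head(pitch=0.0, yaw=amplitude * math.sin(phase))
        motors.set_antennas(left=math.sin(phase), right=-math.sin(phase))
    return f"danced {steps} steps"

TOOLS = {"dance": dance}  # registry an MCP server would expose

motors = FakeMotors()
result = TOOLS["dance"](motors)
print(result)  # -> danced 8 steps
```

The point is the shape: the model only sees the tool name and its parameters; the tool body hides the motor-level details.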

Through the integrated camera, Reachy Mini can send images to a (locally running) vision model and respond to them. The design of Reachy Mini allows it to express simple emotions. Combined with AI-driven emotion recognition, Reachy Mini can dance and express emotions that mirror those of the person in front of the camera.
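The last step of that pipeline, mapping a recognized emotion label to an expressive pose, can be sketched in a few lines. The labels, pose values, and `express` helper below are assumptions for illustration, not the robot's actual expression set.

```python
# Hypothetical mapping from an emotion label (as an emotion-recognition
# model might return it) to an expressive antenna/head pose.
EXPRESSIONS = {
    "happy":    {"antennas": (0.8, 0.8),   "head_pitch": 0.2},
    "sad":      {"antennas": (-0.9, -0.9), "head_pitch": -0.3},
    "surprise": {"antennas": (1.0, -1.0),  "head_pitch": 0.4},
}

NEUTRAL = {"antennas": (0.0, 0.0), "head_pitch": 0.0}

def express(emotion):
    """Pick a pose for the detected emotion, falling back to neutral."""
    return EXPRESSIONS.get(emotion, NEUTRAL)

pose = express("happy")
print(pose["antennas"])  # -> (0.8, 0.8)
```

Because unknown labels fall back to a neutral pose, a misclassification from the vision model never produces an undefined motion.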

The experience of robots

Physical AI means that robots integrate AI models into their operation. This naturally has an impact on the programming experience. Simulation quickly comes into play when testing and executing new code: a robot is not always available, and on more expensive models a programming mistake can become a costly one.

The extra challenge in programming lies mainly in the complexity of physical hardware such as sensors and motors. Testing an AI model on a classification task is easier than executing a dance movement on a robot. Despite these difficulties, the physical tangibility of robots offers a significant advantage that even an impressive AI model like ChatGPT cannot match.
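A common way to keep simulation and hardware interchangeable is a shared interface: behaviour code targets an abstract robot, and you swap in a simulated or real backend. This is a minimal sketch of that pattern; `Robot`, `SimRobot`, and `move_head` are hypothetical names, not the actual SDK.

```python
from abc import ABC, abstractmethod

class Robot(ABC):
    """Common interface so the same behaviour code runs in simulation
    and on real hardware (names are illustrative, not the SDK's)."""
    @abstractmethod
    def move_head(self, yaw): ...

class SimRobot(Robot):
    def __init__(self):
        self.yaw = 0.0

    def move_head(self, yaw):
        # Enforce joint limits in simulation, so out-of-range commands
        # are caught before they ever reach real motors.
        self.yaw = max(-1.0, min(1.0, yaw))

def look_left_then_right(robot):
    """Behaviour code that only knows the abstract interface."""
    robot.move_head(-0.5)
    robot.move_head(0.5)
    return robot

robot = look_left_then_right(SimRobot())
print(robot.yaw)  # -> 0.5
```

A `RealRobot` implementing the same interface could then run the identical `look_left_then_right` without changes, which is exactly what makes the included simulation valuable.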

The future?

In January, Reachy Mini will be connected to powerful language models that we will launch ourselves, it will learn to speak better Dutch, and it will be used in final-year research projects of students in Creative Technologies and AI. With this, we are taking further steps to combine robots and AI.

We are connecting Reachy Mini to a central robotics platform, developed by our researchers and teachers, with which we promote Swarm Intelligence among robots. This technique allows multiple robots to collaborate, each with their own strengths and capabilities. Wheel-based robots, humanoid robots, walking robots, or small charming desk robots, you name it: they are all interconnected and work together on complex projects and applications.
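At its core, such a platform lets heterogeneous robots exchange messages over shared topics. The publish/subscribe sketch below illustrates that idea; the `Bus` class and topic names are hypothetical, not the actual platform being built.

```python
from collections import defaultdict

class Bus:
    """Tiny publish/subscribe bus standing in for a central
    robotics platform (hypothetical, for illustration only)."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, msg):
        for handler in self.subscribers[topic]:
            handler(msg)

bus = Bus()
seen = []

# A wheeled robot reports an obstacle; the desk robot announces it.
bus.subscribe("obstacle", lambda msg: seen.append(f"Reachy Mini says: {msg}"))
bus.publish("obstacle", "box at (2, 3)")
print(seen[0])  # -> Reachy Mini says: box at (2, 3)
```

Each robot only needs to know the topics, not the other robots, which is what lets very different platforms (wheeled, walking, desk-sized) cooperate on one task.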

Day of Science, flooded by robots!

For several years, Howest has participated in Science Day, a Flemish initiative to demonstrate science to young people and their families. Our research group showcased Reachy Mini, some programmable robot cars, and a robot dog. This Unitree Go2 robot can be controlled with a remote control, a smartphone app, or programmed via Python, among other options.

The interest among the general public in seeing cute, recognizable, and useful robots was striking! The Unitree Go2 attracted a lot of attention due to the interesting tricks of the robot dog. A robot capable of moving and navigating over difficult uneven terrain also offers many possibilities for new projects.

Even more robotics for our lab in the future!

Robots are often used in the research lab as ideal test environments to integrate AI models. This is also prominently featured in our vision around AI. So expect new blog posts, in which the power of Physical AI will increasingly be highlighted.

Authors

  • Nathan Segers, Lecturer XR and MLOps

Want to know more about our team?

Visit the team page