We work on the entire AI lifecycle: obtaining data from a digital platform, pre-processing heterogeneous data (images, time series, natural language, etc.), and generating models from one or several data types, either with algorithms developed by the team or by integrating and customising existing algorithms to provide intelligence to platforms, machines and services.
We specialise in making artificial intelligence models explainable, including through natural language. Our expertise covers large language models (LLMs) as well as the implementation of multimodal and multitask AI, providing clients with the technology they need to address their problems.
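As an illustration of what "explainable through natural language" can look like, the sketch below ranks a model's features by permutation importance and renders the top ones as a plain-English sentence. The dataset, model and wording are assumptions chosen for brevity, not a description of our production tooling.

```python
# Minimal sketch: rank features by permutation importance and state the result
# in natural language. Dataset and model are illustrative assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0
)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Permutation importance: how much does shuffling each feature hurt accuracy?
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranked = sorted(
    zip(data.feature_names, result.importances_mean), key=lambda t: -t[1]
)

top = ", ".join(f"'{name}'" for name, _ in ranked[:3])
print(
    f"The model's predictions rely most heavily on the features {top}; "
    "perturbing them degrades test accuracy the most."
)
```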
We work on collaborative AI, building expertise in horizontal and vertical federated learning so that companies can collaborate without compromising the confidentiality and privacy of their data, and ensuring that AI is responsible (dependable, robust, ethical and secure). We also put models into production, specialising in optimising advanced AI models for deployment on cloud platforms and embedded devices and in automating their retraining when needed (for example, under concept drift).
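To make the horizontal federated setting concrete, here is a minimal NumPy sketch of federated averaging (FedAvg): each simulated client trains a logistic-regression model on its own rows, and only the model weights are shared with the server, which aggregates them. The data split, model and hyper-parameters are assumptions for illustration, not our deployed stack.

```python
# Minimal horizontal federated averaging (FedAvg) sketch with NumPy only.
# Each client fits logistic regression on its own data for a few local steps;
# the server averages the resulting weights, weighted by client size.
import numpy as np

rng = np.random.default_rng(0)

def local_update(w, X, y, lr=0.1, steps=20):
    """Run a few steps of logistic-regression gradient descent on one client."""
    w = w.copy()
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))      # predicted probabilities
        grad = X.T @ (p - y) / len(y)         # logistic-loss gradient
        w -= lr * grad
    return w

# Simulated horizontal partition: three clients, same features, different rows.
clients = []
true_w = np.array([2.0, -1.0, 0.5])
for n in (200, 120, 80):
    X = rng.normal(size=(n, 3))
    y = (X @ true_w + 0.1 * rng.normal(size=n) > 0).astype(float)
    clients.append((X, y))

w_global = np.zeros(3)
for _ in range(10):
    # Each client trains locally; raw data never leaves the client.
    local_ws = [local_update(w_global, X, y) for X, y in clients]
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    # Server aggregates: weighted average of the client models.
    w_global = np.average(local_ws, axis=0, weights=sizes)

print("aggregated weights:", np.round(w_global, 2))
```

In the vertical setting, by contrast, the parties hold different feature columns for the same entities, so collaboration relies on secure entity alignment and protected intermediate computations rather than simple weight averaging.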
We develop AI algorithms based on quantum technology to process and analyse data in disruptive ways, targeting use cases that are out of reach for classical computing.
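By way of illustration, the sketch below simulates one building block from this line of work: a quantum fidelity kernel, in which data points are angle-encoded into qubit states and their overlap serves as the similarity measure for a classical SVM. This non-entangling toy encoding is easy to simulate classically; the data, thresholds and parameters are assumptions, and the regimes of genuine quantum advantage require entangling circuits well beyond this sketch.

```python
# Classically simulated toy quantum kernel: angle-encode each feature into a
# qubit and use the state fidelity |<psi(x)|psi(x')>|^2 as an SVM kernel.
# Data and parameters are illustrative assumptions.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def statevector(x):
    """Tensor product of single-qubit states RY(x_i)|0> = [cos(x_i/2), sin(x_i/2)]."""
    state = np.array([1.0])
    for angle in x:
        qubit = np.array([np.cos(angle / 2), np.sin(angle / 2)])
        state = np.kron(state, qubit)
    return state

def quantum_kernel(A, B):
    """Fidelity kernel between all pairs of encoded samples."""
    SA = np.array([statevector(x) for x in A])
    SB = np.array([statevector(x) for x in B])
    return np.abs(SA @ SB.T) ** 2

# Toy binary classification problem with 3 features.
X = rng.uniform(0, np.pi, size=(120, 3))
y = (np.sin(X).sum(axis=1) > 1.5).astype(int)
X_train, X_test, y_train, y_test = X[:80], X[80:], y[:80], y[80:]

clf = SVC(kernel="precomputed")
clf.fit(quantum_kernel(X_train, X_train), y_train)
acc = clf.score(quantum_kernel(X_test, X_train), y_test)
print(f"test accuracy with quantum fidelity kernel: {acc:.2f}")
```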
Our pioneering, state-of-the-art facilities provide the tools needed to test and validate different technologies and thus address challenges such as sustainability, energy efficiency and cybersecurity.
Our purpose is to provide intelligence to systems across sectors such as health, finance, industry and transport, supporting society's progress with artificial intelligence.