
From the city of Pisa, Italy, where the team responsible for developing the project's technological solutions is gathered until Friday, Paulo Lameiro explained the results of the first year of the "Amplify" project, which uses real-time artificial intelligence and augmented reality.
"We are closing the first phase, in which the project's principles take practical shape in the first prototypes, or examples of prototypes, that we want to test next year, in 2026, but which we still need to assess."
The three-year "Amplify" project aims to turn babies into composers.
“This means that the researchers,” who are part of an international consortium of 14 institutions, “will have to develop technology to capture babies’ signals and translate those signals into graphic notation, which is then transmitted to glasses used by musicians in concerts for early childhood.”
The research is currently in the testing phase, “not yet finished.”
In Italy, the process has already been tested “but in separate elements.” In 2026, the first integrated experience is planned for Portugal.
"We already have the virtual reality glasses, the application we will use already exists in version 0.1, and we can already read scores on the glasses. The musicians have already been doing this exercise," revealed the creator, in Portugal, of the Baby Concerts project.
Throughout this year, several parents have been "involved in co-design sessions," because "this project is strongly shaped by a 'human-centered approach'" and exists "not only to produce patents or papers."
Compared with the starting point, "there is a new dimension in the project," Paulo Lameiro highlighted.
Besides babies producing the concert music, “older children, aged 2 and 3, will play a crucial role because all the ‘light design,’ the project’s lighting, will also be done based on some musical instruments that are also sensors, which will be spread across the stage.”
The older children, by manipulating and playing them, “are also activating the lights that will be incorporated not only in the musicians’ costumes but also in some other elements.”
The team is also working on a component that "was not fully planned initially": capturing "the emotions of the mothers holding the babies" so that "those emotions are also considered in the score" to be created.
"In other words," clarified Paulo Lameiro, "in fact, the score will be written by three hands (or six): the babies on one side; the parents and adults accompanying them; and, thirdly, the artists, because the artists and musicians have an important role here, since the notation is not conventional and they are learning to play a very demanding language."
“Not just any musician will read that score,” he emphasized.
The entire development process takes cultural diversity into account.
“Mothers in Brazil, Stockholm, or Leiria have different reactions, and it’s necessary to fine-tune the software for the specific culture where the concert is happening.”
That is because, depending on the geography, "there are concerts where the adults are very focused, in total silence, in complete concentration, and there are cultures where being involved and listening with concentration does not mean having that posture; it's about other body dynamics, other movements."
“It’s a very interesting process we are discussing here because it involves not only technology but all human behavior in different cultures,” concluded the Portuguese researcher.