Only humans can reflect on such issues, which brings us to the question: What can AI not do, and what can only we humans do?
Hoos: The answer changes nearly every month because what AI can do is evolving very quickly. At present, there are clear weaknesses in the combination of learning and logical reasoning, for example in language models and chatbots like ChatGPT. These are very good at expressing themselves in natural language, answering questions, and generating text. They are, however, very bad at making deep logical inferences and constructing logically consistent arguments. Consequently, bringing learning and reasoning together is a very important issue in modern AI. Another area where AI still has clear limitations is creativity. Of course, there are AI systems that can generate impressive images, even movie sequences or music, and this is so advanced that experts often cannot tell what is AI-generated and what is created by humans. But that does not mean AI is on a par with us. For example, AI has not yet convincingly written a novel. Nor has it generated a piece of really serious, original music with the same artistic quality as, say, a fugue by Bach. So AI definitely still has its limits.
As a researcher, do you approach AI with boundless enthusiasm? Or are you also skeptical, perhaps even apprehensive?
Hoos: For me, enthusiasm for AI and the urge to push cutting-edge research into areas where no one has gone before are intertwined with concerns about how to use it responsibly. We reconcile the two by advancing the state of the art while always bearing in mind our responsibility and trying to better understand AI systems. AI systems and algorithms have become so complex that even experts do not fully understand them. We need to take the time and invest the energy not only in developing the next system, but in understanding in depth what we have already built.
Do we have enough experts who can master this topic?
Hoos: That is certainly one of the biggest problems. There is a shortage of experts in the field of AI, and we cannot train them fast enough either. Here at RWTH and at many other universities, we are of course doing our best, but it also takes certain basic skills to develop expertise in the field. So the need far exceeds the available expertise, both in industry and in the public sector. In the public sector, the situation is even tougher because it cannot offer truly competitive salaries. Just imagine AI systems being used in public administration in the future. The city of Aachen will probably not be able to afford AI experts on a permanent basis, so how are these systems supposed to be developed, or even reasonably maintained? The lack of experts is a real problem.
Given this background, are you also afraid that AI could get out of control? Or has this perhaps already happened?
Hoos: Of course, I worry that AI use could go wrong. The fear that powerful AI systems could take control is not completely absurd. But that's not the problem we should be primarily concerned about today. Currently, the challenge is that existing systems, and those that will become available in the near future, are not understood well enough to be used responsibly, especially by people with limited expertise. People don't know the weaknesses and limitations of these systems well enough, and that's where we need to start. That's also where my research comes in: providing support through specially designed AI systems for those with limited expertise.
That means AI would have to be monitored more intensively, but we can't afford it.
Hoos: Monitoring always has a negative connotation. We have to give ourselves guidelines and rules, just like with other products. You wouldn't drive a truck over a bridge built by second-semester students, nor would you sit in an airplane that wasn't subject to rigorous quality standards and controls. But that's exactly what has been lacking in the field of AI so far and we need this, especially when using AI in sensitive areas, for example in medicine, in sections of public administration, where fairness plays an important role, and, of course, in production.
Is the concentration of power also a problem, i.e., a few having access to AI systems?
Hoos: Yes. Many areas of AI research are driven by a few commercial interests and thus a very small number of people. On the one hand, they are pursuing this development, but on the other hand, they are also primarily profiting from it. And that cannot be the ideal, at least from a European perspective. However, AI use is subject to cultural differences. One example: In the European healthcare system, everyone should be treated fairly, or at least according to fairly high minimum standards. In the U.S. system, as we all know, this is vastly different. AI could predict the actual costs that an aging person would incur. In a more profit-driven system, everyone would be responsible for their own health first and would have to bear those costs. This is not our idea of solidarity, which means that we need different AI systems, namely ones that have values such as solidarity and equal treatment built in to a much greater extent.
AI use thus leads to specific fears for people. People are also worried about losing their jobs. Is their fear justified?
Hoos: For certain professions, it is of course justified. But this is not the first time that jobs have been threatened by new technology. I'm thinking of the great Industrial Revolution, which also completely turned the world of work upside down. We're seeing something similar now, with certain job descriptions undergoing major changes. For example, the programming profession. It’s relatively easy to predict that what most programmers do today will be increasingly supported by AI systems and then taken over by them. However, this can also enhance these job profiles and make them more interesting.
If work changes so much, not everyone will be able to keep up with this development. In your opinion, is human work devalued by AI?
Hoos: In some areas, human labor risks being devalued by AI, while in others, it will be upgraded. Let me give you an example. I am a fan of the Aachen comic artist Alfred Neuwald. He is intensively involved with AI and its possibilities, and sees great new tools that make it easier for him to let his creativity run wild. But other artists feel threatened by AI, which is very understandable. If you want to make this development socially compatible, then it must not be too radical and not too fast. That is one of the reasons why I think a certain deceleration, if not in the development then at least in the use of AI systems, would be desirable, so we can increase people's acceptance and trust.
Does progress always result in losses? Do you ask yourself this question as a scientist?
Hoos: I ask myself that question, of course. Even in school, we read Dürrenmatt's The Physicists and learned something about the interaction between science and society. And that is incredibly important to me. Of course, here at the chair, we ask ourselves: What impact does this have on people? In basic research, however, that's not always easy to answer.
You also talk about human-centered AI in this context.
Hoos: Human-centered AI is about more than just trusting a technology. The idea is that AI is developed and used to complement human capabilities, compensate for human weaknesses, and allow people to do things they could not do without such systems. Contrast that with AI that simply tries to replace human capabilities. I honestly don't have much faith in that kind of AI.
You are passionate about networking and collaborations. What role does CLAIRE play? What is the idea behind it?
Hoos: Networking is very, very important in AI. Europe needs to be globally competitive, and to achieve this, AI expertise and resources must be pooled, and we need close collaborations, in Germany but also across Europe. And that is precisely the idea behind CLAIRE.
What can CLAIRE do specifically?
Hoos: With the Confederation of Laboratories for Artificial Intelligence Research in Europe, we’re trying to pool our resources in order to advance AI development in Europe. Of course, this is bolstered when we collaborate with others and therefore know what else is going on.
We are also in constant dialogue with the European Commission, and with European parliamentarians. In this way, we try to bring a vision for AI in Europe to the attention of politicians, but we also want to ensure that political developments are based on expert knowledge. Because with a topic as dynamic and complex as AI, politicians are overwhelmed if they have to make decisions on their own or if they have to seek out expertise themselves. It’s much better if the AI research community is organized and can support policymakers here.
Is this enough international networking?
Hoos: We need to step up our game here. The EU has very good mechanisms for promoting top individual researchers or research networks. What is missing, however, is funding for large research institutions. One successful European model in this area is CERN, known globally for its world-leading, cutting-edge research in particle physics. We need something like this for AI, to bring together a critical mass of experts who can then work together in an outstanding environment to focus on socially and economically important applications. An AI industry would then also accumulate around such a large research institution, comparable to Silicon Valley. We need something like this in Europe to make our AI research globally competitive.
A ‘CERN for AI’ would also be a platform for ongoing dialogue and exchange …
Hoos: A CERN for AI would essentially have three functions. It would serve as a meeting place, a platform for experts to interact and exchange ideas. Second, it would offer a research environment that the various existing research centers, even the large ones, including the Max Planck Institutes, simply cannot finance on their own. Third, it would be a global magnet for talent to create an alternative to the U.S.-based big tech companies. As a public-sector institution, the Center would be accountable to the public and largely seek to solve problems in the public interest.
What steps are needed to get this project off the ground? And how much would you need to invest?
Hoos: An AI center of this size would require a one-time investment in the single-digit billion range and probably another 10 billion euros for a ten-year operation period. In other words, we would be talking about a maximum of 20 to 25 billion euros. That sounds like a lot of money, but it certainly can be covered at the EU level: 25 billion euros is far less than half a percent of the annual budget of all member states. And what you get in return is extremely attractive: Unlike in, say, particle physics, in AI the path from the lab to the real world is very short. It would be an investment that would most likely break even within a few years. And that is before we even consider important issues such as technological sovereignty.
A CERN for AI – are you kicking in open doors with this idea?
Hoos: When we first presented this idea to the public five years ago, it was well-received by the scientific community. Of course, politicians and policymakers were skeptical at first. However, the idea is gaining momentum, and other organizations in Germany and elsewhere have taken up this cause. It would be a big effort, but it would also be a major breakthrough. In addition to CERN, the European Space Agency (ESA) is another European large-scale project that can serve as an example. The European Union and the European states have already shown many times that such large-scale projects are possible, and that they can be very successful. And I would like a similar project for AI to see the light of day.
What would have to happen in Germany and Europe for them to catch up or even become leaders in the field?
Hoos: First of all, we must realize that we cannot achieve this on our own; it has to happen at the European level. In other words, European networking is key. And of course, we in North Rhine-Westphalia and in Aachen are doing quite well in this respect. Here in the Meuse-Rhine Euroregion, we have a large AI center in Eindhoven and another one in Leuven in Flanders, Belgium. Of course it makes sense for RWTH and institutions like these to network with each other. In North Rhine-Westphalia, as in other German states, there is a competence center created by the German government, whose aim is to close the widening gap with the American and Chinese technology leaders. This is the Lamarr Institute, in which the University of Bonn, a Fraunhofer Institute, and TU Dortmund University are involved. It would be very important not only to have this one institute in North Rhine-Westphalia, but also to closely involve the second large competence center we have in the state, namely here at RWTH, with the Lamarr Institute. It would be helpful if the state government sent a clear signal by saying: This is important to us. As you can see in Bavaria and Baden-Württemberg, for example, additional state funding can take you to another level.
So more funding is needed to get RWTH out of this isolation?
Hoos: AI is a resource-intensive science. In this respect, it is like particle physics: particle physics needs large accelerators, and in AI, we need huge computing capacity. It is also not enough to have that capacity just anywhere; it has to be available close to the researchers. It requires a huge investment to keep up with the current major developments at OpenAI, Microsoft, Google, Meta, and Apple, for example. What we need are specially equipped data centers and very good employees who are sufficiently knowledgeable about AI and able to operate these centers. And we also need to offer an appealing working environment, because we are competing with industry for talent.
If we managed to integrate European activities in the field, what would it take to catch up with the US?
Hoos: We should be open to taking a more dynamic approach. I spent most of my scientific career in Canada, then five years in the Netherlands, and now I am here at RWTH. Even though working here has many advantages, we have significant potential for streamlining and increasing efficiency at the administrative level. For example, we would like to have a faster, simpler procurement process, as time is an important factor, especially in the field of AI, and delays have a strong adverse impact on important research projects and progress. You just need to be faster and more agile. I think university management also sees very clearly that we are not where we want to be, and this applies to other German universities as well. We simply need to catch up.
Is there a life for you beyond AI and your role as a professor? Or do you suffer from a lack of time?
Hoos: There is the myth of the researcher who is one hundred percent dedicated to science. That's not my thing. At the beginning of my career, I thought about becoming a professional musician, but then I realized I could seriously pursue music on the side. It would not work the other way round; you cannot pursue science on the side.
Classical music? Rock? Jazz? And which instruments do you play?
Hoos: Classical music, of course. Back then I passionately played the bassoon, and I have continued to do so over the years. For the last three or four years, I've been playing the instrument I've always dreamed of, the heckelphone, a rare and fascinating instrument that is insanely fun to play and has a great sound. It's a kind of baritone oboe, and quite accessible to bassoonists, if you ever manage to get hold of an instrument.