"We Need a C.E.R.N. for Artificial Intelligence".

 

Speaking to Professor Holger Hoos

Professor Holger Hoos is one of the directors of the AI Center at RWTH. In this interview, he reveals where the AI journey is headed, what the real dangers are in the use of AI, and where the limits of AI use lie for him. He also has a lot to say about the heckelphone.

 
AI Week at RWTH: an Interview With Professor Holger Hoos
 

What is AI?

Holger Hoos: That’s a difficult question that even experts do not agree on. And what AI is capable of changes nearly every week. Generally speaking, AI is about using computers to imitate, reproduce, and support abilities that require human intelligence.

Does that mean artificial intelligence is truly comparable to human intelligence?

Hoos: Artificial intelligence is motivated by human intelligence. After all, human intelligence is the only one we really know about, the only role model that exists for intelligence. And that’s why, of course, artificial intelligence is measured against human intelligence and the developments in AI research are inspired by human intelligence or, more precisely, tasks that require human intelligence.

Where do we already encounter AI in our day-to-day lives?

Hoos: In almost all areas, but it's not always immediately obvious. Countless computer chips are installed in modern cars, for example. The hardware on which all this is based must of course function correctly, so it’s extremely important that we can certify its correctness, and AI methods are available for this purpose. AI is used almost everywhere IT systems are used. And then, of course, there are obvious AI applications, like voice assistants on our smartphones. AI is also increasingly used in creative areas, especially in spectacular movies: computer graphics plays a big role there, and more and more AI methods are used to animate characters in a realistic way. My last example is science, where AI is being used to produce scientific knowledge. This applies to physics, chemistry, biology, medicine, electrical engineering, and the other engineering disciplines – just about everywhere. AI has the potential to improve science.

Where do you personally allow AI in your day-to-day life? How smart is your home, for example?

Hoos: I live in a semi-smart home. I find smart home technology and the use of AI, where it makes our lives a little easier, very interesting. I personally draw the line at applications like Amazon Echo or other voice assistants that constantly listen in to what’s being said – even if it’s only to follow instructions. That goes a bit too far for me.

What research areas does AI cover and what do they focus on?

Hoos: AI is a very broad science that is very often reduced to so-called machine learning. This is an area we’re also working on very intensively, but there are many other areas in AI, such as automated reasoning and automated deduction. The focus there is on drawing conclusions, and operating with reason and logic. This type of AI is incredibly important when you want to show that technical systems function correctly, or for constructing and writing mathematical proofs. We also have automated planning, which plays an important role in AI-assisted efficiency improvements in production. Or AI-optimized route planning, which is already built into in-car navigation systems to a certain extent but is used primarily in logistics. And then we have areas that deal with connecting different intelligent systems, or intelligent systems and people, so-called cooperative AI. These are all extremely important areas, plus there are application areas like image processing, image recognition, facial recognition, or of course AI-based robotics. My focus is on AI methodology, particularly on making AI methods more efficient and ensuring that they can be used responsibly and effectively by non-experts. This area is referred to in English as automated artificial intelligence; it focuses particularly on machine learning and is specifically about achieving good results even without being an expert.
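
To make the idea behind automated machine learning a little more concrete, here is a minimal sketch – purely illustrative, not taken from the interview and not one of Professor Hoos's actual tools – of what the automation can look like: a program tries out candidate model configurations on held-out data and keeps whichever generalizes best, so that a non-expert does not have to hand-tune anything. The synthetic data, the polynomial model, and the candidate settings below are all assumptions made up for the example.

```python
# Illustrative sketch only: automated model selection in miniature.
# A tiny "AutoML"-style loop picks the polynomial degree and the
# regularization strength that generalize best to held-out data,
# so no expert has to tune these settings by hand.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: a noisy curve standing in for real measurements.
x = np.linspace(-3, 3, 200)
y = np.sin(x) + 0.3 * rng.standard_normal(x.size)

# Split into a training part (for fitting) and a validation part (for judging).
is_train = rng.random(x.size) < 0.7
x_tr, y_tr = x[is_train], y[is_train]
x_va, y_va = x[~is_train], y[~is_train]

def validation_error(degree, ridge):
    """Fit a ridge-regularized polynomial on the training split and
    return its mean squared error on the validation split."""
    X_tr = np.vander(x_tr, degree + 1)
    X_va = np.vander(x_va, degree + 1)
    # Closed-form ridge regression: w = (X^T X + ridge * I)^(-1) X^T y
    w = np.linalg.solve(X_tr.T @ X_tr + ridge * np.eye(degree + 1), X_tr.T @ y_tr)
    return float(np.mean((X_va @ w - y_va) ** 2))

# The "automated" part: try out candidate configurations and keep the one
# that performs best on data the model has not seen during fitting.
candidates = [(d, r) for d in range(1, 10) for r in (0.0, 0.01, 0.1, 1.0)]
best_degree, best_ridge = min(candidates, key=lambda c: validation_error(*c))
print("chosen configuration:", best_degree, best_ridge)
```

Real automated-AI systems replace this brute-force loop with far more sophisticated search and learning strategies, but the underlying principle – letting the system choose its own configuration based on measured performance – is the same.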

How can AI help industry and how can it help people?

Hoos: AI plays an increasingly important role in all areas of industry because this sector has traditionally been very data-driven, especially in Germany and Europe. By now, even in areas such as mechanical engineering, a great deal of data is used in production, planning, and design processes, and this clearly influences the quality of what is feasible.

This applies not only to the operation of plants but also to planning and implementation. In this way, industry can become more efficient and competitive in the global market, while production processes can at the same time become more resource-efficient through the use of AI. AI is therefore important when it comes to business and also when it comes to how industry interacts with society and the environment. I find it problematic when AI research, and cutting-edge AI research in particular, is conducted mainly or exclusively by a few industrial companies. Why? AI is generally too important to be left to industry alone. Society should have an interest in ensuring that cutting-edge AI research is also conducted in the public sector. One example: the decoding of the human genome took place about 25 years ago, with a single company leading the way. Then a broad coalition of governments said: this subject is too important to leave it to one company, to industry alone. It’s the same with AI.

Has this perspective already reached the public, especially politics?

Hoos: Not really, not yet. The consequences of technological dependence are only just beginning to emerge. What happens if AI development is concentrated in the hands of a few major US companies and then, due to some political constellation, we Europeans are cut off – just as we were cut off from Russian gas? Our economy is already so heavily AI-driven that this kind of scenario is frightening. I find the idea of such a dependency extremely alarming.

The topic of AI is not new. Did we miss out on certain developments?

Hoos: We can trace the idea of AI back more than 100 years; it was a big dream that a few scientists got behind. This has completely changed in the last 20 years. AI is a reality, no longer just some vision, not just something you see in science fiction movies. This transition from vision to reality is something that policymakers did not quite catch on to in time. And they especially did not understand what the implications might be. And even though both the EU and the German government have taken measures since 2018 to make us worthy competitors in AI, they fall short. Unfortunately, they have not yet led to any real narrowing of the huge gap in key AI competencies between large US-based companies and European companies or even European public institutions. On the contrary, the gap has actually widened over the past five years. Not because there has been no investment, but rather because there has been too little investment and, above all, too little focused investment. We have a lot of catching up to do.

AI a century ago? Before computers existed?

Hoos: Actually, the vision goes back much further – the vision of constructing machines that emulate important human abilities, such as playing chess. At the time, this was a popular fairground attraction in which a person was hidden inside a cabinet and a machine appeared to play chess against a human opponent. But the vision that something mechanical, something algorithmic, could do this was already there. And the origins of the computer as a concept are very closely interwoven with the origins of the idea that this machine should be able to do more than just calculate.

Is AI therefore a dream of humankind?

Hoos: AI is most certainly one of humankind's great dreams. For me, it’s on a par with the dream of flying or the dream of leaving our planet, of discovering other worlds.

How does AI actually work? How does a machine learn?

Hoos: To explain this well, you need a few semesters of computer science, followed by a few more semesters of specialized AI studies. Basically, it entails the clever combination of statistical methods that are used to draw certain conclusions from data. Optimization is an incredibly important topic in AI. In very rough terms, you can describe machine learning like this: We want to describe a lot of data in a relatively simple mathematical way so that we can relate new data to what is already there. So first we say: This is a face, and then: This is the face of Mr. Hoos. Finding that simple description is the optimization process. In other areas of AI, logic is the central issue. Logic is what makes it possible to run AI algorithms in the first place. Basically, it is an extension of classical logic, which was developed and practiced by the ancient Greeks over 2,000 years ago. This is automated and made efficient.
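
As a purely illustrative sketch of this "describe the data simply, then relate new data to it" idea (the two-dimensional toy data below stand in for feature vectors; they are assumptions for the example, not how face recognition actually works), each class of training examples can be summarized by its mean – the single point that minimizes the summed squared distance to the examples, a tiny optimization problem – and a new observation is then assigned to whichever summary it is closest to:

```python
# Illustrative sketch: machine learning as "simple description + comparison".
# Each class of training data is described by its mean vector -- the point
# that minimizes the summed squared distance to the examples -- and a new
# observation is related to these descriptions by picking the nearest one.
import numpy as np

rng = np.random.default_rng(1)

# Made-up training data: two classes of 2-D points standing in for
# feature vectors (e.g. measurements extracted from images).
class_a = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(50, 2))
class_b = rng.normal(loc=[3.0, 2.0], scale=0.5, size=(50, 2))

# "Learning": compress each pile of data into one simple description.
descriptions = {
    "class A": class_a.mean(axis=0),
    "class B": class_b.mean(axis=0),
}

def classify(point):
    """Relate a new data point to what is already there:
    return the name of the class whose description is closest."""
    point = np.asarray(point, dtype=float)
    return min(descriptions, key=lambda name: np.linalg.norm(point - descriptions[name]))

print(classify([2.8, 1.9]))   # expected: class B
print(classify([0.2, -0.1]))  # expected: class A
```

Real machine-learning systems use far richer mathematical descriptions than a single mean per class, but the pattern of fitting a compact model by optimization and then comparing new data against it is the same.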

Were you a math whizz?

Hoos: That depends on your definition. I was certainly very interested in math. When it comes to math, many people think of calculations. I personally find them useful, but not particularly exciting. But real math has much more to do with proving and modeling, which has fascinated me for many years. And the automation of proofs has a lot to do with AI. So, the step from math to AI was a very natural one for me.

The concept of responsibility was mentioned earlier. What could a responsible approach to AI look like?

Hoos: As with any technology, the first thing is to be able to assess the technology well. This includes a good understanding of its weaknesses and limitations. AI has a lot of catching up to do in this area. Of course, research is being done on this, and we are also researching this issue at our chair. But the trend in AI is to develop what is feasible and less to deal with the limits. This trend needs to change. We need more people dealing with the limits and weaknesses of existing or even future systems. And this will then form the basis for responsible use. Then we need to reflect on what we would like to achieve. Perhaps there are areas in our lives where this consideration leads to us saying: I would rather not use AI here, and that is okay. This reflection is extremely important.


Only humans can reflect on such issues, which brings us to the question: What can AI not do, what can only we humans do?

Hoos: The answer changes nearly every month because what AI can do is evolving very quickly. At present, there are clear weaknesses in the combination of learning and logical reasoning, for example, in language models and assistants like ChatGPT. These are very good at expressing themselves linguistically, at answering questions and generating text. They are, however, very bad at making deep logical inferences and logically consistent arguments. Consequently, bringing learning and reasoning together is a very important issue in modern AI. Another area where AI still has clear limitations is creativity. Of course, there are AI systems that can generate impressive images, perhaps even movie sequences or music. This is so advanced that even experts cannot tell what is AI-generated and what is created by humans. But that does not mean AI is on a par with us. For example, AI has not yet convincingly written a novel. Nor has it generated, without human reworking, a really serious piece of music with the same artistic quality as, say, a fugue by Bach. So, AI definitely still has its limits.

As a researcher, do you approach AI with boundless enthusiasm? Or are you also skeptical, perhaps even apprehensive?

Hoos: For me, enthusiasm for AI and the urge to use cutting-edge research to push into areas where no one has gone before are intertwined with concerns about how to use it responsibly. We reconcile this by advancing the current state of the art while always bearing in mind our responsibility and trying to better understand AI systems. Because AI systems and algorithms are so complex that even experts do not fully understand them. We need to take the time and invest our energy not in always just developing the next system, but in understanding in depth what we have already built.

Do we have enough experts who can master this topic?

Hoos: That is certainly one of the biggest problems. We lack experts in the field of AI, and we cannot train them fast enough either. Here at RWTH and at many other universities, we are of course doing our best, but it also takes certain basic skills to develop expertise in the field of AI. So the need is much greater than the available expertise, both in industry and in the public sector. In the public sector, the situation is even tougher because it cannot even pay truly competitive salaries. Just imagine AI systems being used in public administration in the future. The city of Aachen will probably not be able to afford AI experts on a permanent basis, so how are these systems supposed to be developed or even reasonably supported? The lack of experts is a real problem.

Given this background, are you also afraid that AI could get out of control? Or has this perhaps already happened?

Hoos: Of course, I worry that AI use could go wrong. The fear that powerful AI systems could take control is not completely absurd. But that's not the problem we should be primarily concerned about today. Currently, the challenge is that existing systems, and those that will become available in the near future, are not understood well enough to be used responsibly, especially by people with limited expertise. People don't know the weaknesses and limitations of these systems well enough, and that's where we need to start. That's also where my research comes in: providing support through specially designed AI systems, for those with limited expertise.

That means AI would have to be monitored more intensively, but we can't afford it.

Hoos: Monitoring always has a negative connotation. We have to give ourselves guidelines and rules, just like with other products. You wouldn't drive a truck over a bridge built by second-semester students, nor would you sit in an airplane that wasn't subject to rigorous quality standards and controls. But that's exactly what has been lacking in the field of AI so far and we need this, especially when using AI in sensitive areas, for example in medicine, in sections of public administration, where fairness plays an important role, and, of course, in production.

Is the concentration of power also a problem, i.e. access of a few to AI systems?

Hoos: Yes. Many areas of AI research are driven by a few commercial interests and thus by a very small number of people. On the one hand, they are pursuing this development; on the other hand, they are also the ones primarily profiting from it. And that cannot be the ideal, at least from a European perspective. However, AI use is subject to cultural differences. One example: In the European healthcare system, everyone should be treated fairly, or at least according to fairly high minimum standards. In the U.S. system, as we all know, this is vastly different. AI could be used to predict the actual costs that an aging person would incur. In a more profit-driven system, everyone would be responsible for their own health first and would have to bear those costs. This is not our idea of solidarity, which means that we need different AI systems, namely ones that have values such as solidarity and equal treatment built in to a much greater extent.

AI use thus leads to specific fears for people. People are also worried about losing their jobs. Is their fear justified?

Hoos: For certain professions, it is of course justified. But this is not the first time that jobs have been threatened by new technology. I'm thinking of the great Industrial Revolution, which also completely turned the world of work upside down. We're seeing something similar now, with certain job descriptions undergoing major changes. For example, the programming profession. It’s relatively easy to predict that what most programmers do today will be increasingly supported by AI systems and then taken over by them. However, this can also enhance these job profiles and make them more interesting.

If work changes so much, not everyone will be able to keep up with this development. In your opinion, is human work devalued by AI?

Hoos: In some areas, human labor risks being devalued by AI, while in others it will be upgraded. Let me give you an example. I am a fan of the Aachen comic artist Alfred Neuwald. He is intensively involved with AI and its possibilities, and sees great new tools that make it easier for him to let his creativity run wild. But other artists feel threatened by AI, which is very understandable. If you want to make this development socially compatible, then it must not be too radical or too fast. That is one of the reasons why I think a certain deceleration, if not in the development then at least in the use of AI systems, would be desirable, so we can increase people’s acceptance and trust.

Does progress always result in losses? Do you ask yourself this question as a scientist?

Hoos: I ask myself that question, of course. Even in school, we read Dürrenmatt's The Physicists and learned something about the interaction between science and society. And that is incredibly important to me. Of course, here at the chair, we ask ourselves: What impact does this have on people? In basic research, however, that's not always easy to answer.

You also talk about human-centered AI in this context.

Hoos: Human-centered AI is about more than just trusting a technology. The idea is that AI is developed and used to complement human capabilities, to compensate for human weaknesses, and to allow people to do things they could not do without such systems. Contrast that with AI that simply tries to replace human capabilities – I honestly don't have much faith in that kind of AI.

You are passionate about networking and collaborations. What role does CLAIRE play? What is the idea behind it?

Hoos: Networking is very, very important in AI. Europe needs to be globally competitive and to achieve this, AI expertise and resources must be pooled, and we need close collaborations - in Germany, but also within Europe. And that is precisely the idea behind CLAIRE.

What can CLAIRE do specifically?

Hoos: With the Confederation of Laboratories for Artificial Intelligence Research in Europe, we’re trying to pool our resources in order to advance AI development in Europe. Of course, this is bolstered when we collaborate with others and therefore know what else is going on.

We are also in constant dialogue with the European Commission, and with European parliamentarians. In this way, we try to bring a vision for AI in Europe to the attention of politicians, but we also want to ensure that political developments are based on expert knowledge. Because with a topic as dynamic and complex as AI, politicians are overwhelmed if they have to make decisions on their own or if they have to seek out expertise themselves. It’s much better if the AI research community is organized and can support policymakers here.

Is this enough international networking?

Hoos: We need to step up our game here. The EU has very good mechanisms for promoting top individual researchers or research networks. What is missing, however, is funding for large research institutions. One successful European model in this area is CERN, which is known globally for its world-leading, cutting-edge research in particle physics. We need something like this for AI, to bring together a critical mass of experts who can then work together in an outstanding environment to focus on socially and economically important applications. An AI industry would then also grow up around such a large research institution – comparable to Silicon Valley. We need something like this in Europe to make our AI research globally competitive.

A ‘CERN for AI’ would also be a platform for ongoing dialogue and exchange ....

Hoos: A CERN for AI would essentially have three functions. First, it would serve as a meeting place, a platform for experts to interact and exchange ideas. Second, it would offer a research environment that the various existing research centers, even the large ones, including the Max Planck Institutes, simply cannot finance on their own. Third, it would be a global magnet for talent, creating an alternative to the U.S.-based big tech companies. As a public-sector institution, the Center would be accountable to the public and would largely seek to solve problems in the public interest.

What steps are needed to get this project off the ground? And how much would you need to invest?

Hoos: An AI center of this size would require a one-time investment in the single-digit billion range and probably another 10 billion euros for a ten-year operating period. In other words, we would be talking about a maximum of 20 to 25 billion euros. That sounds like a lot of money, but it can certainly be covered at the EU level – 25 billion euros is far less than half a percent of the annual budgets of all member states. And what you get in return is extremely attractive: unlike, say, particle physics, in AI the path from the lab to the real world is very short. It would be an investment that would most likely break even within a few years. And that is before we even consider important issues such as technological sovereignty.

A CERN for AI – are you pushing at an open door with this idea?

Hoos: When we first presented this idea to the public five years ago, it was well-received by the scientific community. Of course, politicians and policymakers were skeptical at first. However, the idea is gaining momentum, and other organizations in Germany and elsewhere have taken up this cause. It would be a big effort, but it would also be a major breakthrough. In addition to CERN, the European Space Agency (ESA) is another European large-scale project that can serve as an example. The European Union and the European states have already shown many times that such large-scale projects are possible, and that they can be very successful. And I would like a similar project for AI to see the light of day.

What would have to happen in Germany and Europe for them to catch up or even become leaders in the field?

Hoos: First of all, we must realize that we cannot achieve this on our own, that it has to happen at the European level. In other words, European networking is key. And of course, we in North Rhine-Westphalia and in Aachen are doing quite well in this respect. Here in the Meuse-Rhine Euroregion, we have a large AI center in Eindhoven and another one in Leuven in Flanders, Belgium. It obviously makes sense for RWTH and institutions like these to network with each other. In North Rhine-Westphalia, as in other German states, there is a competence center created by the German government. The aim is to close the widening gap with the American and Chinese technology leaders. This is the Lamarr Institute, in which the University of Bonn, a Fraunhofer Institute, and TU Dortmund University are involved. It would be very important not only to have this one institute here in North Rhine-Westphalia, but to closely involve the second large competence center we have in the state, namely here at RWTH, with the Lamarr Institute. It would be helpful if the state government sent a clear signal by saying: this is important to us. As you can see in Bavaria and Baden-Württemberg, for example, additional state funding can take you to another level.

So more funding is needed to get RWTH out of this isolation?

Hoos: AI is a resource-intensive science. In this respect, it is like particle physics: particle physics needs large accelerators; in AI, we need huge computing capacity. It is also not enough to have the capacity just anywhere; it has to be available close to the researchers. It requires a huge investment to keep up with the current major developments at OpenAI, Microsoft, Google, Meta, and Apple, for example. What we need is specially equipped data centers and very good employees who are sufficiently knowledgeable about AI and able to operate these data centers. And we also need to offer an appealing working environment, because we are competing for talent with industry.

If we managed to integrate European activities in the field, what would it take to catch up with the US?

Hoos: We should be open to taking a more dynamic approach. I spent most of my scientific career in Canada, then five years in the Netherlands, and now I am here at RWTH. Even though working here has many advantages, we have significant potential for streamlining and increasing efficiency at the administrative level. For example, we would like to have a faster, simpler procurement process, as time is an important factor, especially in the field of AI, and delays have a strong adverse impact on important research projects and progress. You just need to be faster and more agile. I think university management also sees very clearly that we are not where we want to be – and this applies to other German universities as well. We simply need to catch up.

Is there a life for you beyond AI and your role as a professor? Or do you suffer from a lack of time?

Hoos: There is the myth of the researcher who is one hundred percent dedicated to science. That’s not my thing. At the beginning of my career, I thought about becoming a professional musician, but then I realized I could seriously pursue music on the side. It would not work the other way round – you cannot pursue science on the side.

Classical music? Rock? Jazz? And which instruments do you play?

Hoos: Classical music, of course. Back then I passionately played the bassoon, and I have continued to do so over the years. For the last three or four years I've been playing the instrument I've always dreamed of, the Heckelphone, a rare and fascinating instrument that is insanely fun to play and has a great sound. It's a kind of baritone oboe, and quite accessible to bassoonists – if you ever manage to get hold of an instrument.


Do you have a favorite composer?

Hoos: That's a difficult question. Richard Strauss is fantastic, and Gustav Mahler is very good, too. And of course Bach, you can't get around Bach.

Because you always had to play Bach at music school?

Hoos: Not at all, but because I come from a very musical family, and we listened to Bach a lot. And Bach’s music, in particular, is highly structured, including mathematically structured, and I find it very interesting, both emotionally and intellectually, how logical structures wonderfully go together with things that are intuitively accessible.

You have a soft spot for design. Is this also due to the fact that design can be very structured and clear?

Hoos: I’ve always had a strong interest in design, both graphic and architectural. But it doesn't always have to be straightforward and orderly. I think Gaudí is great, and his work is anything but straightforward. I also find it very exciting to take inspiration from nature – this also happens in AI. I find the contrast between carefully designed, straightforward forms and organic forms very intriguing. I'm also fascinated when nature-inspired and theory-driven methods come together.

Music, art, architecture – do they offer a sort of balance to your professional life, or are they inextricably linked? For example, as you listen to music, do you think about what AI might make of this piece?

Hoos: Fortunately not. These areas do touch and complement each other, but it is important to me to do things in my spare time that fully engage me, and these activities remain separate from my research endeavors.

You are also a pilot – do you prefer to rely on artificial intelligence when you’re flying?

Hoos: When flying, you must be fully focused, which is why I find it fascinating and love doing it. I prefer to fly old-fashioned airplanes, with analog instead of digital instruments, without autopilot and without AI. I personally find it more exciting.

Back to the future: Where will AI take us, how will our children apply it?

Hoos: The question of where the AI journey will take us is really difficult to answer. You always go out on a limb when you start speculating. But we are brave, right? So I will indeed go out on a limb a bit here. I think what we are currently experiencing with ChatGPT, GPT-4, Bing, and their capabilities is only the beginning. First, what our children will grow up with are AI systems that you can interact with using natural language. Second, I think that today's systems, which at first sight can be very convincing but often have little substance, will be much more powerful in a few years. My wish and my hope is – and this is what we are working on at the department – that AI will become more explainable, more robust, and more comprehensible.

More robust in the sense of less susceptible to errors or disruptions?

Hoos: Less susceptible to disruptions, in the sense of deviations from the norm. It also means secure against targeted attacks, that is, manipulations. As a rule, current AI systems are still vulnerable in this respect.

Is it all about the idea of “higher, faster, further”, especially at the international level? There's a famous call from researchers and developers to slow down development for a change.

Hoos: I signed this appeal, the open letter from the Future of Life Institute, and I stand behind it 100 percent. I think it would do us good to slow down. That doesn't mean AI research should be halted, but rather that the technology leaders should exercise a bit more caution, because unchecked AI development carries some risks. And I do not mean that AI systems may develop consciousness and take control. Those are possible developments in the distant future that you can discuss philosophically. What we should really be concerned about are other things, for example, overestimating what current systems can do. Every day, people – both experts and ordinary users – are already attributing properties and capabilities to these systems that they simply do not have and perhaps will never have. And I find this a bit worrying.

So you propose monitoring and verifying what ChatGPT, for example, generates?

Hoos: Exactly. The output needs to be critically examined. And the limits and weaknesses of such systems need to be scientifically explored.

Will we soon be able to differentiate between AI-generated and human-generated content?

Hoos: This is already possible, but the existing tools are not one hundred percent reliable. They aren’t always accurate, but they work surprisingly well. Interestingly – and not surprisingly – they are powered by AI. So you need AI to find out what has been produced by AI. And we are well-advised to develop this further, because it's important to know where AI has been used and where not. Even where it’s not a problem at all to use AI, where it is desirable or even helpful, it would be good to know which content was generated by AI and which was produced by human experts.

There is a great deal of skepticism in the public arena about artificial intelligence. From the perspective of research, would you like to see more enthusiasm for the topic?

Hoos: Definitely. Currently, it is mainly the negative aspects of AI that are perceived – the risks, the limitations. This is not unhealthy, but it's very one-sided. It is very important to see the positive as well. And today, our intelligence – or, more precisely, the limits of our intelligence – has led us into a difficult situation: climate change. It is not necessarily ill will that has led us here, but simply a lack of insight, a lack of prudence, a lack of foresight. And AI systems may help us to better understand climate change and perhaps prevent the worst of its impacts. I personally believe that AI will be a key component in developing climate change solutions. Part of my research is also on the topic of climate science and AI; some of my doctoral candidates are doing research in this area. We need AI, but we need the right kind of AI, and we must use it responsibly.

When it comes to AI, the shining beacon is ChatGPT. Is the hype justified?

Hoos: That's exactly what it is – hype, and it’s over the top. It is somewhat justified by the fact that it involves capabilities that were previously unknown and that there has been a huge technological leap. Finding ChatGPT exciting and groundbreaking is justified, but many people overestimate what this system and similar systems can do.

Is it thanks to ChatGPT that the abstract topic of AI is suddenly tangible for everyone?

Hoos: Definitely. Widely available and easily usable systems fire the imagination, but they also fuel fears and raise hopes that we may not have had before because we were not familiar with the subject. This is also a positive development, because we live surrounded by AI – AI increasingly has a decisive impact on our economy and society, and it also has an influence on our lives. It is good if we engage with it, and ChatGPT makes us engage with this technology more.

Can you tell if a text was written by ChatGPT?

Hoos: I would like to say yes, absolutely. But I'm not really sure.

Am I wrong or is the tone of ChatGPT rather boring?

Hoos: The answers are a bit dry sometimes, but linguistically very good, especially in English. It's mostly a mix of too dry and too chatty. This mix may sometimes seduce you into sensing a kind of intelligence or emotion behind it that is actually not there at all.

You have been appointed Humboldt Professor at RWTH. What does that mean to you?

Hoos: The Alexander von Humboldt Professorship is one of the reasons I came back to Germany. I spent 20 very fruitful and personally very rewarding years in Canada at the University of British Columbia in beautiful Vancouver, but I feel so connected to the culture in Europe that I always wanted to spend a lot of time here. At some point I moved to the Netherlands, and the next step from there back to Germany was thanks to the Humboldt Professorship. It is a great tool for attracting top researchers to Germany, especially in the field of AI. The generously endowed professorship offers fantastic networking opportunities and provides work opportunities that you wouldn't otherwise have, even at a great university like RWTH. In my case, I have to be honest, it was the Alexander von Humboldt Professorship together with the very, very attractive appointment package offered by RWTH that was virtually irresistible and convinced me that you can achieve things here – in the heart of Europe, in the westernmost corner of Germany – that you might not be able to achieve elsewhere. And that's why I'm here and part of the AI Center at RWTH Aachen University.

And Merzbrück Airport is not far away.

Hoos: That's right. The proximity to the research airport plays an important role for me. First, because I am an enthusiastic private pilot. Second, because I think aviation is one of the areas where AI can help us – for example, in the design of aircraft that are more environmentally friendly and energy-efficient, or in improving the use of airspace. RWTH and FH Aachen University of Applied Sciences are jointly offering a great degree program in Aeronautical Engineering and Astronautics, to which I intend to contribute as an AI researcher.

You just mentioned the Humboldt Professorship as a strategic tool; indeed, you will soon be bringing together a number of fellow Humboldt professors here at RWTH.

Hoos: This month, we will be hosting a great event, the Aachen AI Week, which we think will attract much attention, certainly in Germany and most likely also beyond its borders. Then we will also bring together all Alexander von Humboldt professors in the field of AI and from AI-related fields here in Aachen. We will talk about the state of the art in research and discuss where the journey is headed. But we will also be concerned with how to conduct AI research and development in such a way that Europe not only remains competitive, but also succeeds in creating trustworthy AI systems that humans don’t need to be afraid of.

Is the event intended to put Aachen more firmly on the map as a hub for AI innovation?

Hoos: This is certainly about Aachen as a prime location for AI research. Right now, Aachen is still a well-kept secret in the community. There are six AI competence centers in Germany, but Aachen is not among them. However, we just carried out an analysis using standard metrics – citation metrics, for example – and found that Aachen can easily keep up with the official centers of excellence. So there already is a critical mass of AI expertise here that is really fantastic, but we can make it much more visible and bolster it even further by connecting with other players in the field both nationally and abroad. And that’s what I would like to do.

AI Week is also intended to communicate the topic to the wider public?

Hoos: Absolutely, AI Week is not only aimed at experts, but also quite specifically geared towards the general public. After all, AI is of great public interest. And as we already have the expertise here in Aachen, of course we also want to reach out to and involve the wider public. We already have many ideas on how to make the topic of AI accessible, interesting, and fun. I’m sure it will be a great event!