Last week, the Artificial Intelligence Regulation (KI-VO) was approved by the EU Parliament by a large majority. Now the law must prove itself in practical application. This also applies to the state, which is bound by fundamental rights, and specifically to the use of artificial intelligence in public administration, which is to be funded in accordance with the Federal Government's AI Action Plan.
To this end, the Federal Ministry of the Interior established the Advisory Center for Artificial Intelligence (BeKI), a central contact and coordination point for AI projects in the federal administration. The first projects promoting AI for business are also being presented: the Federal Ministry for Economic Affairs is currently planning a testing and trial center to develop standards for AI-based robotics.
The European Commission, in turn, has decided to support the authorities of member states in the safe and trustworthy use of artificial intelligence applications. Administrative innovation through AI is thus part of the European data strategy and politically desirable.
In particular, the legislator has introduced a wide range of requirements for the use of language models in administration. Prominent use cases include the planned use of ChatGPT in schools and AI investigative assistants for the police. These systems are thus finding their way into fundamental areas of state sovereignty: the security and freedom of citizens, and the core of democratic education in schools and universities.
“AuthoritiesGPT”
What does this mean in practice? Let's take a look at the value chain leading to the fictitious use of an AI language assistant in a German administrative body. The assistant is meant to sort out the least promising applications from a large number of applications for unemployment benefits. It works on the basis of a language model that a private company has developed over years of work.
The model was trained on texts. With each ingested text segment, it adjusts the parameters of its probability calculations. Based on these calculations, it can now produce the most likely word sequences in a given context. The language model is not developed for any specific application, nor is it designed by the developer or provider as a knowledge database. Instead, it is intended to serve as the basis for a variety of AI systems and to generate meaningful word sequences for an unspecified number of purposes. This simulation of language creates contexts of meaning that appear deceptively convincing.
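The principle described above, that a model counts patterns in training text and then emits the statistically most likely continuation, can be illustrated with a deliberately tiny sketch. The following is not how a modern language model is built (those use neural networks over billions of parameters), but a hypothetical bigram counter that shows the underlying idea of probability-based next-word prediction; the sample text and function names are invented for illustration.

```python
from collections import Counter, defaultdict

# Illustrative toy: count which word follows which in a training text,
# then predict the most frequent follower of a given context word.
training_text = (
    "the applicant submits the form the office reviews the form "
    "the office approves the application"
)

# "Training": tally follower counts for every adjacent word pair.
follower_counts = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follower_counts[current_word][next_word] += 1

def most_likely_next(context_word):
    """Return the most probable next word for a context word, or None."""
    followers = follower_counts.get(context_word)
    if not followers:
        return None
    return followers.most_common(1)[0][0]

# "applicant" was always followed by "submits" in the training text.
print(most_likely_next("applicant"))
```

The model knows nothing about forms or offices; it only reproduces frequencies observed during training, which is exactly why its fluent output can be mistaken for knowledge.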
The EU legislature has established special regulatory measures for such general-purpose models, which serve as the basis for many artificial intelligence applications. Additional obligations apply to models posing systemic risk; the legislator ties the presumption of systemic risk, among other things, to the computing power used to train the model.