The European Union has recently passed the AI Act, making it the world’s first major law for regulating artificial intelligence. This development comes as governments worldwide are scrambling to introduce measures to control the use of AI technology.
The AI Act aims to establish a comprehensive set of rules for artificial intelligence, emphasizing trust, transparency, and accountability when dealing with new technologies. Belgian Secretary of State for Digitization Mathieu Michel hailed the adoption of the AI Act as a significant milestone for the European Union, acknowledging the need to balance innovation with regulation in a fast-changing technology landscape.
One of the key features of the AI Act is its risk-based approach to regulating artificial intelligence: different applications of AI will be treated differently based on the risks they pose to society. The law prohibits certain AI applications deemed “unacceptable” due to their potential for harm, such as social scoring systems, predictive policing, and emotion recognition in settings like workplaces and schools.
High-risk AI systems, such as those used in autonomous vehicles and medical devices, will undergo stringent evaluation to ensure they do not pose risks to public health, safety, or fundamental rights. AI used in financial services and education will likewise be scrutinized for biases embedded in its algorithms. This approach aims to strike a balance between fostering innovation and safeguarding individual rights and societal interests as a whole.
The passage of the act marks a significant step towards governing the development and deployment of artificial intelligence while still promoting innovation in this rapidly evolving field.