In many countries, the absence of an incident reporting framework for AI systems can lead to serious problems, and these problems can become systemic if not addressed promptly. For example, an AI system could harm the public by improperly suspending access to social security payments. Research by CLTR in the UK found that the government’s Department for Science, Innovation & Technology (DSIT) lacks a centralized, up-to-date system for monitoring incidents involving AI systems. While some regulators do collect incident reports, they may not be equipped to capture the novel harms posed by cutting-edge AI technologies.

CLTR’s findings underscore the risks posed by highly capable generative AI models and the need for a more comprehensive incident reporting framework to cover them. Governments and regulatory bodies should therefore develop incident reporting frameworks tailored to the distinct challenges of advanced AI technologies. Doing so would help ensure that these technologies are used safely and ethically, and that any harm is quickly identified and addressed.