• The European Union’s Artificial Intelligence Act – explained

    In June 2023 the European Parliament adopted its negotiating position on the Artificial Intelligence (AI) Act ons ahead of talks with EU member states on the final shape of the law The cornerstone of the AI Act is a classification system that determines the level of risk an AI technology could pose to the health and safety or fundamental rights of a person. The Act prohibits certain AI practices. This WEF articls highlights some aspects. Security of AI is not specifically addressed by the ACT.

  • Compromising LLMs: The advent of AI Malware

    Document from Black hat showing how advanced is the security treat on large language models (as ChatGPT). A whole new generation of malware and manipulation can now run entirely inside of large language models Your personal assistant may be already compromised. The security of these new systems needs to be rethought.

  • G7 Hiroshima Process on Generative Artificial Intelligence (AI)

    This document issued by OECD for the G7 meeting in May 2023 in Hiroshima provides a brief overview of the development of generative AI over time and across countries. The report is indicative of trends identified in the first half of 2023 in a rapidly evolving area of technology.

  • The Road to Secure and Trusted AI

    This report shows a good overview of the last decade in security of AI

  • Hacking AI Is Surprisingly Easy

    How to transform a stop sign in a green sign and other examples on how hacking is surprisingly easy:

  • Europäische Kommission

    Vorschlag für einen Rechtsrahmen für künstliche Intelligenz

  • How to attack Machine Learning

    Machine learning is fragile and relatively exposed to hacking as described in this article in Toward Data Science by Alex Poliakov