• AI-Act: Was die neue KI-Verordnung regelt

    The EU Act is quite extensive. To facilitate its understanding, one can refer to documents from organizations that have analyzed it. Here, you can find an interesting navigation structure.

  • EU AI Act explorer

    The EU Act is quite extensive. To facilitate its understanding, one can refer to documents from organizations that have analyzed it. Here, you can find an interesting navigation structure.

  • The complex patchwork of US AI regulation has already arrived

    The use of AI involves various types of risks, which is why there are significant initiatives aimed at regulating its development and use. In Europe, a law called the EU Act has been enacted and will be introduced in all Union countries. In the United States, there are several initiatives at both the federal level and within certain states. Switzerland has not yet defined its own law but will likely align with the EU Act. Due to its impact for persons and companies it is important to be awre of the requirements that these regulations set.

  • Blueprint for an AI Bill of Rights (US)

    The use of AI involves various types of risks, which is why there are significant initiatives aimed at regulating its development and use. In Europe, a law called the EU Act has been enacted and will be introduced in all Union countries. In the United States, there are several initiatives at both the federal level and within certain states. Switzerland has not yet defined its own law but will likely align with the EU Act. Due to its impact for persons and companies it is important to be awre of the requirements that these regulations set.

  • EU Act

    The use of AI involves various types of risks, which is why there are significant initiatives aimed at regulating its development and use. In Europe, a law called the EU Act has been enacted and will be introduced in all Union countries. In the United States, there are several initiatives at both the federal level and within certain states. Switzerland has not yet defined its own law but will likely align with the EU Act. Due to its impact for persons and companies it is important to be awre of the requirements that these regulations set.

  • Auditing Large Language Models

    To assess the current state and audit readiness for managing AI and its associated risks, there are available guidelines and checklists:

  • Auditing Artificial Intelligence from ISACA

    To assess the current state and audit readiness for managing AI and its associated risks, there are available guidelines and checklists:

  • Die KI-Strategie des ITZBund

    There are increasing guidelines on how to protect systems from AI-related threats.

  • Microsoft Responsible AI Standard General Requirements

    There are increasing guidelines on how to protect systems from AI-related threats.

  • Guidelines for secure AI system development

    There are increasing guidelines on how to protect systems from AI-related threats.

  • Deploying AI Systems Securely

    There are increasing guidelines on how to protect systems from AI-related threats.

  • LLM AI Cybersecurity & Governance Checklist

    There are increasing guidelines on how to protect systems from AI-related threats.

  • Crisis Management Simulations for Top Management

    Effective crisis management, especially in cyber crises, is crucial for organizations. This research explores how companies can prepare for cyber crises, focusing on the role of training and simulations directed at the top management. Findings emphasize recognizing crisis characteristics, clear role assignments, and the importance of communication and decision-making. Training, especially through simulations, is proven effective for crisis preparation. However, there appears to be a gap in training products specifically aimed at top management. The study also outlines key elements and design guidelines for cyber crisis simulations tailored to top managers.

  • AI vulnerabilities / NIST

    AI-based tools are spreading rapidly and will soon be available not only on the internet but directly in our devices (PCs, smartphones, and various other common objects). The security risks have been widely demonstrated both theoretically and practically. Numerous analyses have been published, and countless studies continue to investigate new aspects of the risk. The following documents NIST provide a good taxonomy of the vulnerabilities.

  • AI vulnerabilities / MITRE

    AI-based tools are spreading rapidly and will soon be available not only on the internet but directly in our devices (PCs, smartphones, and various other common objects). The security risks have been widely demonstrated both theoretically and practically. Numerous analyses have been published, and countless studies continue to investigate new aspects of the risk. The following from MITRE provide a good taxonomy of the vulnerabilities.

  • The Enterprise Strikes Back

    The Enterprise Strikes Back: Conceptualizing the HackBot - Reversing Social Engineering in the Cyber Defense Context

  • The European Union’s Artificial Intelligence Act – explained

    In June 2023 the European Parliament adopted its negotiating position on the Artificial Intelligence (AI) Act ons ahead of talks with EU member states on the final shape of the law The cornerstone of the AI Act is a classification system that determines the level of risk an AI technology could pose to the health and safety or fundamental rights of a person. The Act prohibits certain AI practices. This WEF articls highlights some aspects. Security of AI is not specifically addressed by the ACT.

  • Compromising LLMs: The advent of AI Malware

    Document from Black hat showing how advanced is the security treat on large language models (as ChatGPT). A whole new generation of malware and manipulation can now run entirely inside of large language models Your personal assistant may be already compromised. The security of these new systems needs to be rethought.

  • G7 Hiroshima Process on Generative Artificial Intelligence (AI)

    This document issued by OECD for the G7 meeting in May 2023 in Hiroshima provides a brief overview of the development of generative AI over time and across countries. The report is indicative of trends identified in the first half of 2023 in a rapidly evolving area of technology.

  • The Road to Secure and Trusted AI

    This report shows a good overview of the last decade in security of AI

  • Hacking AI Is Surprisingly Easy

    How to transform a stop sign in a green sign and other examples on how hacking is surprisingly easy:

  • Europäische Kommission

    Vorschlag für einen Rechtsrahmen für künstliche Intelligenz

  • How to attack Machine Learning

    Machine learning is fragile and relatively exposed to hacking as described in this article in Toward Data Science by Alex Poliakov