In the run-up to the 2023 IoTSF annual conference we invited Tim Snape of the Artificial Intelligence Group, chair of the conference panel session ‘Strategy, Ethics and Governance in the Age of AI-Powered Cybersecurity’, to give us some insights into the discussion in advance.

In the previous article we discussed the challenges society faces with respect to AI. This article focuses on strategies and methods that can be used to address these challenges. You’ll discover:

  • Why the EU Cyber Resilience Act (CRA) will have a bigger impact on industry than GDPR.
  • Why a Bill of Materials (BOM) is critical to maintaining security.
  • How to achieve trust and transparency without compromising security.

Tim Snape, Artificial Intelligence Group Ltd

Many corporate leaders have stated that AI must be part of their company strategy – or else they will lose out to competitors. Advances in AI seem to be unstoppable.

In the UK and the EU, we place great value on regulations to protect society and the individual citizens within it, but these regulations do not apply and have no force when the technology is used for national security purposes.

The problem, you see, is not with the good guys, who will use their best endeavours to ensure they do the right thing. The potential existential threat we face comes from agencies, institutions and corporations who are implementing AI solutions because they believe they must in order to survive, succeed or win, and for whom ethical and societal considerations are secondary priorities.

In these scenarios, many experts predict that AI technologies will evolve faster than society can control them.

Given this somewhat gloomy forecast, are we doomed? What can and should we do about it?

The purpose of this article is to articulate an approach to AI risk management that might help organisations without impeding their deployment of ethical AI solutions. I will refer to this approach as Safe AI by Design.

The EU AI Act will require organisations deploying AI services to follow a Safe AI by Design risk-based approach (although it is not called that yet), so what will that mean?

Compliance teams are familiar with the GDPR. Companies that do not comply with its requirements risk punitive penalties based on percentages of annual gross turnover. It has been interesting to see how rapidly the larger corporates (i.e. those with the most to lose) have implemented privacy-by-design processes. Despite the frequent announcements of GDPR breaches, its requirements now appear to be taken seriously, and there are indications that the legislation is working, albeit at quite a high cost. My personal experiences with GDPR have been very positive. For the first time, compliance teams have had teeth, with the support of board-level management, to address and fix GDPR non-compliances and, by association, cyber vulnerabilities as well.

When you add the CRA (EU Cyber Resilience Act) and other legislation being created to address security and privacy requirements, things start to get interesting. Pundits are suggesting that the CRA will have as big an impact on industry as GDPR. I disagree: I think it is going to have an even bigger impact, because the CRA will place legal obligations on organisations to report security vulnerabilities and to inform customers what steps they should take to mitigate and protect themselves against the reported vulnerabilities.

The CRA will affect all products produced, sold or used within the EU; 90% of systems are expected to fall within the CRA non-critical definition and can follow a self-assessment process. The remaining 10% are defined as Class I or Class II critical and will require more formal assessment and approval processes. An interesting example of a Class I critical product is a router or switch used in the home. Not only will the manufacturer have to certify the security of the product, but any company selling or supplying routers to end users will have to assure the products as well – or, at the very least, reference (i.e. inherit) the compliance status from the supplier from which it procured the product.

The effect of all these regulations will be cumulative, as each set of requirements adds to the whole – and these requirements will have a cost. It is not clear exactly what the additional economic impact of the AI legislation will be, or who will end up paying for it. Yet the question on legislators’ minds will be whether the costs are outweighed by the benefits. The concern of many is that the difficulty of defining and enforcing such legislation will make it unpoliceable, and that the societal value gained by using regulation to control AI deployments will be far outweighed by the compliance cost.

Secure by Design and Legacy Software

Anyone who has used security analysis tools will know that even the most rigorously developed product is likely to have vulnerabilities. It is the reality of developing software – there will always be bugs. And with legacy systems, as the software ages, it becomes increasingly costly to support. This problem is particularly acute when Open Source Software (OSS) is used.

There has been much debate regarding OSS and the CRA. If a Critical (CRA-defined) application contains OSS, the original developers of the OSS are not responsible for the software they provided – responsibility and accountability rest with the deployer. This means the deployer is relying on the diligence of unpaid community developers to continue to support and maintain the code.

Products that have not been patched or use OSS that is more than a few years (or even just a few months) old can be expected to have vulnerabilities. Fixing these vulnerabilities is not a trivial task, and sometimes it is an impossible one.

To manage this specific issue effectively, an organisation needs to be very careful and selective about the software components it acquires and uses. If the organisation changes an OSS product to satisfy some requirement, it is not sufficient simply to apply the change, test it and use it. The organisation needs to think ten years ahead and consider how any changes it introduces, however small, will be supported in the future. It may ensure the changes are committed upstream into the OSS build, so they become standard supported features, or maintain a local build of the software that parallels the standard supported version.

Many companies do not even think about this support cost until cyber compliance assessors perform an analysis and discover vulnerabilities caused by unpatched libraries.

The solution to this problem requires a transparent approach.

One of the activities that needs to be performed is an analysis of the componentry used to build a system. This analysis should provide a detailed inventory of all software components used in a product or service. This Software Bill of Materials (SBOM) identifies every software release in use and makes it possible to track the issues affecting each component.
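To make this concrete, here is a minimal sketch in Python of the kind of record an SBOM entry might capture. The field names are illustrative assumptions; real standards such as SPDX and CycloneDX define the canonical schemas.

```python
from dataclasses import dataclass, field

@dataclass
class SbomComponent:
    """One illustrative SBOM entry; SPDX and CycloneDX define richer schemas."""
    name: str       # component name, e.g. "openssl"
    version: str    # exact release in use, e.g. "1.1.1k"
    supplier: str   # who produces and maintains it
    licence: str    # declared licence, e.g. "Apache-2.0"
    hashes: dict = field(default_factory=dict)  # integrity, e.g. {"sha256": "..."}

# A product's SBOM is then simply the complete inventory of such records.
sbom = [
    SbomComponent("openssl", "1.1.1k", "OpenSSL Project", "Apache-2.0"),
    SbomComponent("zlib", "1.2.11", "zlib Project", "Zlib"),
]
```

However many entries the inventory holds, each record ties a component to an exact version, which is what makes tracking issues per component possible.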

In a world that is becoming increasingly software-dependent, the role of the SBOM in maintaining software integrity, transparency, and security is critical.

[See sidebar: ‘10 Benefits of SBOMs’]

There are multiple Software Composition Analysis (SCA) products on the market that support this activity and can be used to scan and analyse bodies of code to generate an SBOM; there are also a number of standards that define the elements of an SBOM. This ‘analysis and documentation’ approach using SBOMs is the first step. When you have an SBOM, you can quickly analyse supplier and support requirements to identify potential issues. With a reasonably complete inventory in your SBOM, it becomes a relatively simple exercise to analyse your entire software base for known vulnerabilities.

Typically the cadence for pen-testing critical systems is “at least” annually. But if the organisation has an SBOM, it is possible to automate inventory analysis on a daily or even an hourly basis, as the sketch below illustrates. This form of transparency is internal: the information is available within the organisation.
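As a hedged illustration of what that automation might look like, the following Python sketch takes SBOM entries and asks the public OSV.dev vulnerability database about each one. The OSV /v1/query endpoint is real; the surrounding structure and the sample entries are assumptions for illustration.

```python
import json
import urllib.request

OSV_API = "https://api.osv.dev/v1/query"  # public known-vulnerability database

def known_vulnerabilities(name: str, version: str, ecosystem: str) -> list:
    """Return OSV vulnerability IDs recorded against one SBOM entry."""
    query = {"version": version,
             "package": {"name": name, "ecosystem": ecosystem}}
    request = urllib.request.Request(
        OSV_API,
        data=json.dumps(query).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        result = json.load(response)
    return [vuln["id"] for vuln in result.get("vulns", [])]

# Sweep the whole inventory; schedule this daily (or hourly) as an automated job.
sbom = [("jinja2", "2.4.1", "PyPI"), ("lodash", "4.17.15", "npm")]
for name, version, ecosystem in sbom:
    findings = known_vulnerabilities(name, version, ecosystem)
    if findings:
        print(f"{name} {version}: {', '.join(findings)}")
```

Because the script needs only the inventory, the vulnerability findings it produces can stay inside the organisation while the methodology itself is shared openly.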

Where transparency is likely to be challenging in the future is the need to communicate vulnerabilities. The better companies are well used to identifying and remediating adverse cyber findings; what they are not used to is having to communicate adverse findings to their customers. The justification for not communicating vulnerabilities is that they could be security sensitive – much as a company might want to make its customers aware of vulnerabilities, it certainly does not want to inform potential attackers of them. Generally, customers are only made aware of product vulnerabilities when there is a remedy requiring some action by the customer, or when there is a successful attack against the product or service and a legal or contractual obligation to inform the customer.

A big advantage of managing an SBOM and using it to identify known vulnerabilities is that you can be transparent about the methodology you are using with your external stakeholders – customers, investors, assessors and regulators – without having to supply them with the specific vulnerability findings, thus maintaining the confidentiality of this security-sensitive information.

The CRA will require companies to be transparent and disclose vulnerabilities. Assuming the CRA is adopted in 2024, there will be a two-year transition period before its obligations apply, and it will be interesting to see how companies address this requirement. Now add the EU AI Act to this body of legislation and there is the potential for extremely “interesting times”! This will be especially the case for legacy uses of AI that fall into the prohibited category and are known to be non-compliant.

What do we need to do?

If you examine the requirements of security and privacy by design, and then include what would be required for a ‘Safe AI by Design’ approach, you get an interesting response.

[See sidebar: ‘Safe AI by Design ChatGPT Requirements’]

There are two interesting things about this set of common compliance requirements:

  1. the degree of overlap between the different compliance areas; and
  2. the need for high-quality documentation for each of them.

The key message to legislators is that all these compliance areas – Privacy, Security, Resilience, H&S, IPR, Regulatory and AI – are best met by focusing on good-quality information and documentation. The transparency created by having high-quality documentation empowers an organisation to meet legal compliance requirements, whatever they may be. And importantly, organisations can also expect to create better, more reliable products and services, with lower development and maintenance costs.

However, this approach will not give an immediate and complete solution, if such a perfect solution is even possible. Yet it is the first and most important step to understanding an organisation’s IT systems and levels of compliance across all areas. With this information, an organisation has the detail required to risk-assess each and every non-compliance and then decide how to manage each one.

This does not mean an organisation will have zero risks – far from it. Transparency allows an organisation to make pragmatic, evidence-based decisions about which risks to fix and which to tolerate. In Japan, there is a term for this, Kaizen: a philosophy of continuously improving all functions and involving all employees, from the CEO to the assembly-line workers.

In practice it is not that easy: you cannot change an organisation’s products overnight so that they are of high quality when the legacy systems, cultures and processes they are built on are of low quality. In these situations, transforming an organisation is likely to be a long, hard and costly slog. But, and here is the big but, transitioning from a “just do enough” culture to a quality culture focused on transparency will make the organisation stronger and better able to evolve, adapt and succeed.

So how, and where, should an organisation start? The key is in the list of ‘Safe AI by Design’ compliance functions kindly provided by ChatGPT in the sidebar. The overlap between them highlights transparency as a requirement for all compliance areas. In fact, without transparency, quality becomes a siloed activity with limited added value. Adopting Kaizen principles and involving all stakeholders maximises the opportunities and benefits of a holistic quality approach across the whole organisation.

AI Transparency

With software systems, we can achieve a large transparency benefit by having good documentation, managing access to it appropriately, and making sure it is kept accurate and up to date.

Managing an SBOM, or an equivalent BOM, for AI products presents unique challenges given the complex nature of AI development processes, which include not just software components but also datasets, training parameters, models and more. And on top of that, these aspects must be tracked over the lifecycle of the AI product. Here is a simplified list of the general requirements to consider:

  • Component Identification
  • Provenance & Attribution
  • Version Control & Change Management
  • Security & Vulnerability Management
  • Transparency & Documentation
  • Integration & Compatibility Management
  • Quality & Performance Metrics
  • Ethics & Bias Management
  • Reproducibility
  • Data Privacy & Regulatory Compliance

When managing an SBOM or BOM for AI products, organisations should follow a holistic approach that covers the intricacies of AI/ML development while ensuring transparency, security and compliance throughout the product’s lifecycle; a sketch of what a single AI BOM record might capture follows.
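As a purely hypothetical illustration, the Python sketch below extends the earlier component record to a single AI BOM entry covering several of the requirements above. Every field name here is an assumption; no standard AI BOM schema is implied.

```python
from dataclasses import dataclass

@dataclass
class AiBomEntry:
    """Illustrative AI BOM record; all fields are assumptions, not a standard."""
    model_name: str           # component identification
    model_version: str        # version control & change management
    base_model: str           # provenance & attribution
    training_data: list       # dataset identifiers and versions
    training_params: dict     # hyperparameters, for reproducibility
    evaluation_metrics: dict  # quality & performance metrics
    bias_assessment: str      # reference to an ethics & bias review
    licences: list            # licensing and regulatory compliance

entry = AiBomEntry(
    model_name="fraud-classifier",
    model_version="2.3.0",
    base_model="distilbert-base-uncased",
    training_data=["transactions-2022Q4 v7"],
    training_params={"epochs": 3, "learning_rate": 2e-5},
    evaluation_metrics={"f1": 0.91},
    bias_assessment="ethics-review-2023-014",
    licences=["Apache-2.0"],
)
```

The point of the sketch is that the same inventory discipline applied to software components can be extended, field by field, to models and data.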

Conclusion

The EU regulators are moving towards a risk-based approach for AI that is consistent with similar requirements in cyber and privacy. To maximise the quality and value of its compliance activities, an organisation would do well to focus on understanding and describing its products and services, and the interactions and dependencies between them.

Maintaining a Bill of Materials appropriately and consistently is a necessary prerequisite for meeting all other compliance requirements.

When this approach is adopted across multiple compliance areas, it facilitates and simplifies meeting the transparency and explainability principle, which in turn makes it possible to satisfy the other principles:

  • Safety, security and robustness
  • Fairness
  • Accountability and governance
  • Contestability and redress

And finally, don’t let perfection be the enemy of good. There will always be improvements to make to any quality system, so the system itself needs to follow a Kaizen iterative approach to improvement.

10 Benefits of SBOMs:

A Software Bill of Materials (SBOM) is essentially a comprehensive list of components, libraries, and modules that make up a software product. It often details their versions, origins, and other relevant information. There are numerous benefits and opportunities in maintaining an SBOM – here are 10:

  1. Improved Security: Vulnerability Management and Incident Response.
  2. Better Compliance Management: Licence and Regulatory Compliance.
  3. Efficient Dependency Management: Up-to-date Components and Conflict Avoidance.
  4. Enhanced Transparency and Trust: Vendor and End-user Relationships.
  5. Streamlined Development and Operations (DevOps): Faster Troubleshooting and Optimised Builds.
  6. Mergers and Acquisitions (M&A): An SBOM can streamline the due diligence process, as it offers a clear picture of the software assets.
  7. Facilitation of Patch Management: A clear view of all software components and their versions.
  8. Cost Savings: Managing licences effectively and avoiding redundant purchases.
  9. Improved Collaboration: Teams within an organisation (development, security, operations) can collaborate more efficiently with a shared, transparent view of the components used in software projects.
  10. Business Continuity: By understanding dependencies, organisations can plan more effectively for business continuity and disaster recovery.

Safe AI by Design ChatGPT Requirements

If you examine the requirements of security and privacy by design, and then include what would be required for a ‘Safe AI by Design’ approach, you get an interesting response when you feed the query into ChatGPT using the GPT-4 model:

  1. Proactive Integration
    • Objective: Integrate safety, privacy, and security from the initial stages and maintain these principles throughout the project lifecycle.
  2. Transparent Objectives and Boundaries
    • Objective: Define the purpose of the system, its capabilities, and limitations.
  3. Comprehensive Data Management
    • Objective: Ensure data is managed with integrity, respect for privacy, and robust security.
  4. User-Centric Principles
    • Objective: Prioritize user rights, interests, and feedback.
  5. Multi-layered Defense and Robustness
    • Objective: Implement multiple layers of protection measures and ensure system resilience.
  6. Visibility, Transparency, and Accountability
    • Objective: Ensure clarity and responsibility in system operations.
  7. Ethical and Fair Operations
    • Objective: Ensure that the system operates within ethical bounds and promotes fairness.
  8. Secure Defaults and User Empowerment
    • Objective: Ensure default configurations prioritize security and privacy while allowing users meaningful control.
  9. Continuous Learning and Evolution
    • Objective: Adopt a mindset of continuous improvement and evolution.
  10. Training, Awareness, and Collaboration
    • Objective: Foster a culture of knowledge and cooperation across stakeholders.

We’ll be exploring these issues further at our annual conference, which takes place on November 7th at the IET in London with the theme: Securing the Internet of Things in the Era of Artificial Intelligence. Why not come along – you’re invited!