Introduction
This extended roadmap is designed to complement the CDEI's Roadmap to an effective AI assurance ecosystem, which sets out the key steps, roles and responsibilities required to develop an effective, mature AI assurance ecosystem.
Where the short version of the roadmap is designed to be accessible as a quick-read for decision makers, this extended version incorporates further research and examples to provide a more detailed picture of the ecosystem and necessary steps forward.
Additionally, chapter 1 of this extended roadmap offers further context on the AI assurance process, on delivering AI assurance, on the role of AI assurance in broader AI governance, and on the roles and responsibilities for AI assurance. Chapter 3 discusses some of the ongoing tensions and challenges in a mature AI assurance ecosystem.
This extended roadmap will be valuable for readers interested in finding out more information about how to build an effective, mature assurance ecosystem for AI.
Data-driven technologies, such as artificial intelligence (AI), have the potential to bring about significant benefits for our economy and society. AI systems offer the opportunity to make existing processes faster and more effective, and in some sectors offer new tools for decision-making, analysis and operations.
AI is being harnessed across the economy, helping businesses to improve their day-to-day operations, such as achieving more efficient and adaptable supply chain management. AI has also enabled researchers to make a huge leap forward in solving one of biology's greatest challenges, the protein folding problem. This breakthrough could vastly accelerate efforts to understand the building blocks of cells, and could improve and speed up drug discovery. AI presents game-changing opportunities in other sectors too, through the potential for operating an efficient and resilient green energy grid, as well as helping tackle misinformation on social media platforms.
However, AI systems also introduce risks that need to be managed. The autonomous, complex and scalable nature of AI systems (in particular, machine learning) poses risks beyond those of conventional software. These features pose fundamental challenges to our existing methods for assessing and mitigating the risks of using digital technologies.
The autonomous nature of AI systems makes it difficult to assign accountability to individuals if harms occur; the complexity of AI systems often prevents users or affected individuals from explaining or understanding the link between a system's output or decision and its causes, further complicating the assignment of accountability; and the scalability of AI makes it particularly difficult to define legitimate values and governance frameworks for a system's operation, e.g. across social contexts or national jurisdictions.
As these technologies are more widely adopted, there is an increasing need for a range of actors to check that these tools are functioning as expected and to demonstrate this to others. Without being able to assess the trustworthiness of an AI system against agreed criteria, buyers or users of AI systems will struggle to trust them to operate effectively, as intended. Furthermore, they will have limited means of preventing or mitigating potential harms if a system is not in fact trustworthy.
Assurance as a service draws originally from the accounting profession, but has since been adapted to cover many areas such as cyber security and quality management. In these areas, mature ecosystems of assurance products and services enable people to understand whether systems are trustworthy. These products and services include: process and technical standards; repeatable audits; certification schemes; advisory and training services. For example, in financial accounting, auditing services provided by independent accountancy firms enable an assurance user to have confidence in the trustworthiness of the financial information presented by a company.
AI assurance services have the potential to play a distinctive and important role within AI governance. It's not enough to set out standards and rules about how we expect AI systems to be used. It is also important that we have trustworthy information about whether those rules are being followed.
Assurance is important for assessing efficacy, for example through performance testing; for addressing compliance with rules and regulations, for example by performing an impact assessment to comply with data protection regulation; and for assessing more open-ended risks. In the latter category, rules and regulations cannot be relied upon to ensure that a system is trustworthy; more individual judgement is required, for example in assessing whether an individual decision made by an AI system is fair in a specific context.
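To illustrate the efficacy strand, the sketch below shows what a simple performance-testing check within an assurance engagement might look like. It is a minimal sketch under stated assumptions: the scikit-learn-style model interface, the held-out test set and the 0.9 accuracy threshold are all invented for the example; in practice the metric and threshold would be agreed in advance between the assurance parties.

```python
# Minimal sketch of a performance-testing assurance check.
# Assumptions: `model` exposes a scikit-learn-style predict(),
# (X_test, y_test) is a held-out evaluation set, and the 0.9
# accuracy threshold comes from the agreed assurance criteria.
from sklearn.metrics import accuracy_score

ACCURACY_THRESHOLD = 0.9  # agreed in advance between assurance parties


def performance_check(model, X_test, y_test):
    """Return an evidence record: the measured metric and a pass/fail verdict."""
    y_pred = model.predict(X_test)
    accuracy = accuracy_score(y_test, y_pred)
    return {
        "metric": "accuracy",
        "value": accuracy,
        "threshold": ACCURACY_THRESHOLD,
        "pass": accuracy >= ACCURACY_THRESHOLD,
    }
```

The point of returning a structured record rather than a bare number is that the same evidence can then be communicated to other assurance users, which is the second half of the assurance task.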
By ensuring both trust in and the trustworthiness of AI systems, AI assurance will play an important enabling role in the development and deployment of AI, unlocking both the economic and social benefits of AI systems. Consumer trust in AI systems is crucial to widespread adoption, and trustworthiness is essential if systems are going to perform as expected and therefore bring the benefits we want without causing unexpected harm.
An effective AI assurance ecosystem is needed to coordinate appropriate responsibilities, assurance services, standards and regulations to ensure that those who need to trust AI have the sort of evidence they need to justify that trust. In other industries, we have seen healthy assurance ecosystems develop alongside professional services to support businesses, from traditional accounting to cybersecurity services. Encouraging a similar ecosystem to develop around AI in the UK would be a crucial boon to the economy.
For example, the UK's cyber security industry employed 43,000 full-time workers and contributed nearly £4bn to the UK economy in 2019. More recently, research commissioned by the Open Data Institute (ODI) on the nascent but buoyant data assurance market found that 890 data assurance firms are now working in the UK, employing 30,000 staff. The research, carried out by Frontier Economics and glass.ai, noted that 58% of these firms were incorporated in the last 10 years. Following this trend, AI assurance is likely to become a significant economic activity in its own right. AI assurance is an area in which the UK, with particular strengths in legal and professional services, has the potential to excel.
The roadmap provides a vision of what a mature ecosystem for AI assurance might look like in the UK and how the UK can achieve this vision. It builds on the CDEI's analysis of the current state of the AI assurance ecosystem and examples of other mature assurance ecosystems.
The first section of the roadmap looks at the role of AI assurance in ensuring trusted and trustworthy AI. We set out how assurance engagements can build justified trust in AI systems, drawing on insights from more mature assurance ecosystems, from product safety through to quality management and cyber security. We illustrate the structure of assurance engagements and highlight assurance tools relevant to AI and their applications for ensuring trusted and trustworthy AI systems. In the latter half of this section we zoom out to consider the role of assurance within the broader AI governance landscape and highlight the responsibilities of different actors for demonstrating trustworthiness and their needs for building trust in AI.
The second section sets out how an AI assurance ecosystem needs to develop to support responsible innovation, and identifies six priority areas for development.
We set out the current state of the ecosystem and highlight the actions needed in each of these areas to achieve a vision for an effective, mature AI assurance ecosystem. Following this, we discuss the ongoing tensions that will need to be managed, as well as the promise and limits of assurance. We conclude by outlining the role that the CDEI will play in helping deliver this mature AI assurance ecosystem.
We have combined multiple research methods to build the evidence and analysis presented in this roadmap. We carried out literature and landscape reviews of the AI assurance ecosystem to ground our initial thinking, and performed further desk research, including a comparative analysis of mature assurance ecosystems. Based on this evidence, we drew on multidisciplinary research methods to build our analysis of AI assurance tools and the broader ecosystem.
Our desk-based research is supported by expert engagement, through workshops, interviews and discussions with a diverse range of expert researchers and practitioners. We have also drawn on practical experience from assurance pilot projects with organisations adopting or considering deploying AI systems, both in the private sector (in partnership with researchers from University College London) and through the CDEI's work with public sector organisations across recruitment, policing and defence.
Building and maintaining trust is crucial to realising the benefits of AI systems. If organisations don't trust AI systems, they will be less willing to adopt these technologies because they lack confidence that an AI system will actually work or benefit them. They might also not adopt for fear of facing reputational damage and public backlash. Without trust, consumers will also be cautious about using data-driven technologies, as well as sharing the data that is needed to build them.
The difficulty, however, is that these stakeholders often have limited information, or lack the appropriate specialist knowledge to check and verify others' claims, and so cannot tell whether AI systems are actually deserving of their trust.
This is where assurance is important. Being assured is about having confidence or trust in something, for example a system or process, documentation, a product or an organisation. Assurance engagements require providing evidence - often via a trusted independent third party - to show that the AI system being assured is reliable and trustworthy.
The distinction between trust and trustworthiness is important here: when we talk about trustworthiness, we mean whether something is deserving of peoples trust. On the other hand, when we talk about trust, we mean whether something is actually trusted by someone. Someone might trust something, even if it is not in fact trustworthy.
A successful relationship built on justified trust requires both trust and trustworthiness:
Trust without trustworthiness = misplaced trust. If we trust technology or the organisations deploying a technology when they are not in fact trustworthy, we incur potential risks by misplacing our trust.
Trustworthy but not trusted = (unjustified) mistrust. If we fail to trust a technology or organisation which is in fact trustworthy, we incur the opportunity costs of not using good technology.
Fulfilling both of these requirements produces justified trust.
There are two key problems which organisations must overcome to build justified trust:
An information problem: Organisations need to reliably and consistently evaluate whether an AI system is trustworthy to provide the evidence base for whether or not people should trust it.
A communication problem: Organisations need to communicate their evidence to other assurance users and translate this evidence at the appropriate level of complexity so that they can direct their trust or distrust accordingly.
The value of assurance lies in overcoming both of these problems to enable justified trust.
Assurance requires measuring and evaluating a variety of information to show that the AI system being assured is reliable and trustworthy. This includes how these systems perform, how they are governed and managed, whether they are compliant with standards and regulations, and whether they will reliably operate as intended. Assurance provides the evidence required to demonstrate that a system is trustworthy.
Assurance engagements rely on clear metrics and standards against which organisations can communicate that their systems are effective, reliable and ethical. Assurance engagements therefore provide a process for (1) making and assessing verifiable claims to which organisations can be held accountable, and (2) communicating these claims to the relevant actors so that they can build justified trust, where a system is deserving of their trust.
This challenge of assessing the trustworthiness of systems, processes and organisations to build justified trust is not unique to AI. Across different mature assurance ecosystems, we can see how different assurance models have been developed and deployed to respond to different types of risks that arise in different environments. For example: from risks around professional integrity, qualifications and expertise in legal practice; to assuring operational safety and performance risks in safety critical industries, such as aviation or medicine.
The requirements for a robust assurance process are most clearly laid out in the accounting profession, although we see very similar characteristics in a range of mature assurance ecosystems.
In the accounting profession, the five elements of assurance are specified as:
a three-party relationship between a responsible party, an assurance practitioner and the intended users
an agreed, appropriate subject matter
suitable criteria against which the subject matter is assessed
sufficient, appropriate evidence
a written assurance report setting out the practitioner's conclusion
The accounting model is helpful for thinking about the structure that AI assurance engagements need to take. The five elements help to ensure that information about the trustworthiness of different aspects of AI systems is reliably evaluated and communicated.
While this roadmap draws on the formal definitions developed by the accounting profession, similar roles, responsibilities and institutions for standard setting, assessment and verification are present across the range of assurance ecosystems - from cybersecurity to product safety - providing transferable assurance approaches.
Within these common elements, there is also variation in the use of different assurance models across mature ecosystems. Some rely on direct performance testing, while others rely on reviewing processes or ensuring that accountable people have thought about the right issues at the right time. In each case, the need to assure different subject matters has led to variation in the development and use of specific assurance models, to achieve the same ends.
In this section, we will build on our analysis of AI assurance and how it can help to build justified trust in AI systems, by briefly explaining some of the mechanisms that can be used to deliver AI assurance. We will explore where they are useful for assuring different types of subject matter that are relevant to the trustworthiness of AI systems.
A more detailed exploration of AI assurance mechanisms and how they apply to different subject matter in AI assurance is included in our AI assurance guide.
There are multiple approaches to delivering assurance. The spectrum of assurance techniques offers different processes for providing assurance, enabling assurance users to have justified trust in a range of subject matters relevant to the trustworthiness of AI systems.
On one end of this spectrum, impact assessments are designed to account for uncertainty, ambiguity and the unobservability of potential future harms. Impact assessments require expertise and subjective judgement to account for these factors, but they enable standardised processes for qualitatively assessing potential impacts. Assurance can be provided against these processes and the mitigation strategies put in place to deal with potential adverse impacts.
At the other end of this spectrum, formal verification is appropriate for assessing trustworthiness for subject matters which can be measured objectively and with a high degree of certainty. It is ineffective if the subject matter is ambiguous, subjective or uncertain. For example, formal guarantees of fairness cannot be provided for an AI system's outputs.
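To make this contrast concrete, here is a toy sketch of the kind of precisely defined, exhaustively checkable property that lends itself to formal assessment, in deliberate contrast to a contextual fairness judgement. The risk_score function, its input range and the monotonicity property are all invented for illustration; real formal verification would rely on solver-based tooling rather than brute-force enumeration.

```python
# Toy illustration of a formally checkable property versus a subjective one.
# The scoring rule and input grid are assumptions made for this example;
# real formal verification would use solver-based tooling, not enumeration.

def risk_score(income: int) -> float:
    """A deliberately simple stand-in for a model's scoring rule."""
    return max(0.0, 1.0 - income / 100_000)


def check_monotone_decreasing(lo: int, hi: int, step: int) -> bool:
    """Exhaustively verify: higher income never yields a higher risk score.
    Decidable because the property is precise and the input space is finite."""
    scores = [risk_score(i) for i in range(lo, hi + 1, step)]
    return all(a >= b for a, b in zip(scores, scores[1:]))


print(check_monotone_decreasing(0, 100_000, 1_000))  # True: property holds
# By contrast, "is this individual decision fair in context?" admits no such
# exhaustive check: it needs agreed criteria and expert judgement.
```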
AI assurance services are a distinctive and important aspect of broader AI governance. AI governance covers all the means by which the development, use, outputs and impacts of AI can be shaped, influenced and controlled, whether by government or by those who design, develop, deploy, buy or use these technologies. AI governance includes regulation, but also tools like assurance, standards and statements of principles and practice.
Regulation, standards and other statements of principles and practice set out criteria for how AI systems should be developed and used. Alongside this, AI assurance provides the infrastructure for checking, assessment and verification, to provide reliable information about whether organisations are following these criteria.
An AI assurance ecosystem can offer an agile regulatory market of assurance services, consisting of both for-profit and not-for-profit services. This regulatory market can support regulators as well as standards development bodies and other responsible AI authorities to ensure trustworthy AI development and deployment while enabling industry to innovate at pace and manage risk.
AI assurance services will play a crucial role in a regulatory environment by providing a toolbox of mechanisms and processes to monitor regulatory compliance, as well as the development of common practice beyond statutory requirements to which organisations can be held accountable.
Compliance with regulation
AI assurance mechanisms facilitate the implementation of regulation and the monitoring of regulatory compliance in the following ways: implementing and elaborating rules for the use of AI systems in specific circumstances; translating rules into practical forms useful for end users and evaluating alternative models of implementation; and providing technical expertise and capacity to assess regulatory compliance across the system lifecycle.
Assurance mechanisms are also important in the international regulatory context. Assurance mechanisms can be used to facilitate assessment against designated technical standards that can provide a presumption of conformity with essential legal requirements. The presumption of conformity can enable interoperability between different regulatory regimes, facilitating trade. For example, the EU's AI Act states that compliance with standards "should be a means for providers to demonstrate conformity with the requirements of this Regulation".
Managing risk and building trust
Assurance services also enable stakeholders to manage risk and build trust by ensuring compliance with standards, norms and principles of responsible innovation, alongside or as an alternative to more formal regulatory compliance. Assurance tools can be effective as post-compliance tools where they can draw on alternative, commonly recognised sources of authority. These might include industry codes of conduct, standards, impact assessment frameworks, ethical guidelines, public values, organisational values or preferences stated by the end users.
Post-compliance assurance is particularly useful in the AI context where the complexity of AI systems can make it very challenging to craft meaningful regulation for them. Assurance services can offer means to assess, evaluate and assign responsibility for AI systems impacts, risks and performance without the need to encode explicit, scientific understandings in law.
Effective AI assurance will rely on a variety of actors with different roles and responsibilities for evaluating and communicating the trustworthiness of AI systems. In the diagram below we have categorised four important groups of actors who will need to play a role in the AI assurance ecosystem: the AI supply chain, AI assurance service providers, independent research and oversight, and supporting structures for AI assurance. The efforts of different actors in this space are both interdependent and complementary. Building a mature assurance ecosystem will therefore require an active and coordinated effort.
The actors specified in the diagram are not meant to be exhaustive, but represent the key roles in the emerging AI assurance ecosystem. For example, Business to Business to Consumer (B2B2C) deployment models can greatly increase the complexity of assurance relationships in the real world, where the chain of deployment between AI developers and end consumers can pass through multiple client layers.
Similarly, it is important to note that while the primary role of the supporting structures in developing an AI assurance ecosystem is to set out the requirements for trustworthy AI through regulation, standards or guidance, these actors can also play other roles in the assurance ecosystem. For example, regulators also provide assurance services via advisory, audit and certification functions, e.g. the Information Commissioner's Office's (ICO) investigation and assurance teams assess the compliance of organisations using AI. Government and other public sector organisations also play the executive role when procuring and deploying AI systems.
These actors play a number of interdependent roles within an assurance ecosystem. The table below illustrates each actors role in demonstrating the trustworthiness of AI systems and their own requirements for building trust in AI systems.
This section sets out a vision for a mature AI assurance ecosystem, and the practical steps that can be taken to make this vision a reality. We have based this vision on our assessment of the current state of the AI assurance ecosystem, as well as comparison with more mature ecosystems in other domains.
An effective AI assurance ecosystem matters for the development of AI. Without it, we risk trust without trustworthiness, where risky, unsafe or inappropriately used AI systems are deployed, leading to real-world harm to people, property and society. Alternatively, the prospect of these harms could lead to unjustified mistrust in AI systems, where organisations hesitate to deploy AI systems even where they could deliver significant benefit. Worse still, we risk both of these happening simultaneously.
An effective AI assurance ecosystem will rely on accommodating the perspectives of multiple stakeholders who have different concerns about AI systems and their use, different incentives to respond to those concerns, and different skills, tools and expertise for assurance. This coordination task is particularly challenging for AI, as it is a general purpose group of technologies that can be applied in many domains. Delivering meaningful assurance requires understanding not only the technical details of AI systems, but also relies on subject matter expertise and knowledge of the context in which these systems are used.
The current ecosystem contains the right ingredients for success, but is highly fragmented and needs to mature in a number of different ways to become fully effective. Responsibilities for assurance need to be distributed appropriately between actors, the right standards need to be developed and the right skills are needed throughout the ecosystem.
The market for AI assurance is already starting to grow, but action is needed to shape this ecosystem into an effective one that can respond to the full spectrum of risks and compliance issues presented by AI systems. To distribute responsibilities effectively and develop the skills and supporting structures needed for assurance, we have identified six key areas for development.
In the following sections, we will outline the current state of the AI assurance ecosystem with regard to these six areas and compare this with our vision for a mature future ecosystem. We highlight the roles of different actors and outline important next steps for building towards a mature AI assurance ecosystem.
Early demand for AI assurance has been driven primarily by the reputational concerns of actors in the AI supply chain, along with proactive efforts by AI developers to build AI responsibly. However, pressure on organisations to take accountability for their use of AI is now coming from a number of directions. Public awareness of issues related to AI assurance (especially bias) is growing in response to high-profile failures. We are also seeing increasing interest from regulators, higher customer expectations, and concerns about where liability for harms will sit. The development community is being proactive in this space, managing risks as part of the responsible AI movement; however, others in the ecosystem also need to better recognise their assurance needs.
Increased interest and higher consumer expectations mean that organisations will need to demand more evidence from their suppliers and their internal teams to demonstrate that the systems they use are safe and otherwise trustworthy.
Organisations developing and deploying AI systems already have to respond to existing regulations, including data protection law, equality law and sector-specific regulations. As moves to regulate AI gather pace, organisations will need to anticipate future regulation to remain competitive. This will include both UK sector-based regulation and, for organisations exporting products, non-UK developments such as the EU's AI regulations and the Canadian AIA.
Regulators are starting to demand evidence that AI systems being deployed are safe and otherwise trustworthy, with some regulators starting to set out assurable recommendations and guidelines for the use of AI systems.
In a mature assurance ecosystem, assurance demand is driven by:
Organisations' desire to know that their systems or processes are effective and functioning as intended.
The need for organisations to earn and keep the trust of their customers and staff, by demonstrating the trustworthiness of the AI systems they deploy. This will partly need to happen proactively but will also be driven by commercial pressures.
An awareness of and a duty to address real material risks to the organisation and wider society.
The need to comply with, and demonstrate compliance with, regulations and legal obligations.
The need to demonstrate trustworthiness to the wider public, and to compete on the basis of public trust.
The importance of these drivers will vary by sector. For example, in safety-critical industries the duty to address material risks and build consumer trust will be a stronger driver of assurance demand than in low-risk industries. In most industries where AI is being adopted, the primary driver for assurance services will be gaining confidence that systems will actually work effectively, as intended.
To start to respond to these demands, organisations building or deploying AI systems should be developing a clear understanding of the concrete risks and concerns that arise. Regulators and professional bodies have an important supporting role here in setting out guidance to inform industry about key concerns and drive effective demand for assurance. Once these concerns have been identified, organisations need to think about the sort of evidence that is needed to understand, manage and mitigate these risks, and to provide assurance to other actors in the ecosystem.
In a mature AI assurance ecosystem, those accountable for the use of AI systems will demand and receive evidence that these systems are fit-for-purpose. Organisations developing, procuring or using AI systems should be aware of the risks, governance requirements and performance outcomes that they are accountable for, and provide assurance accordingly. Organisations that are aware of their accountabilities for risks will be better placed to demand the right sort of assurance services to address these risks. As well as setting accountabilities, regulation and standards will play an important role in structuring incentives for assurance i.e. setting assurance requirements and criteria to incentivise effective demand.
The drivers discussed above will inevitably increase the demand for AI assurance, but we still need to take care to ensure that the demand is focused on services that add real value. Many current AI assurance services are focused primarily on aspects of risk and performance that are most salient to an organisation's reputation. This creates risks of deception and ethics washing, where actors in the supply chain can selectively commission or perform assurance services primarily to benefit their reputation, rather than to address the underlying drivers of trustworthiness.
This risk of ethics washing relates to an incentive problem. The economic incentives of actors within the AI supply chain come into conflict with incentives to provide reliable, trustworthy assurance. Incentive problems in the AI supply chain currently prevent demand for AI assurance from satisfactorily ensuring AI systems are trustworthy, and misalign demand for AI assurance with broader societal benefit. Avoiding this risk will require a combination of ensuring that assurance services are valuable and attractive for organisations, but also that assurance requirements whether regulatory or non-regulatory are clearly defined across the spectrum of relevant risks, and organisations are held to account on this basis.
Demand is also constrained by challenges with skills and accountability within the AI supply chain. In many organisations there is a lack of awareness about the types of risks and the different aspects of systems and development processes that need to be assured for AI systems to be trustworthy, and about appropriate assurance approaches for assessing trustworthiness across these different areas. There is also a lack of knowledge and coordination across the supply chain around who is accountable for assurance across different areas. Clearer understanding of accountabilities is required to drive demand for assurance.
As demand for AI assurance grows, a market for assurance services needs to develop in response to limitations in skills and competing incentives that actors in the AI supply chain, government and regulators are not well placed to overcome.
Organisations in the AI supply chain will increasingly demand evidence that systems are trustworthy and compliant as they become aware of their own accountabilities for developing, procuring and deploying AI systems.
However, actors in the AI supply chain won't have the expertise required to provide assurance in all of these areas. In some cases, building specialist in-house capacity to serve these needs will make sense. For example, in the finance industry, model risk management is a crucial in-house function. In other areas, building specialist in-house capacity will be difficult and is unlikely to be an efficient way to distribute skills and resources for providing assurance services.
The business interests of actors in the AI supply chain mean that, without independent verification, first and second party assurance will in many cases be insufficient to build justified trust. Assurance users will be unable to have confidence that the assurance provided by the first party faithfully reflects the trustworthiness of the AI system.
Therefore, as demand for AI assurance grows, a market for external assurance providers will need to grow to meet this demand. This market for independent assurance services should include a dynamic mix of small and large providers offering a variety of services to suit a variety of needs.
A market of AI-specific assurance services has started to emerge, with a range of companies including established professional services firms, research institutions and specialised start-ups beginning to offer assurance services. There is a more established market of services addressing data protection issues, with a relatively new but growing sector of services addressing the fairness and robustness of AI systems. More novel services are also emerging to enable effective assurance, such as testbeds to promote the responsible development of autonomous vehicles. Similarly, the Maritime Autonomy Surface Testbed enables the testing of autonomous maritime systems for verification and proof of concept.
Not all AI assurance will be new, though. In some use cases and sectors, existing assurance mechanisms will need to evolve to adapt to AI. For example, routes such as conformity assessment, audit and certification used in safety assurance will inevitably need to be updated to consider AI issues. Regulators in safety-critical industries are leading the way here. The Medicines and Healthcare products Regulatory Agency (MHRA) is committed to developing the world's leading regulatory system for Software as a Medical Device (SaMD), including AI.
The ICO has also begun to develop a number of initiatives to ensure that AI systems are developed and used in a trustworthy manner. The ICO has produced an AI Auditing Framework alongside draft guidance, which is designed to complement its guidance on Explaining decisions made with AI, produced in collaboration with the Alan Turing Institute. In September 2021, Healthily, the creator of an AI-based smart symptom checker, submitted the first AI explainability statement to the ICO. ForHumanity, a US-led non-profit, has submitted a draft UK GDPR certification scheme for accreditation by the ICO and the United Kingdom Accreditation Service (UKAS).
There are a range of toolkits and techniques for assuring AI emerging. However, the AI assurance market is currently fragmented and at a nascent stage. We are now in a window of opportunity to shape how this market emerges. This will involve a concerted effort across the ecosystem, in both the public and private sectors, to ensure that AI assurance services can meet the UKs objectives for ethical innovation.
Assuring AI systems requires a mix of different skills. Data scientists will be needed to provide formal verification and performance testing, audit professionals will be required to assess organisations' compliance with regulations, and risk management experts will be needed to assess risks and develop mitigation processes.
Given the range of skills required, it is perhaps unlikely that demand will be met entirely by multi-skilled individuals; multidisciplinary teams bringing a diverse range of expertise will be needed. A diverse market of assurance providers needs to be supported to ensure the right specialist skills are available. The UK's National AI Strategy has begun to set out initiatives to help develop, validate and deploy trustworthy AI, including building AI and data science skills through skills bootcamps. It will be important for the UK to develop both general AI skills and specialist assurance skills to do this well.
In addition to independent assurance providers, there needs to be a balance of skills for assurance across different roles in the ecosystem. For example, actors within the AI supply chain will require a baseline level of skills for assurance to be able to identify risks to provide or procure assurance services effectively. This balance of skills should reflect the complexity of different assurance processes, the need for independence, and the role of expert judgement in building justified trust.
Supporting an effective balance of skills for assurance, and more broadly enabling a trustworthy market of independent assurance providers, will rely on the development of two key supporting structures. Standards (both regulatory and technical) are needed to set shared reference points for assurance engagements enabling agreement between assurance users and independent providers. Secondly, professionalisation will be important in developing the skills and best-practice for AI assurance across the ecosystem. Professionalisation could involve a range of complementary options, from university or vocational courses to more formal accreditation services.
The next section will outline the role of standards in enabling independent assurance services to succeed as part of a mature assurance ecosystem. After exploring the role of standards, the following section will expand on the possible options for developing an AI assurance profession.
Standards are crucial enablers for AI assurance. Across a whole host of industries, the purpose of a standard is to provide a reliable basis for people to share the same expectations about a product, process, system or service.
Without commonly accepted standards to set a shared reference point, a disconnect between the values and opinions of different actors can prevent assurance from building justified trust. For example, an assurance user might disagree with the views of an assurance provider about the appropriate scope of an impact assessment, or how to measure the level of accuracy of a system. As well as enabling independent assurance, commonly understood standards will also support the scalability and viability of self-assessment and assurance more generally across the ecosystem.
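To illustrate why an agreed measurement standard matters, the hypothetical example below shows two defensible notions of "accuracy" yielding very different figures for the same classifier on the same imbalanced test set; without a shared definition, an assurance provider and an assurance user can reach opposite conclusions. All numbers are invented for this sketch.

```python
# Hypothetical illustration: two defensible "accuracy" figures for the same
# classifier on the same imbalanced test set. All numbers are invented.
# Confusion counts: 95 true negatives, 0 false positives,
#                   4 false negatives, 1 true positive.
tn, fp, fn, tp = 95, 0, 4, 1

overall_accuracy = (tp + tn) / (tp + tn + fp + fn)           # 0.96
balanced_accuracy = 0.5 * (tp / (tp + fn) + tn / (tn + fp))  # 0.60

print(f"overall accuracy:  {overall_accuracy:.2f}")   # looks excellent
print(f"balanced accuracy: {balanced_accuracy:.2f}")  # exposes weak minority-class performance
```

A standard that fixes the metric definition in advance removes exactly this ambiguity from the assurance engagement.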
There are a range of different types of standards that can be used to support AI assurance, including technical, regulatory and professional standards. The rest of this section will specifically focus on the importance of global technical standards for AI assurance. Global technical standards set out good practice that can be consistently applied to ensure that products, processes and services perform as intended safely and efficiently. They are generally voluntary and developed through an industry-led process in global standards developing organisations, based on the principles of consensus, openness and transparency, and benefiting from global technical expertise and best practice. As a priority, independent assurance requires the development of commonly understood technical standards which are built on consensus.