A group of prominent tech executives will join the Artificial Intelligence Safety and Security Board, a panel tasked with advising the federal government on the use of AI in critical infrastructure.
The Wall Street Journal reported the development today. According to the paper, the panel comprises not only representatives of the tech industry but also academics, civil rights leaders and the chief executives of several critical infrastructure companies. In all, the Artificial Intelligence Safety and Security Board will have nearly two dozen members.
Microsoft Corp. Chief Executive Satya Nadella, Nvidia Corp. CEO Jensen Huang and OpenAI's Sam Altman are among the participants. They will be joined by their counterparts at Advanced Micro Devices Inc., Amazon Web Services Inc., Anthropic PBC, Cisco Systems Inc., Google LLC and IBM Corp.
Secretary of Homeland Security Alejandro Mayorkas is leading the panel. According to the Journal, the Artificial Intelligence Safety and Security Board will advise the Department of Homeland Security on how to safely apply AI in critical infrastructure. The panel's members will convene every three months starting in May.
Besides advising the federal government, the panel will also produce AI recommendations for critical infrastructure organizations such as power grid operators, manufacturers and transportation service providers. The recommendations will reportedly focus on two main topics: ways of applying AI in critical infrastructure and the potential risks posed by the technology.
Multiple cybersecurity companies have observed hacking campaigns that make use of generative AI. In some of the campaigns, hackers are leveraging large language models to generate phishing emails. In other cases, AI is being used to support the development of malware.
The Artificial Intelligence Safety and Security Board was formed through an executive order on AI that President Joe Biden signed last year. The order also called on the federal government to take a number of other steps to address the technology's risks. The Commerce Department will develop guidance for identifying AI-generated content, while the National Institute of Standards and Technology is working on AI safety standards.
The executive order established new requirements for private companies as well. In particular, tech firms developing advanced AI must now share data about new models' safety with the government. This data includes the results of so-called red team tests, evaluations that assess neural networks' safety by simulating malicious prompts.
Several of the AI ecosystem's largest players have made algorithm safety a focus of their research efforts. OpenAI, for example, revealed in December that it's developing an automated approach to addressing the risks posed by advanced neural networks. The method involves supervising an advanced AI model's output using a second, less capable neural network.
Link: CEOs of Microsoft, Nvidia and other tech giants join federal AI advisory board - SiliconANGLE News