Increasing Transparency at the National Security Commission on Artificial Intelligence – Lawfare

In 2018, Congress established the National Security Commission on Artificial Intelligence (NSCAI), a temporary, independent body tasked with reviewing the national security implications of artificial intelligence (AI). But two years later, the commission's activities remain little known to the public. Critics have charged that the commission has conducted activities of interest to the public outside of the public eye, only acknowledging that meetings occurred after the fact and offering few details on evolving commission decision-making. As one commentator remarked, "Companies or members of the public interested in learning how the Commission is studying AI are left only with the knowledge that appointed people met to discuss these very topics, did so, and are not yet releasing any information about their recommendations."

That perceived lack of transparency may soon change. In June, the U.S. District Court for the District of Columbia handed down its decision in Electronic Privacy Information Center v. National Security Commission on Artificial Intelligence, holding that Congress compelled the NSCAI to comply with the Federal Advisory Committee Act (FACA). Under FACA, the commission must hold open meetings and proactively provide records and other materials to the public. This decision follows a ruling from December 2019, holding that the NSCAI must also provide historical documents upon request under the Freedom of Information Act (FOIA). As a result of these decisions, the public is likely to gain increased access to and insight into the once-opaque operations of the commission.

Lawmakers established the NSCAI in Section 1051 of the John S. McCain National Defense Authorization Act (NDAA) for fiscal 2019, which tasked the commission with "consider[ing] the methods and means necessary to advance the development of artificial intelligence, machine learning, and associated technologies to comprehensively address the national security and defense needs of the United States." The commission's purview includes an array of issues related to the implications and uses of artificial intelligence and machine learning for national security and defense, including U.S. competitiveness and leadership, research and development, ethics, and data standards.

The NSCAI is currently chaired by Eric Schmidt, the former executive chairman of Google's parent company, Alphabet. The commission's 15 members, appointed by a combination of Congress, the secretary of defense and the secretary of commerce, receive classified and unclassified briefings, meet in working groups and engage with industry. They report their findings and recommendations to the president and Congress, including in an annual report.

The Electronic Privacy Information Center (EPIC), a research center focused on privacy and civil liberties issues in the digital age, submitted a request to the NSCAI in September 2019, seeking access to upcoming meetings and records prepared by the commission under FACA and FOIA. In the six-month period prior to the request, the NSCAI held more than a dozen meetings and received more than 100 briefings, according to EPIC. At the time it filed the lawsuit, EPIC noted that the commission's first major report was also one month overdue for release. When the commission did not comply with the requests under FOIA and FACA, EPIC brought suit under the two laws.

EPIC's complaint alleged that the NSCAI had conducted its operations opaquely in its short lifespan. Since its establishment, the complaint charged, the commission "has operated almost entirely in secret with meetings behind closed doors[,]" and "has failed to publish or disclose any notices, agendas, minutes, or materials." If Congress had intended the NSCAI to comply with FOIA and FACA, such activity would not satisfy the statutes' requirements. Given the potential implications of federal artificial intelligence decisions for privacy, cybersecurity, human rights, and algorithmic bias, EPIC argued that "[p]ublic access to the records and meetings of the AI Commission is vital to ensure government transparency and democratic accountability." The complaint also noted the potential ramifications of commission activities for the government, private sector, and public, as well as the importance of artificial intelligence safeguards in the national security context due to limited public oversight. According to EPIC, increasing public participation would permit greater input into the development of national AI policy by those whose privacy and data security could potentially be affected.

The U.S. District Court for the District of Columbia addressed EPIC's FOIA claim in a December 2019 decision. FOIA requires agencies to disclose their records to a party upon request, barring exemptions (including for information classified to protect national security). EPIC alleged that the NSCAI failed to uphold its obligations under FOIA to process FOIA requests in a timely fashion; to process EPIC's FOIA requests in an expedited manner, in accordance with EPIC's claims of urgency; and to make available for public inspection and copying its "records, reports, transcripts, minutes, appendixes, working papers, drafts, studies, agenda, or other documents." The commission, which at the time did not have a FOIA processing mechanism in place or other pending FOIA requests, argued that it was not an agency subject to FOIA.

The court's inquiry centered on whether the NSCAI is an "agency" under FOIA. Comparing the language establishing the NSCAI with FOIA's definition of "agency," the court held that the NSCAI is subject to FOIA. In his decision, District Judge Trevor McFadden noted that Congress "could have hardly been clearer." As a result, since that time, the commission has had to produce historical documents in response to FOIA requests.

FACA, by contrast, applies forward-looking requirements specifically to federal advisory committees. These mandates include requiring committees to open meetings to the public and announce them in the Federal Register, and to make reports, transcripts and other commission materials publicly available. The measures aim to inform the public about and invite public engagement with the committees that provide expertise to the executive branch. EPIC alleged that the NSCAI violated FACA by failing to hold open meetings and provide notice of them, and by failing to make records available to the public. EPIC sought mandamus relief pursuant to the alleged FACA violations.

In its June decision, the district court ruled that FACA applies to the NSCAI. The commission had filed a motion to dismiss the FACA claims, arguing that it could not be subject to both FOIA and FACA. Since the court had previously held the NSCAI to be an agency for purposes of FOIA, the commission reasoned that it could not simultaneously be an advisory committee under FACA. McFadden disagreed. Invoking the Roman god Janus's two faces, one forward-looking and the other backward-facing, he wrote, "[L]ike Janus, the Commission does indeed have two faces, and ... Congress obligated it to comply with FACA as well as FOIA." The court could not identify a conflict between the requirements of the two statutes, despite differences in their obligations and exceptions. Rather, it noted that if such conflicts arise, "it will be incumbent on the parties and the Court to resolve any difficulties." The court dismissed additional claims under the Administrative Procedure Act (APA) for lack of subject matter jurisdiction, as it determined that the commission is not an agency under the APA definition.

The court's decision turned on whether the NSCAI is an advisory committee subject to FACA. The court determined that the statutory text of the 2019 NDAA establishing the NSCAI "fit[s] the [FACA] definition of advisory committee like a glove." Furthermore, turning to the full text of the 2019 NDAA, the court noted that the law contains at least two instances in which it explicitly exempts a government body from FACA. The court read the 2019 NDAA as silent when FACA applies and explicit when FACA does not apply. Given Congress's silence on the applicability of FACA to the NSCAI in the 2019 NDAA, and again in the 2020 NDAA, the court reasoned that Congress intended the NSCAI to be subject to FACA.

In determining the NSCAI to be subject to FACA, in addition to FOIA, the court has compelled the commission to adopt a more transparent operating posture going forward. Since the December 2019 decision on FOIA, the NSCAI has produced a number of historical records in response to FOIA requests. The recent ruling on FACA grounds requires the NSCAI to hold open meetings, post notice of meetings in advance and make documents publicly available. As a result, the commission's process of compiling findings and developing recommendations for government action related to artificial intelligence and machine learning will likely become more accessible to the public.

The two court decisions come in time to have a noticeable impact on the remaining term of the temporary commission. While the NSCAI was previously due to disband later in 2020, Section 1735 of the NDAA for fiscal 2020 extended the commission's lifespan by one year, to October 1, 2021. Citing federal budgetary timelines and the pace of AI development, the commission released its first set of recommendations in March 2020 and expressed its intent to publish additional recommendations on a quarterly basis thereafter. The commission is due to submit its final report to Congress by March 1, 2021. As the NSCAI prepares to enter its final year of operations and develop its closing recommendations, the public will have a clearer window into the commission's work.


Artificial intelligence levels show AI is not created equal. Do you know what the vendor is selling? – Spend Matters

Just like there are eight levels to analytics, as mentioned in a recent Spend Matters PRO brief, artificial intelligence (AI) comes in various stages today, even though there is no such thing as true AI by any standard worth its technical weight.

But just because we don't yet have true AI doesn't mean today's AI can't help procurement improve its performance. We just need enough computational intelligence to allow software to do the tactical and non-value-added tasks that software should be able to perform with all of the modern computational power available to us. As long as the software can do the tasks as well as an average human expert the vast majority of the time (and kick up a request for help when it doesn't have enough information, or when the probability that it will outperform a human expert is lower than the expert's own likelihood of performing the task well), that's more than good enough.
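The delegation rule described above, let software handle a task only when its confidence clears the bar set by an average human expert and escalate otherwise, can be sketched in a few lines. This is purely illustrative: the function names, the toy invoice model and the 0.90 threshold are invented for the sketch, not taken from any vendor's product.

```python
# Minimal sketch of the human-in-the-loop delegation rule described above.
# The model, threshold, and task here are hypothetical illustrations.

HUMAN_EXPERT_ACCURACY = 0.90  # how often an average expert gets the task right

def route_task(task, model):
    """Automate the task only when the software is likely to match an
    average human expert; otherwise kick it up for human review."""
    prediction, confidence = model(task)
    if confidence >= HUMAN_EXPERT_ACCURACY:
        return ("automated", prediction)
    return ("needs_human_review", None)

# Toy model: accept an invoice total field when it parses cleanly as a number.
def toy_invoice_model(task):
    try:
        float(task)
        return ("valid", 0.95)    # clean numeric field: high confidence
    except ValueError:
        return ("unknown", 0.40)  # garbled field: low confidence, escalate

print(route_task("1024.50", toy_invoice_model))  # ('automated', 'valid')
print(route_task("10Z4.5O", toy_invoice_model))  # ('needs_human_review', None)
```

The point of the sketch is the routing rule itself, not the toy model: any real system would replace `toy_invoice_model` with a learned classifier that reports a calibrated confidence.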

The reality is, for some basic tactical tasks, there are plenty of software options today (e.g., intelligent invoice processing). And even for some highly specialized tasks that we thought could never be done by a computer, we have software that can do it better, like early cancerous growth detection in MRIs and X-rays.

That being said, we also have a lot of software on the market that claims to be artificial intelligence but that is not even remotely close to what AI is today, let alone what useful AI software should be. For software to be classified as AI today, it must be capable of artificial learning, evolving its models or code, and improving over time.

So, in this PRO article, we are going to define the levels of AI that do exist today, and that may exist tomorrow. This will allow you to identify what truth there is to the claims that a vendor is making and whether the software will actually be capable of doing what you expect it to.

Not counting true AI, there are five levels of AI that are available today or will likely be available tomorrow:

Lets take a look at each group.


Protecting inventions which use Machine Learning and Artificial Intelligence – Lexology


There has been a lot of talk recently about the DABUS family of patent applications where DABUS, an artificial intelligence (AI), was named as an inventor. This has prompted a lot of discussion around whether an inventor must be a human being and there is no doubt that this discussion will continue as AI finds its way into more and more aspects of our lives.

However, one of the other parts of the discussion around AI in patents is around the patentability of inventions which apply machine learning (ML) and AI based concepts to the solution of technical problems.

Why consider patent protection?

Patents protect technical innovations and technical solutions to problems. They can offer broad legal protection for the technical concept you develop, albeit in exchange for disclosure of the invention.

Here in the UK, a patent can give you the right to prevent others from exploiting your invention and can help you to mark out legal exclusivity around a patented product.

Can I not just keep the invention a secret?

It is an option to utilise the invention as a trade secret, but protecting a trade secret involves considerable effort to implement the technical and administrative environment which will enable the trade secret to stay a secret. This can include changing your physical workplace to confine certain environments where trade secret-protected inventions are being used. It can also include implementing technical measures to inhibit access to trade secrets by unauthorised individuals. Such technical measures are particularly important for AI and ML-focused inventions, as they are often embodied in computer program code which can simply be transferred from one computer to another.

What is perhaps more pertinent is that if your AI or ML-enabled concept is to be implemented in association with hardware which is to be sold publicly, then this will by definition negate the value of the concept as a trade secret as it will become publicly available. It may require decompilation or reverse engineering to access the code, but this does not mean that the code is secret.

There may be additional know-how associated with your invention which is worth protecting as a trade secret but as part of a suite of IP rights (including patents) which are focused on protecting your invention.

How much information does the patent application require?

All patent applications are drafted for the skilled person, who in this context would be somebody skilled in the techniques of ML and AI, although not necessarily an expert. That is to say, the application needs to contain enough information to enable such a person to put the invention into effect.

This should include technical information about features which provide an advantage over previous systems and clear identification of advantageous features and why they are advantageous. This will give your Patent Attorney the best possible chance of framing the invention in a way which convinces patent offices around the world to grant a patent.

It is also advisable to include disclosure of at least one set of training data and details of how the model has been trained.

In the context of AI and ML it is particularly important to draw attention to technically advantageous features as some patent offices will need a lot of convincing to grant patents for these inventions. It is particularly useful to draw attention to features which solve technical problems or are motivated by technical considerations rather than economic or commercial considerations.

The EPO has stressed that patents will be granted when ML- or AI-based inventions are limited to a specific technical application, or require a specific technical implementation, directed to a technical purpose. These advantages and details of implementation will enable a patent attorney skilled in drafting patent applications for ML/AI to present your invention in the best possible light from the perspective of the EPO or the UKIPO, as they will enable us to clearly set out how the invention delivers the technical application and solves the technical problem.

Our software patents team are specifically noted for their skill in drafting patent applications for computer-implemented inventions for the UKIPO and the EPO.

Although a lot of information is required, we do not necessarily need program code. It would help, however, to at least include a pseudocode description of the invention so that we can garner an understanding of how the invention works as a series of steps; this helps with the description.

Are AI and ML not just like software, i.e. unpatentable?

It is possible to patent software-based inventions but, like other inventions, the invention needs to solve a technical problem. This is the same with inventions which apply AI and ML.

AI and ML inventions are treated in Europe like other mathematical methods in that they are rejected as excluded from patentability if they do not solve a technical problem. It is best to illustrate this by example.

If your invention improves a technique which is used to analyse data, for example improving K-means clustering with no other benefit to a technical field, then you can expect to face considerable obstacles to obtaining a patent to protect your invention. However, if your invention applies K-means clustering to achieve a specific improvement to a specific technical system, then you are likely to face fewer obstacles to obtaining a patent for your invention.

That is to say, when considering whether you wish to pursue patent protection for the technology you have developed then focus on what the innovation achieves in a technical field.

What if the technique has been applied elsewhere? Can I still get a patent?

Referring back to our K-means clustering example, if you see that K-means clustering has been used in sensing of rain droplets on a car window to determine the appropriate setting for the windscreen wipers, then that does not necessarily mean that you cannot get a patent for K-means clustering applied to determining the likelihood of a denial of service attack on a server.

That is to say, if you are applying known technology to a new field and solving a technical problem in that field, there is an arguable case for a patentable invention.
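To make the hypothetical example above concrete, here is a minimal sketch of what "K-means clustering applied to determining the likelihood of a denial-of-service attack" could look like: a one-dimensional K-means with two clusters over request-rate samples, flagging the high-rate cluster. The data, thresholds and function names are invented for illustration and are not from the article or any real product.

```python
# Illustrative only: 1-D K-means (k=2) over requests-per-second samples,
# flagging the high-rate cluster as a possible denial-of-service pattern.

def kmeans_1d(values, iters=20):
    """Two-centroid K-means on a list of numbers; returns (low, high) centroids."""
    lo, hi = min(values), max(values)  # seed centroids at the extremes
    for _ in range(iters):
        low_cluster = [v for v in values if abs(v - lo) <= abs(v - hi)]
        high_cluster = [v for v in values if abs(v - lo) > abs(v - hi)]
        if not low_cluster or not high_cluster:
            break  # all samples collapsed into one cluster
        lo = sum(low_cluster) / len(low_cluster)
        hi = sum(high_cluster) / len(high_cluster)
    return lo, hi

def flag_possible_dos(rates):
    """Return the samples assigned to the high-rate cluster."""
    lo, hi = kmeans_1d(rates)
    return [r for r in rates if abs(r - hi) < abs(r - lo)]

normal = [120, 130, 125, 118, 122]  # requests/sec, ordinary traffic
spike = [4800, 5100, 4950]          # requests/sec, suspicious burst
print(flag_possible_dos(normal + spike))  # [4800, 5100, 4950]
```

A real detector would of course use richer features and validation; the sketch only shows how a well-known technique can be repurposed in a new technical field, which is the legal point being made.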

Are there differences between Europe, US and other jurisdictions?

The approach to these inventions across jurisdictions can be different and complete consistency is difficult to guarantee. However, in drafting your patent application we would seek to make the language as flexible as possible in order to admit differing interpretations of the law across jurisdictions and to give the prosecution of your patent applications in those jurisdictions the greatest possible chance of success.

What do I do next?

If you have developed technology which applies AI or ML, then consider whether you could achieve patent protection for that invention. Contact one of our software patent experts to discuss the invention and your options.

It is also useful to note that having a pending patent application can be a useful deterrent for competitors and the uncertainty created for third parties by the existence of the patent application can provide you with the space in the market to establish your exclusivity, develop your customer base and build your brand.


Artificial intelligence is on the rise – Independent Australia

New developments and opportunities are opening up in artificial intelligence, says Paul Budde.

I RECENTLY followed a "lunch box lecture" organised by the University of Sydney. In the talk, Professor Zdenka Kuncic explored the very topical issue of artificial intelligence.

The world is infatuated with artificial intelligence (AI), and understandably so, given its super-human ability to find patterns in big data, as we all notice when using Google, Facebook, Amazon, eBay and so on. But the so-called general intelligence that humans possess remains elusive for AI.

Interestingly, Professor Kuncic approached this topic from a physics perspective, viewing the brain's neural network as a physical hardware system rather than as algorithm-based software, like the AI used in social media.

Her approach reveals clues that suggest the underlying nature of intelligence is physical.

Basically, what this means is that a software-based system will require ongoing input from software specialists to make updates based on new developments. Her approach, however, is to look at a physical system based on nanotechnology and use these networks as self-learning systems, where human intervention is no longer required.

Imagine the implications of the communications technologies that are on the horizon, where basically billions of sensors and devices will be connected to networks.

The data from these devices need to be processed in real-time and dynamic decisions will have to be made without human intervention. The driverless car is, of course, a classic example of such an application.

The technology needed to make such a system work will have to be based on edge technology in the device out there in the field. It is not going to work in any scaled-up situation if the data from these devices will first have to be sent to the cloud for processing.

Nano networks are a possible solution for such situations. A nanonetwork or nanoscale network is a set of interconnected nanomachines (devices a few hundred nanometers or a few micrometres at most in size), which at the moment can perform only very simple tasks such as computing, data storing, sensing and actuation.

However, Professor Kuncic expects that new developments will see expanded capabilities of single nanomachines, both in terms of complexity and range of operation, by allowing them to coordinate, share and fuse information.

Professor Kuncic concentrates, in her work, on electromagnetics for communication at the nanoscale.

This is commonly defined as the 'transmission and reception of electromagnetic radiation from components based on novel nanomaterials'.

Professor Kuncic mentioned this technology was still in its infancy. She was very upbeat about the future, based on the results of recent research and international collaboration. Advancements in carbon and molecular electronics have opened the door to a new generation of electronic nanoscale components such as nanobatteries, nanoscale energy harvesting systems, nano-memories, logical circuitry in the nanoscale and even nano-antennas.

From a communication perspective, the unique properties observed in nanomaterials will determine the specific bandwidths for the emission of electromagnetic radiation, the time lag of the emission, and the magnitude of the emitted power for a given input energy.

The researchers are looking at the output of these nanonetworks rather than the input. The process is analogue rather than digital. In other words, the potential output provides a range of possible choices, rather than one (digital) outcome.

The trick is to understand what choices are made in a nanonetwork and why.

There are two main alternatives for electromagnetic communication at the nanoscale: the one pursued by Professor Kuncic, and another based on molecular communication.

Nanotechnology could have an enormous impact on, for example, the future of 5G. If nanotechnology can be included in the various Internet of Things (IoT) sensors and devices, then this will open up an enormous number of new applications.

It has been experimentally demonstrated that it is possible to receive and demodulate an electromagnetic wave by means of a nano radio.

In addition, graphene-based nano-antennas have been analysed as potential electromagnetic radiators in the terahertz band.

Once these technologies are further developed and commercialised, we can see a revolution in edge-computing.

Paul Budde is an Independent Australia columnist and managing director of Paul Budde Consulting, an independent telecommunications research and consultancy organisation. You can follow Paul on Twitter @PaulBudde.



How Coronavirus and Protests Broke Artificial Intelligence, And Why It's A Good Thing – Observer

Until February 2020, Amazon thought that the algorithms that controlled everything from their shelf space to their promoted products were practically unbreakable. For years they had used simple and effective artificial intelligence (AI) to predict buying patterns, and planned their stock levels, marketing, and much more based on a simple question: who usually buys what?

Yet as COVID-19 swept the globe they found that the technology that they relied on was much more shakable than they had thought. As sales of hand sanitizer, face masks, and toilet paper soared, sites such as Amazon found that their automated systems were rendered almost useless as AI models were thrown into utter disarray.

Elsewhere, the use of AI in everything from journalism to policing has been called into question. As long-overdue action on racial inequalities in the US has been demanded in recent weeks, companies have been challenged for using technology that regularly displays sometimes catastrophic ethnic biases.

Microsoft was recently held to account after the AI algorithms that it used on its MSN news website confused mixed-race members of girlband Little Mix, and many companies have now suspended the sale of facial recognition technologies to law enforcement agencies after it was revealed that they are significantly less effective at identifying images of minority individuals, leading to potentially inaccurate leads being pursued by police.

"The past month has brought many issues of racial and economic injustice into sharp relief," says Rediet Abebe, an incoming assistant professor of computer science at the University of California, Berkeley. "AI researchers are grappling with what our role should be in dismantling systemic racism, economic oppression, and other forms of injustice and discrimination. This has been an opportunity to reflect more deeply on our research practices, on whose problems we deem to be important, whom we aim to serve, whom we center, and how we conduct our research."


From the COVID-19 pandemic to the Black Lives Matter protests, 2020 has been a year characterized by global unpredictability and social upheaval. Technology has been a crucial medium of effecting change and keeping people safe, from test and track apps to the widespread use of social media to spread the word about protests and petitions. But amidst this, machine learning AI has sometimes failed to meet its remit, lagging behind rapid changes in social behavior and falling short on the very thing that it is supposed to do best: gauging the data fed into it and making smart choices.

The problem often lies not with the technology itself, but in a lack of data used to build algorithms, meaning that they fail to reflect the breadth of our society and the unpredictable nature of events and human behavior.

"Most of the challenges to AI that have been identified by the pandemic relate to the substantial changes in behavior of people, and therefore in the accuracy of AI models of human behavior," says Douglas Fisher, an associate professor of computer science at Vanderbilt University. "Right now, AI and machine learning systems are stovepiped, so that although a current machine learning system can make accurate predictions about behaviors under the conditions under which it learned them, the system has no broader knowledge."

The last few months have highlighted the need for greater nuance in AI; in short, we need technology that can be more human. But in a society increasingly experimenting with using AI to carry out such crucial roles as identifying criminal suspects or managing food supply chains, how can we ensure that machine learning models are sufficiently knowledgeable?

"Most challenges related to machine learning over the past months result from change in data being fed into algorithms," explains Kasia Borowska, Managing Director of AI consultancy Brainpool.ai. "What we see a lot of these days is companies building algorithms that just about do the job. They are not robust, not scalable, and prone to bias; this has often been due to negligence or trying to cut costs. Businesses have clear objectives, and these are often to do with saving money or simply automating manual processes, and often the ethical side, removing biases or being prepared for change, isn't seen as the primary objective."

Kasia believes that both biases in AI algorithms and an inability to adapt to change and crisis stem from the same problem and present an opportunity to build better technology in the future. She argues that by investing in building better algorithms, issues such as bias and an inability to predict user behavior in times of crisis can be eliminated.

Although companies might previously have been loath to invest time and money into building datasets that did much more than the minimum that they needed to operate, she hopes that the combination of COVID and an increased awareness of machine learning biases might be the push that they need.

"I think that a lot of businesses that have seen their machine learning struggle will now think twice before they try and deploy a solution that isn't robust or hasn't been tested enough," she says. "Hopefully the failure of some AI systems will motivate data scientists as well as corporations to invest time and resources in the background work ahead of jumping into the development of AI solutions. We will see more effort being put into ensuring that AI products are robust and bias-free."

The failures of AI have been undeniably problematic, but perhaps they present an opportunity to build a smarter future. After all, in recent months we have also seen the potential of AI, with new outbreak risk software and deep learning models that help the medical community to predict drugs and treatments and develop prototype vaccines. These strides in progress demonstrate the power of combining smart technology with human intervention, and show that with the right data AI has the power to enact massive positive change.

This year has revealed the full scope of AI, laying bare the challenges that developers face alongside the potential for tremendous benefits. Building datasets that encompass the broadest scope of human experience may be challenging, but it will also make machine learning more equitable, more useful, and much more powerful. It's an opportunity that those in the field should be keen to corner.


The 4th World Intelligence Congress Closed Online With Great Achievements – PRNewswire

At the closing ceremony, InferVision, Danish Carenborg Eco-Industrial Park, and Sino-Singapore Eco-City shared their development experiences, and Wei Ya, a popular Taobao livestream host, explained in detail the scientific and technological elements in "influencer marketing". Yu Lin, general manager of the strategic development department of Alibaba Group in Tianjin, introduced three modes of online poverty alleviation. Liu Gang, deputy dean of the Chinese Institute of New Generation Artificial Intelligence Development Strategies, released the Report on the Development of China's New Generation Artificial Intelligence Technology Industry (2020): The Development of China's New Generation Artificial Intelligence Technology Industry under New Challenges and Opportunities, and Yin Jihui, director of the Tianjin Bureau of Industry and Information Technology, released the Annual Report on the Development of Tianjin Intelligent Technology Industry (2020), which pointed out the direction for the development trend of the artificial intelligence technology industry.

With the help of intelligent technology, this congress held six online events. The unique experience made people feel the charm of intelligent technology and deeply perceive Tianjin's past, present, and future in the field of intelligent technology. According to statistics, the congress released 26 achievements including reports, policies, and products to the world. Among them, the national ministries and commissions issued 12 achievements, including the Talent Development Report of the Artificial Intelligence Industry issued by the Ministry of Industry and Information Technology, smart travel products and solutions, and the White Paper on Digital Health issued by the National Health Commission. Tianjin released two achievements, namely, China's New Generation Artificial Intelligence Technology Industry Development Report (2020) and the Annual Report of Tianjin Intelligent Technology Industry Development (2020). Enterprises and districts in Tianjin released 12 achievements, including the Galaxy Kylin desktop operating system V10 and advanced server operating system V10 released by KylinSoft, support policies of Tianjin Binhai New District, and the Kunpeng Ecological Innovation Center of Huawei Company.

On the closing of the 4th World Intelligence Congress, Tianjin Municipal People's Government formally extends an invitation to industry leaders, world talents, and friends from all over the world. Welcome to attend the 5th World Intelligence Congress!

Contact: Cui Kejia, Tel: +86-400-019-0516, +86-15120084132, E-mail: [emailprotected]

SOURCE The 4th World Intelligence Congress

https://www.wicongress.org/en

More here:
The 4th World Intelligence Congress Closed Online With Great Achievements - PRNewswire

Here Is How The United States Should Regulate Artificial Intelligence – Forbes

The U.S. Congress should create a federal agency for artificial intelligence. Photographer: Rich ... Clement/Bloomberg.

In 1906, in response to shocking reports about the disgusting conditions in U.S. meat-packing facilities, Congress created the Food and Drug Administration (FDA) to ensure safe and sanitary food production.

In 1934, in the wake of the worst stock market crash in U.S. history, Congress created the Securities and Exchange Commission (SEC) to regulate capital markets.

In 1970, as the nation became increasingly alarmed about the deterioration of the natural environment, Congress created the Environmental Protection Agency (EPA) to ensure cleaner skies and waters.

When an entire field begins to create a broad set of challenges for the public, demanding thoughtful regulation, a proven governmental approach is to create a federal agency focused specifically on engaging with and managing that field.

The time has come to create a federal agency for artificial intelligence.

Across the AI community, there is growing consensus that regulatory action of some sort is essential as AI's impact spreads. From deepfakes to facial recognition, from autonomous vehicles to algorithmic bias, AI presents a large and growing number of issues that the private sector alone cannot resolve.

In the words of Alphabet CEO Sundar Pichai: "There is no question in my mind that artificial intelligence needs to be regulated. It is too important not to."

Yet there have been precious few concrete proposals as to what this should look like.

The best way to flexibly, thoroughly, and knowledgeably regulate artificial intelligence is through the creation of a dedicated federal agency.

Though many Americans do not realize it, the primary manner in which the federal government enacts public policy today is not Congress passing a law, nor the President issuing an executive order, nor a judge making a ruling in a court case. Instead, it is federal agencies like the FDA, SEC or EPA implementing rules and regulations.

Though barely contemplated by the framers of the U.S. Constitution, federal agencies, collectively referred to as the administrative state, have in recent decades come to assume a dominant role in the day-to-day functioning of the U.S. government.

There are good reasons for this. Federal agencies are staffed by thousands of policymakers and subject matter experts who focus full-time on the fields they are tasked with regulating. Agencies can move more quickly, get deeper into the weeds, and adjust their policies more flexibly than can Congress.

Imagine if, every time a pharmaceutical company sought government approval for a new drug, or every time a given air pollutant's parts-per-million concentration guidelines needed to be revised, Congress had to familiarize itself with all of the relevant technical details and then pass a law on the topic. Government would grind to a halt.

Like pharmaceutical drugs and environmental science, artificial intelligence is a deeply technical and rapidly evolving field. It demands a specialized, technocratic, detail-oriented regulatory approach. Congress cannot and should not be expected to respond directly with legislation whenever government action in AI is called for. The best way to ensure thoughtful, well-crafted AI policy is through the creation of a federal agency for AI.

How would such an agency work?

One important principle is that the agency should craft its rules on a narrow, sector-by-sector basis rather than as one-size-fits-all mandates. As R. David Edelman aptly argued, AI is "a tool with various applications, not a thing in itself."

Rather than issuing overbroad regulations about, say, explainability or data privacy to which any application of AI must adhere, policymakers should identify concrete AI use cases that merit novel regulatory action and develop domain-specific rules to address them.

Stanford University's One Hundred Year Study on AI made this point well: "Attempts to regulate AI in general would be misguided, since there is no clear definition of AI (it isn't any one thing), and the risks and considerations are very different in different domains. Instead, policymakers should recognize that to varying degrees and over time, various industries will need distinct, appropriate, regulations that touch on software built using AI or incorporating AI in some way."

This new federal agency would need to work closely with other agencies, as there will be extensive overlap between its mandate and the work of other regulatory bodies.

For instance, in crafting policies about the admissible uses of machine learning algorithms in criminal sentencing and parole decisions, the agency would collaborate closely with the Department of Justice, lending its subject matter expertise to ensure that the regulations are realistically designed.

Similarly, the agency might work in tandem with the Treasury Department and the CFPB to create rules about the proper use of AI in banks loan underwriting decisions. Such cross-agency collaboration is the norm in Washington today.

There are numerous additional areas in which smart, well-designed AI policy is already needed: autonomous weapons, facial recognition, social media content curation, and adversarial attacks on neural networks, to name just a few.

As AI technology continues its breathtaking advance in the years ahead, it will create innumerable benefits and opportunities for us all. It will also generate a host of new challenges for society, many of which we cannot yet even imagine. A federal agency dedicated to artificial intelligence will best enable the U.S. to develop effective public policy for AI, protecting the public while positioning the nation to capitalize on what will be one of the most important forces of the twenty-first century.

See more here:
Here Is How The United States Should Regulate Artificial Intelligence - Forbes

Artificial Intelligence Can’t Deal With Chaos, But Teaching It Physics Could Help – ScienceAlert

While artificial intelligence systems continue to make huge strides forward, they're still not particularly good at dealing with chaos or unpredictability. Now researchers think they have found a way to fix this, by teaching AI about physics.

To be more specific, teaching them about the Hamiltonian function, which gives the AI information about the entirety of a dynamic system: all the energy contained within it, both kinetic and potential.

Neural networks, designed to loosely mimic the human brain as a complex, carefully weighted type of AI, then have a 'bigger picture' view of what's happening, and that could open up possibilities for getting AI to tackle harder and harder problems.

"The Hamiltonian is really the special sauce that gives neural networks the ability to learn order and chaos," says physicist John Lindner, from North Carolina State University.

"With the Hamiltonian, the neural network understands underlying dynamics in a way that a conventional network cannot. This is a first step toward physics-savvy neural networks that could help us solve hard problems."

The researchers compare the introduction of the Hamiltonian function to a swinging pendulum: it's giving AI information about how fast the pendulum is swinging and its path of travel, rather than just showing AI a snapshot of the pendulum at one point in time.

If neural networks understand the Hamiltonian flow (so, in this analogy, where the pendulum is, where it might be going, and the energy it has), then they are better able to manage the introduction of chaos into order, the new study found.

Not only that, but they can also be built to be more efficient: better able to forecast dynamic, unpredictable outcomes without huge numbers of extra neural nodes. It helps AI to quickly get a more complete understanding of how the world actually works.
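The idea of following a Hamiltonian flow rather than a snapshot can be sketched numerically. This is a minimal illustration, not the study's model: a hand-written pendulum Hamiltonian stands in for the energy function a Hamiltonian neural network would learn, and finite differences stand in for the network's automatic differentiation. Hamilton's equations then turn the energy function into dynamics:

```python
import math

# Hamiltonian (total energy) of an ideal pendulum with unit mass,
# length, and gravity: kinetic energy plus gravitational potential.
def hamiltonian(q, p):
    return p**2 / 2.0 + (1.0 - math.cos(q))

# Hamilton's equations define the flow: dq/dt = dH/dp, dp/dt = -dH/dq.
# Central finite differences approximate the partial derivatives here,
# playing the role of autodiff on a learned energy function.
def flow(q, p, eps=1e-6):
    dH_dp = (hamiltonian(q, p + eps) - hamiltonian(q, p - eps)) / (2 * eps)
    dH_dq = (hamiltonian(q + eps, p) - hamiltonian(q - eps, p)) / (2 * eps)
    return dH_dp, -dH_dq

# Following the flow from a single snapshot (angle 1 rad, at rest)
# traces out the pendulum's whole trajectory, not just one instant.
q, p = 1.0, 0.0
dt, e0 = 0.001, hamiltonian(1.0, 0.0)
for _ in range(10000):
    dq, dp = flow(q, p)
    q, p = q + dt * dq, p + dt * dp

# Energy drifts only slightly, because the flow conserves the Hamiltonian.
print(abs(hamiltonian(q, p) - e0))
```

Because the dynamics come from a single conserved energy function, the sketch respects the physics by construction, which is the 'bigger picture' view the article describes.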

A representation of the Hamiltonian flow, with rainbow colours coding a fourth dimension. (North Carolina State University)

To test their newly improved AI neural network, the researchers put it up against a commonly used benchmark called the Hénon-Heiles model, initially created to model the movement of a star around a sun.

The Hamiltonian neural network successfully passed the test, correctly predicting the dynamics of the system in states of order and of chaos.
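For reference, the Hénon-Heiles benchmark has a simple closed form. The following sketch (an assumption for illustration, not the study's code) defines its Hamiltonian: two coupled oscillators whose orbits are regular at low energy and become chaotic as the energy rises toward the escape value of 1/6:

```python
# Hénon-Heiles Hamiltonian: harmonic terms plus a cubic coupling.
# Orbits are ordered at low energy and increasingly chaotic as the
# energy approaches the escape energy 1/6.
def henon_heiles(x, y, px, py):
    kinetic = 0.5 * (px**2 + py**2)
    potential = 0.5 * (x**2 + y**2) + x**2 * y - y**3 / 3.0
    return kinetic + potential

print(henon_heiles(0.0, 0.0, 0.0, 0.0))  # energy is zero at the origin
```

Passing the benchmark means predicting trajectories correctly in both the ordered and the chaotic energy regimes of this system.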

This improved AI could be used in all kinds of areas, from diagnosing medical conditions to piloting autonomous drones.

We've already seen AI simulate space, diagnose medical problems, upgrade movies and develop new drugs, and the technology is, relatively speaking, just getting started; there's lots more on the way. These new findings should help with that.

"If chaos is a nonlinear 'super power', enabling deterministic dynamics to be practically unpredictable, then the Hamiltonian is a neural network 'secret sauce', a special ingredient that enables learning and forecasting order and chaos," write the researchers in their published paper.

The research has been published in Physical Review E.

Read more:
Artificial Intelligence Can't Deal With Chaos, But Teaching It Physics Could Help - ScienceAlert

Artificial intelligence helping NASA design the new Artemis moon suit – SYFY WIRE

Last fall, NASA unveiled the new suits that Artemis astronauts will wear when they take humanity's first steps on the lunar surface since way back in 1972. The look of the A7LB pressure suit variants that accompanied those earlier astronauts to the Moon, and later to Skylab, has since gone on to signify for many the definitive, iconic symbol of humanity's most ambitiously realized space dreams.

With Artemis' 2024 launch target approaching, NASA's original Moon suit could soon be supplanted in the minds of a new generation of space dreamers by the xEMU, the first ground-up suit made for exploring the lunar landscape since Apollo 17's Eugene Cernan and Harrison Schmitt took humanity's last Moon walk (to date). Unlike those suits, the xEMU's design is getting an assist from a source of "brain" power that simply wasn't available back then: artificial intelligence.

Specifically, AI is reportedly crunching numbers behind the scenes to help engineer support components for the new, more versatile life support system that'll be equipped on the xEMU (Extravehicular Mobility Unit) suit. WIRED reports that NASA is using AI to assist the new suit's life support system in carrying out its most vital functions while streamlining its weight, component size, and tolerances for load-bearing pressure, temperature, and the other physical demands that a trip to the Moon (and back) imposes.

Recruiting AI isn't just about speed, though speed is definitely one of the perks for meeting NASA's ambitious 2024 timeline and all that lies beyond. "The machine's iterative process is 100 or 1,000 times more than we could do on our own, and it comes up with a solution that is ideally optimized within our constraints," Jesse Craft, a senior design engineer at a Texas-based contractor working on the upgraded version of the xEMU suit, told WIRED.

But in some instances, AI even raises the bar for quality, as Craft also noted. "We're using AI to inspire design," he explained. "We have biases for right angles, flat surfaces, and round dimensions, things you'd expect from human design. But AI challenges your biases and allows you to see new solutions you didn't see before."

So far, NASA is relying on AI only to design physical brackets and supports for the life support system itself; in other words, not the kind of stuff that might spell life or death in the event of failure. But that approach is already paying off by cutting mass without sacrificing strength, yielding component weight reductions of up to 50 percent, according to the report.

Even at 1/6 the gravity that astronauts experience back on Earth, that kind of small weight savings here and there can add up to make a big difference on the Moon. And even a slight slimming down can't hurt the xEMU's chances at perhaps becoming a new standard-bearer in space fashion, as Artemis captivates a new generation with its sights set on the stars.

View original post here:
Artificial intelligence helping NASA design the new Artemis moon suit - SYFY WIRE

Global Artificial Intelligence (AI) in Education Market Growth (Status and Outlook) 2020-2026 – Cole of Duty

A research report on the Global Artificial Intelligence (AI) in Education Market delivers a complete analysis of market size, trends, share, and growth prospects, along with market volume and an expert assessment. The report evaluates the market growth rate and industry value in light of driving factors, market dynamics, and other associated data, and integrates current trends, the latest industry news, and emerging opportunities. It also compiles key information on the market's competitive landscape, including the regions where the global Artificial Intelligence (AI) in Education industry has gained a foothold. The study delivers a broad assessment of the market, prepared with detailed, verifiable projections and historical data on market size.

Request a sample of this report @ https://www.orbisresearch.com/contacts/request-sample/4499322

Moreover, the report includes a full market analysis and supplier landscape, drawing on PESTEL and SWOT analyses of the leading service providers. The projections offered in this report have been derived using proven research assumptions and methodologies. The study thus offers a collection of information and analysis for each facet of the Artificial Intelligence (AI) in Education industry, including technology, regional markets, applications, and types. The report was compiled through primary research interviews, surveys, and observations, as well as secondary research. It also delivers illustrations and presentations, including graphs, pie charts, and tables showing the precise share of the different strategies implemented by the major providers, alongside a separate analysis of the foremost market trends, regulations and mandates, and micro- and macroeconomic indicators.

Top Players:

Google, IBM, Pearson, Microsoft, AWS, Nuance, Cognizant, Metacog, Quantum Adaptive Learning, Querium, Third Space Learning, Aleks, Blackboard, BridgeU, Carnegie Learning, Century, Cognii, DreamBox Learning, Elemental Path, Fishtree, Jellynote, Jenzabar, Knewton, Luilishuo

Browse the complete report @ https://www.orbisresearch.com/reports/index/global-artificial-intelligence-ai-in-education-market-size-status-and-forecast-2020-2026

By doing so, the study forecasts the attractiveness of each major segment over the prediction period. The global Artificial Intelligence (AI) in Education market study features a complete quantitative and qualitative evaluation based on data collected from market experts and industry participants across the market value chain. The report also covers market conditions around the globe, such as pricing structure, product profit, demand, supply, production, capacity, and market growth structure. In addition, the study provides important data on investment returns, SWOT analysis, and investment feasibility.

Types:

Machine Learning and Deep Learning; Natural Language Processing

Applications:

Virtual Facilitators and Learning Environments; Intelligent Tutoring Systems; Content Delivery Systems; Fraud and Risk Management

In addition, a number of business tactics help Artificial Intelligence (AI) in Education market players compete while recognizing significant growth prospects. The report includes market segmentation information developed through primary and secondary research techniques, along with an analysis of current trends that are expected to become some of the strongest market forces in the coming years. It also provides an extensive analysis of the restraints hampering market growth, together with a comprehensive description of each aspect and its influence on the Artificial Intelligence (AI) in Education market.

Make an enquiry before buying this report @ https://www.orbisresearch.com/contacts/enquiry-before-buying/4499322

About Us:

Orbis Research (orbisresearch.com) is a single point of aid for all your market research requirements. We have a vast database of reports from the leading publishers and authors across the globe. We specialize in delivering customized reports as per the requirements of our clients. We have complete information about our publishers and hence are sure about the accuracy of the industries and verticals of their specialization. This helps our clients to map their needs, and we produce the perfect required market research study for our clients.

Contact Us:

Read more:
Global Artificial Intelligence (AI) in Education Market Growth (Status and Outlook) 2020-2026 - Cole of Duty