Excessive top-down federal funding and governance of scientific and technology research will be increasingly incompatible with a future of lightly regulated science and technology specifically, and with limited government generally.
Neither political party takes that view, though. In a rule-of-experts, send-the-tax-dollars-home environment, America risks becoming vulnerable to industrial policy and market-socialist mechanisms as frontier technologies become more complex.
Addressing infrastructure and other broad initiatives a year ago in his February 5, 2019, State of the Union address, for example, President Donald Trump called for legislation including "investments in the cutting edge industries of the future" and proclaimed, "This is not an option, this is a necessity."
Along with such spending's thick strings attached and the regulatory effects that propagate from it, it is not proper for the sciences, nor for their practical applications, to proceed walled off from one another in the arbitrary legislative appropriations and regulatory environments that prevail in Washington.
Artificial intelligence in particular serves as a case study, or warning. Emblematic was Executive Order 13859 of February 11, 2019, on "Maintaining American Leadership in Artificial Intelligence" and its establishment of the American AI Initiative, which were followed by the March 19, 2019, launch of the federal hub AI.gov (now whitehouse.gov/ai).
Executive orders are not law, but they can influence policy, and this one promotes "sustained investment in AI R&D" in collaboration with industry, academia, and others.
E.O. 13859 also calls for federal collection of data, among other centrally coordinated moves. Actions "shall be implemented by agencies that conduct foundational AI R&D, develop and deploy applications of AI technologies, provide educational grants, and regulate and provide guidance for applications of AI technologies."
Whew. This federalization is concerning on its own, but it occurs in an environment in which much AI research at the federal level happens under the auspices of the Department of Defense.
Bet you didn't know that the Pentagon, on the very day after Trump's 2019 AI executive order, released its own AI strategy, subtitled "Harnessing AI to Advance Our Security and Prosperity," describing use, plans, and ethical standards in deployment. The DoD has since made new promises to adopt rules for how it develops and uses AI.
And where, indeed, is the one spot a definition of AI is codified in federal statute? In the John S. McCain National Defense Authorization Act for Fiscal Year 2019.
When it comes to robotics and the military, the concern is that Isaac Asimov's famous Laws of Robotics (devised to forbid harm to humans) are programmed out, not in. This is part of what makes the fusion of government and private AI deployment problematic. Where a tech titan's one-time motto had been "Don't Be Evil," a fitting admonition now for the technology sector as a whole is:

"Don't Be Government."
The most recent development is the White House Office of Management and Budget's 2020 "Guidance for Regulation of Artificial Intelligence Applications," directed at the heads of federal executive branch agencies. Fulfilling and building upon Trump's E.O. 13859, the January 2020 document at first blush strikes the right tone: engaging the public, exercising forbearance, limiting regulatory overreach, eliminating duplication and redundancy across agencies, improving access to government data and models, recognizing that a one-size regulatory shoe does not fit all, using performance-based objectives rather than rigid rules, and, in particular, avoiding over-precaution. For example, the guidance on p. 2 instructs:
"Agencies must avoid a precautionary approach that holds AI systems to such an impossibly high standard that society cannot enjoy their benefits."
The OMB's Request for Comments on the Guidance at one point seems to adopt the same reasoned laissez-faire stance: OMB guidance on these matters "seeks to support the U.S. approach to free-market capitalism, federalism, and good regulatory practices (GRPs)."
Michael Kratsios, Chief Technology Officer of the United States, called the Guidance the "first-of-its-kind set of regulatory principles to govern AI development in the private sector" and said it would address "the challenging technical and ethical questions that AI can create."
But make no mistake: the new AI guidance constitutes a set of regulatory principles, especially as they will be interpreted by less market-oriented administrations that later assume the helm.
The Guidance states:
"When considering regulations or policies related to AI applications, agencies should continue to promote advancements in technology and innovation, while protecting American technology, economic and national security, privacy, civil liberties, and other American values, including the principles of freedom, human rights, and the rule of law."
The guidance mentions "American values" five times, without recognizing the degree to which the top-down, administrative-state form of governance that now prevails, as distinct from Article I lawmaking, is incompatible with those values.
Nor is there sufficient appreciation of the extent to which the regulatory bureaucracy can hold conflicting visions of the rule of law. Today's administrative state has its own set of value pursuits and visions of what counts as a cost and what counts as a benefit, and of the sources of each. As such, the administration's AI Guidance contains elements that creative agencies seeking to expand can exploit once the ostensibly less-regulatory Trump administration has left the scene.
The AI Guidance correctly states that "[t]he deployment of AI holds the promise to improve safety, fairness, welfare, transparency, and other social goals," and that America's "maintenance of its status as a global leader in AI development is vital to preserving our economic and national security."
But on the other hand, the Guidance (p. 3) says AI applications "could pose risks to privacy, individual rights, autonomy, and civil liberties that must be carefully assessed and appropriately addressed."
Well, that's interesting. Governments, as post-9/11 and more recent surveillance history shows, not the institution of orderly, competitive free enterprise, are the primary threat to these very values. Opening the door too far to agencies misidentifies the sources of values problems and lays the groundwork for counterproductive and harmful regulation.
Unfortunately, agencies wanting to be granted the legitimacy necessary to throw their weight around on the new and exciting AI playground have been needlessly invited to do so by the Guidance.
For example, in evaluating the benefits and costs of regulatory alternatives, agencies are to (p. 12) evaluate impacts on "equity, human dignity, fairness, potential distributive impacts, privacy and civil liberties, and personal freedom."
These bureau-speak formulations and directives plainly favor agencies' governmental proclivities more than they defer to the competitive process and to non-governmental resolutions of the difficult issues that will inevitably arise from the proliferation of AI.
Unless externally restrained, a regulatory bureaucracy's inclination is to answer the question "Is there call for regulation?" in the affirmative. The Guidance invites agencies (p. 11) to consider whether a change in regulatory policy is needed "due to the adoption of AI applications in an already regulated industry, or due to the development of substantially new industries facilitated by AI."
Why would the Trump administration open this Pandora's box? As a wholly blank canvas, this approach to AI policy will prove an irresistible unleashing of the bureaus. Trump's regulatory-reduction task forces notwithstanding, there exists no permanent "Office of No" anchored at any agency to vigorously resist top-down discretion and reject the heavy Washington influence agencies are invited to proffer.
The unfortunate iron law that industry generally prefers regulation that advances its interests and walls out competition will prove true of AI regulation specifically: "Companies cannot just build new technology and let market forces decide how it will be used," said one prominent CEO in January 2020.
Companies may dislike the kind of regulation that makes them ask "Mother, may I?" before they take a risky step. But on the other hand, established players, especially given the head start conferred by government contracting and the military presence in AI, will appreciate federal approaches that just so happen to forestall those nettlesome upstarts with a different idea, even when those new ideas advance safety or accountability.
Here are a few additional concerns with the federal AI Guidance at this stage.
Too frequently there is misdiagnosis and denial regarding the root source, government itself, of frontier technologies' risks. The OMB guidance (p. 6) calls on agencies to encourage "the consideration of safety and security issues throughout the AI design, development, deployment, and operation process." But the government is more prone to undermine security-enhancing encryption used in private-sector applications, for example. And, especially given the heavy collaborative role it seeks, the government is prone to indemnify winner companies when things go wrong, thereby mangling risk-management mechanisms like insurance and containment in AI ecosystems.
Since the administration's AI proclamations belong in the regulatory rather than the deregulatory camp, it is good that strong AI (the potentially sentient, self-improving version) is ostensibly not addressed by, and thus exempted from, the Guidance. Fortunately, the Guidance acknowledges (p. 11) that "current technical challenges in creating interpretable AI can make it difficult for agencies to ensure a level of transparency necessary for humans to understand the decision-making of AI applications." Indeed, agencies cannot do this; no one can; it is the very nature of black-box machine learning. But it is a sure bet that agencies would seize this authority anyway, as some of the points above make apparent.
The AI guidance appears in a policy climate in which Republicans and Democrats alike seek major government funding of science generally, an environment replete with proposals that have marinated in regulatory, administrative-state frameworks, up to and including a manufacturing czar, and in quasi-military terminology such that energy security gets equated with national security. AI is vulnerable to all of this. Internationally, governments are moving toward regulation of AI, and the U.S., by these new actions, has demonstrated a readiness to do so as well.
This state of affairs is not particularly the fault of well-meaning policymakers within the White House; it results from the fact that there exists no audience or constituency for keeping government's hands off complex, competitive free enterprise generally. The disruptions purportedly to be caused by AI create irresistible magnets for the opportunistic and the cynical to pursue regulation.
Unfortunately, in part due to Trump's order and the related and derivative guidance yet to come, we can predict that future administrations and legislators will expand government alliances with a subset of private-sector winners, perhaps even a sort of cartelization. The legitimization of this concept at the top by an ostensibly deregulation-oriented president will make it harder for our descendants to achieve regulatory liberalization and to maintain any separation of technology and state in future complex undertakings, many of which will be AI-driven.
In a similar vein, and illustrative of the concerns raised here, the establishment of a Space Force, enacted in the National Defense Authorization Act of 2020, presents the same lock-in of top-down federal managerialism of private-sector undertakings, given that commercial space activities have hardly taken root beyond NASA contractors and partners. Making the (AI-driven) force a "sixth branch of the armed forces" will inevitably alter freedoms and private commercial space activities, heavily influencing technology investment and evolution in a sector that barely exists yet. The Space Force move had already been preceded by a presidential directive on space traffic management, complete with tracking, cataloging, and data sharing with government. It is worth remembering that most of the debris in space used to justify calls for regulation is there thanks to the NASA legacy, not to private entrepreneurs, who would have needed to ponder property rights in sub-orbital and orbital space in a different way. Even though normalizing commercial space activities for a diverse portfolio of actors and approaches is not compatible with heavy regulation, the role of competitive discipline may yet be improperly overlooked or squelched.
So the AI Guidance is by no means making an appearance in a policy vacuum, which is not altogether encouraging. In a similar vein, an October 2019 executive order established a new President's Council of Advisors on Science and Technology to strengthen "the ties that connect government, industry, and academia." This project entails "collaborative partnerships across the American science and technology enterprise, which includes an unmatched constellation of public and private educational institutions, research laboratories, corporations, and foundations, [by which] the United States can usher extraordinary new technologies into homes, hospitals, and highways across the world." Even this appeared in the wake of E.O. 13885 on "Establishing the National Quantum Initiative Advisory Committee," aimed at implementing the 2018 National Quantum Initiative Act in its purpose of supporting "research, development, demonstration, and application of quantum information science and technology."
While big science need not entail big government, the alignment of forces implies that it likely will. There is, however, no commandment to regulate frontier sectors via the same administrative-state model that has dominated policy in recent decades, and policymakers are at a fork in the road that will affect the evolution of business and enterprise. On matters of safety, economics, and jobs, the government need not steer while the market merely rows.
(This article is based on my comments to OMB on its Request for Comments on the "Guidance for Regulation of Artificial Intelligence Applications.")
How The White House Guidance For Regulation Of Artificial Intelligence Invites Overregulation - Forbes