Artificial Intelligence and Democratic Values: Next Steps for the United States – Council on Foreign Relations

More than sixty-five years after a research group at Dartmouth College launched work on a new field called Artificial Intelligence, the United States still lacks a national strategy on artificial intelligence (AI) policy. The growing urgency of this endeavor is made clear by the rapid progress of both U.S. allies and adversaries.

Europe is moving forward with two initiatives of far-reaching consequence. The EU Artificial Intelligence Act will establish a comprehensive, risk-based approach to the regulation of AI when it is adopted, expected in 2023. Many anticipate that the EU AI Act will extend the Brussels Effect across the AI sector, as the earlier European data privacy law, the General Data Protection Regulation, did for much of the tech industry.


The Council of Europe is developing the first international AI convention, aiming to protect fundamental rights, democratic institutions, and the rule of law. Like the Council of Europe's Convention on Cybercrime (the Budapest Convention) and its Privacy Convention, the AI Convention will be open for ratification by member and non-member states. The Budapest Convention remains influential: Canada, Japan, the United States, and several South American countries have signed on.


China is also moving forward with an aggressive regulatory strategy to complement its goal of being the world leader in AI by 2030. China recently matched the GDPR with the Personal Information Protection Law and issued a new regulation on recommendation algorithms with provisions similar to the EU's Digital Services Act. The Chinese regulatory model will likely influence countries in Africa and Asia that are part of the Belt and Road Initiative, and give rise to a possible Beijing Effect.

The United States has done an admirable job maintaining a coherent policy in the Executive Branch over the Obama, Trump, and Biden administrations, highlighting key values and promoting an aggressive research agenda. In the 2019 Executive Order on Maintaining American Leadership in AI, the United States said it would foster public trust and confidence in AI technologies and protect civil liberties, privacy, and American values in their application. The order on Promoting the Use of AI in the Federal Government established principles for the development and use of AI that are consistent with American values and beneficial to the public.

The United States also played a leading role at the Organization for Economic Cooperation and Development (OECD) with the development and adoption of the OECD AI Principles, the first global framework for AI policy. Those principles, which emphasize human-centric and trustworthy AI, were later adopted by the G-20 nations, and are now endorsed by more than 50 countries, including Russia and China.

But the United States was out of the loop when the UN Educational, Scientific, and Cultural Organization (UNESCO) adopted the Recommendation on the Ethics of Artificial Intelligence, now the most comprehensive framework for global AI policy, which addresses emerging issues such as AI and climate and gender equity.

"Democratic values" is a key theme as the United States seeks to draw a sharp distinction between the deployment of technologies that advance open, pluralist societies and those that centralize control and enable surveillance. As Secretary Blinken explained last year, "More than anything else, our task is to put forth and carry out a compelling vision for how to use technology in a way that serves our people, protects our interests and upholds our democratic values." But absent a legislative agenda or a clear statement of principles, neither allies nor adversaries are clear about U.S. AI policy objectives.


The United States has run into similar problems with the Trade and Technology Council (TTC), an effort to align U.S. and EU tech policy around shared values. The inaugural Joint Statement laid a foundation for EU-U.S. cooperation on AI in the fall of 2021, but the war in Ukraine has upended transatlantic priorities, and it remains unclear whether the TTC will regain focus on a common AI policy.

A similar challenge confronts EU and U.S. leaders on new rules for transatlantic data flows. After two earlier decisions from Europe's high court finding that the United States lacked adequate privacy protection for the transfer of personal data, lawmakers on both sides of the Atlantic worried that data flows could be suspended, as the Irish privacy commissioner has recently threatened. President Biden and President von der Leyen announced an agreement in principle in March, but several months later there is still no public text for review.

To restore leadership in the AI policy domain, the United States should move forward with the policy initiative launched last year by the Office of Science and Technology Policy (OSTP). The science office outlined many of the risks of AI, including embedded bias and widespread surveillance, and called for an AI Bill of Rights. OSTP said, "Our country should clarify the rights and freedoms we expect data-driven technologies to respect." The White House supported the initiative and encouraged Americans to "Join the Effort to Create a Bill of Rights for an Automated Society."

We strongly support this initiative. After an extensive review of the AI policies and practices in 50 countries, we identified the AI Bill of Rights as possibly the most significant AI policy initiative in the United States. But early progress has stalled. The delay has real consequences for Americans who are subject to automated decision-making in their everyday lives, with little transparency or accountability. Foreign governments are also looking for U.S. leadership in this rapidly evolving field. Progress on the AI Bill of Rights initiative will help build trust and restore U.S. leadership.

Last year, the Office of Science and Technology Policy stated clearly, "Powerful technologies should be required to respect our democratic values and abide by the central tenet that everyone should be treated fairly." That should be the cornerstone of a U.S. national AI policy, and that policy will advance international norms for the governance of AI.

Marc Rotenberg is President of the Center for AI and Digital Policy (CAIDP), author of the forthcoming Law of Artificial Intelligence (West Academic 2023), and a Life Member of CFR. Merve Hickok is the Research Director of CAIDP and founder of AIethicist.org.

Continue reading here:
Artificial Intelligence and Democratic Values: Next Steps for the United States - Council on Foreign Relations

SCOPA: Intersection of artificial intelligence and telemedicine – Optometry Times

Optometry Times' Alex Delaney-Gesing speaks with Leo P. Semes, OD, FAAO, professor emeritus of optometry at the University of Alabama at Birmingham, on the highlights and key takeaways from his discussion titled "Artificial intelligence and telemedicine," presented during the 115th annual South Carolina Optometric Physicians Association (SCOPA) meeting in Hilton Head, South Carolina.

Editor's note: this transcript has been lightly edited for clarity.

Could you share a highlights version of your presentation?

Artificial Intelligence (AI) is a topic that I've been following for probably 5 or so years. And as I dug into the history, it's quite interesting; it really began back in the 1930s. So it has quite a long history. It's based on algorithms, whether that algorithm is something as simple as how you do addition of big numbers or long division.

The algorithm for looking at, for example, a patient with diabetic retinopathy, is specifying the severity of that, and then using that as a determination for treatment. And then if the patient is treated, following that patient to see if there is stagnation, stability of the diabetic retinopathy, or regression, which is what we're hoping for.

And some of the AI paradigms now demonstrate that there is the possibility of regression of diabetic retinopathy, from a physical standpoint, of how the retina looks, and also in terms of visual performance. And that's, to me, probably the most exciting aspect of what we can do with AI; to say, "Okay, this is a patient who's got a certain level of diabetic retinopathy; the patient qualifies for treatment." Then 3 months following treatment, yes, the retina looks better, but they have improvement in visual performance.

So visual acuity, quantitatively, numbers look better. And as a consequence of that, patients could enjoy a better lifestyle.

Why would you say this is such an important topic of discussion?

Well, one of the reasons is that, aside from age-related macular degeneration (AMD), one of the major causes of vision loss, especially among the working-age population, is secondary to diabetic retinopathy (DR). And it's estimated that a segment of the population, perhaps as high as 25%, has pre-diabetes. So patients presenting for a vision exam, or vision irregularities, or even a periodic examination, might be discovered with certain changes that relate to DR. And then a diagnosis is made and the patient can be managed systemically, as well as ocularly.

What are the key takeaways you'd like attendees to learn from this?

Probably the biggest thing is going to be the new staging paradigms for DR and how those relate to when a patient is going to need treatment. And if the patient is not at high risk and not a candidate for treatment, then emphasizing to the patient the importance of maintaining systemic management strategies and regular ophthalmic exams.

Read the original:
SCOPA: Intersection of artificial intelligence and telemedicine - Optometry Times

Human-level AI is a giant risk. Why are we entrusting its development to tech CEOs? – Salon

Technology companies are racing to develop human-level artificial intelligence, whose development poses one of the greatest risks to humanity. Last week, John Carmack, a software engineer and video game developer, announced that he has raised $20 million to start Keen Technologies, a company devoted to building fully human-level AI. He is not the only one. There are currently 72 projects around the world focused on developing a human-level AI, also known as an AGI, meaning an AI that can do any cognitive task at least as well as humans can.

Many have raised concerns about the effects that even today's use of artificial intelligence, which is far from human-level, already has on our society. The rise of populism and the Capitol attack in the United States, the Tigray War in Ethiopia, increased violence against Kashmiri Muslims in India, and a genocide directed toward the Rohingya in Myanmar have all been linked to the use of artificial intelligence algorithms in social media. Social media sites employing these technologies showed a proclivity for surfacing hateful content to users because their algorithms identified such posts as popular and thus profitable for social media companies; this, in turn, caused egregious harm. This shows that even for current AI, deep concern for safety and ethics is crucial.

But the plan of cutting-edge tech entrepreneurs is now to build far more powerful human-level AI, which will have much larger effects on society. These effects could, in theory, be very positive: automating intelligence could, for example, release us from work that we prefer not to do. But the negative effects could be as large or even larger.

Oxford academic Toby Ord spent close to a decade trying to quantify the risks of human extinction due to various causes, and summarized the results in a book aptly titled "The Precipice." Supervolcanoes, asteroids, and other natural causes, according to this rigorous academic work, have only a slight chance of leading to complete human extinction. Nuclear war, pandemics, and climate change rank somewhat higher. But what trumps this apocalyptic ranking exercise? You guessed it: human-level artificial intelligence.

And it's not just Ord who believes that full human-level AI, as opposed to today's relatively impotent vanilla version, could have extremely dire consequences. The late Stephen Hawking, tech CEOs such as Elon Musk and Bill Gates, and AI academics such as the University of California, Berkeley's Stuart Russell have all warned publicly that human-level AI could lead to nothing short of disaster, especially if developed without extreme caution and deep consideration of safety and ethics.

And who's now going to build this extremely dangerous technology? People like John Carmack, a proponent of "hacker ethics" who previously programmed kids' video games like "Commander Keen." Is Keen Technologies now going to build human-level AI with the same regard for safety? Asked on Twitter about the company's mission, Carmack replied "AGI or bust, by way of Mad Science!"


Carmack's lack of concern for this kind of risk is nothing new. Before starting Keen Technologies, Carmack worked side by side with Mark Zuckerberg at Facebook, the company responsible for most of the harmful impacts of AI described earlier. Facebook applied technology to society without any regard for the consequences, fully in line with their motto "Move fast and break things." But if we are going to build human-level AI that way, the thing to be broken might be humanity.

In the interview with computer scientist Lex Fridman where Carmack announced his new AGI company, Carmack shows outright disdain for anything that restricts the unfettered development of technology and the maximization of profit. According to Carmack, "Most people with a vision are slightly less effective." Regarding the "AI ethics things," he says: "I really stay away from any of those discussions or even really thinking about it." People like Carmack and Zuckerberg might be good programmers, but they are simply not wired to take the big picture into account.

If they can't, we must. A democratic society should not let tech CEOs determine the future of humanity without regard for ethics or safety. Therefore, we all have to inform ourselves about human-level AI, especially non-technologists. We have to reach a consensus on whether human-level AI indeed poses an existential threat to humanity, as most AI safety and existential risk academics say. And we have to find out what to do about it, where some form of regulation seems inevitable. The fact that we don't know yet what manner of regulation would effectively reduce risk should not be a reason for regulators to not address the issue, but rather a reason to develop effective regulation with the highest priority. Nonprofits and academics can help in this process. Not doing anything, and thus letting people like Carmack and Zuckerberg determine the future for all of us, could very well lead to disaster.


Read more:
Human-level AI is a giant risk. Why are we entrusting its development to tech CEOs? - Salon

Headroom Solves Virtual Meeting Fatigue with Artificial Intelligence that Eliminates Wasted Time and Reveals Essential Highlights – Business Wire

SAN FRANCISCO--(BUSINESS WIRE)--Headroom, a meeting platform leveraging artificial intelligence to improve communications and productivity, today announced a $9 million investment led by Equal Opportunity Ventures with participation from Gradient Ventures, LDV Capital, AME Cloud Ventures and Morado Ventures. The capital brings total funding to date to $14 million and will be used to expand Headroom's team, product development and mobile offering. The company also recently added new Shareable Automatic Summaries to its suite of tools for remote and hybrid meetings, furthering its mission to support balanced, entertaining, productive and memorable meetings.

Virtual meetings have become the de facto method for gathering, connection and collaboration. According to Fortune Business Insights, the meeting collaboration market is expected to exceed $41 billion by 2029. Gartner predicts that by 2025, 75% of conversations at work will be recorded and analyzed, enabling the discovery of added organizational value or risk. Yet despite the increase in meetings, productivity and engagement rates are down. Even before the start of the pandemic, a Harvard Business Review survey revealed 65% of senior managers felt meetings kept them from completing their own work and 64% said meetings come at the expense of deep thinking. Smarter meetings may be the biggest opportunity for improved work productivity and satisfaction.

"The more meetings held, the more time wasted, with too many people spending time in redundant meetings. Headroom is leveraging AI to help companies do more with less, enabling individual workers to be more productive, choose which meetings to attend and which to watch later, or just quickly get the key pieces of information discussed," said Julian Green, CEO and Co-Founder of Headroom. "Particularly in this environment, where for startups every dollar and every meeting minute counts, those that can move faster and stay better connected with people wherever they are, in real time and asynchronously, will win."

Headroom is self-learning; its relevance and impact on productivity improve with use. Headroom data shows 90% of a typical meeting lacks useful information. To maximize the 10% of meeting content that is helpful, the company developed Shareable Automatic Summaries, which auto-generate highlight reels that provide key moments, shared notes and action items, and enable easy sharing with others. Additional platform functionality that maximizes synchronous and asynchronous communication includes:

"Hybrid work is here to stay and virtual meetings are the norm, but they allow for a wide margin of distraction," said Roland Fryer, Founder and Managing Partner at Equal Opportunity Ventures and newly appointed Headroom Board Member. Headroom at its core is an engagement and productivity platform - streamlining collaboration and information sharing, without a heavy lift. It saves time in scheduling, reporting and collaborating."

"Simply put: meetings should be better. Unlike any other video communication and collaboration platform, Headroom is stateful. Meeting information is generated during live conversations, and can be augmented and accessed forever after. Participants are free to act naturally and engage with the information without being restricted by the actual meeting slot," said Andrew Rabinovich, CTO and Co-Founder of Headroom. "Those who didn't attend the meeting itself have all the details readily available to them. With Headroom, this is automated, and highlights go to non-attendee stakeholders who can replay key decisions. Our customers are also using it as an information resource they can search for key information later."

Headroom was co-founded by Julian Green and Andrew Rabinovich in 2020. The company's executive team experience spans founding and leadership roles at GoogleX, Houzz, Magic Leap, Patreon and Square. Headroom's platform currently serves more than 5,000 customers spanning technology and online education startups, as well as marketing, design, consulting and recruiting agencies. It is free with no usage caps or storage limits, and is available on Google Chrome with no download or app required. Users have full control over sharing of meeting information. Get started at https://www.goheadroom.com/.

ABOUT HEADROOM

Headroom, founded in 2020, is improving communication in meetings by augmenting meeting intelligence. Automated virtual meetings in Headroom allow attendees to act naturally, replay key decisions, build smart summaries and search everything later. Headroom is brought to you by an experienced team that has created and managed AI products used by billions of people at tech startups and large companies including Google and Magic Leap. The founders helped create the world's leading Computer Vision, Augmented Reality and Virtual Reality products, started unicorns, and have won a Webby. To get started with Headroom visit https://www.goheadroom.com/.

Visit link:
Headroom Solves Virtual Meeting Fatigue with Artificial Intelligence that Eliminates Wasted Time and Reveals Essential Highlights - Business Wire

Before Python was imposed, these were the languages with which artificial intelligence was developed – Gearrice

Today, learning artificial intelligence has almost become synonymous with learning to program in Python. This programming language, created by Guido van Rossum in 1991, is by far the most used today in artificial intelligence projects, especially in the field of machine learning.

It helps that, in addition to Python's popularity as a general-purpose programming language (and in related fields, such as data analysis), all the major AI libraries (Keras, TensorFlow, SciPy, Pandas, scikit-learn, etc.) are designed to work with it.

Nevertheless, artificial intelligence is much older than Python, and other languages stood out in this field for decades before its arrival. Let's take a look at what they were:

The Information Processing Language (IPL) is a low-level language (almost as low-level as assembly) that was created in 1956 to show that theorems from Principia Mathematica, by the mathematicians and philosophers Bertrand Russell and Alfred North Whitehead, could be proved by computation.

IPL introduced programming features that remain fully relevant today, such as symbols, recursion, and the use of lists. The latter, a data type flexible enough for a list to contain another list as an element (which in turn could contain another list, and so on), was fundamental in developing the first AI programs, such as Logic Theorist (1956) and the NSS chess program (1958).
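To make the idea concrete, here is a minimal sketch, in Python rather than IPL (Python being the language this article centers on), of the kind of self-nesting list and recursive traversal that IPL pioneered. The function name and example data are illustrative, not taken from any historical program:

```python
def count_atoms(lst):
    """Recursively count the non-list elements ("atoms") in a nested list."""
    total = 0
    for item in lst:
        if isinstance(item, list):
            total += count_atoms(item)  # a list may contain another list...
        else:
            total += 1                  # ...or a plain atom
    return total

print(count_atoms([1, [2, [3, 4]], 5]))  # -> 5
```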

Despite its importance in the history of AI, several factors (the first being the complexity of its syntax) meant it was quickly replaced by the next language on this list.

LISP is the oldest programming language dedicated to artificial intelligence that is still in use; it is also the second high-level programming language in history: it was created in 1958 (one year after FORTRAN and one year before COBOL) by John McCarthy, who two years earlier had coined the term "artificial intelligence."

Shortly before, McCarthy had developed a language called FLPL (FORTRAN List Processing Language), an extension of FORTRAN, and decided to combine in a single language the high-level nature of FLPL, the innovations introduced by IPL, and the formal system known as the lambda calculus. The result was named LISP (for LISt Processor).

At the same time that he was developing FLPL, McCarthy was also formulating so-called alpha-beta pruning, a search technique that reduces the number of nodes evaluated in a game tree. And, to implement it, he introduced a construct that is now fundamental in programming: the if-then-else structure.
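Since the article describes alpha-beta pruning only in passing, here is a minimal sketch of the technique, again in Python for readability rather than in LISP or FLPL; the nested-list tree encoding and the function name are our own illustration, not McCarthy's original formulation:

```python
import math

def alphabeta(node, alpha, beta, maximizing):
    """Return the minimax value of `node`, skipping (pruning) branches
    that provably cannot change the final decision."""
    if not isinstance(node, list):      # leaves are plain numeric scores
        return node
    if maximizing:
        value = -math.inf
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:           # beta cutoff: the minimizer avoids this branch
                break
        return value
    else:
        value = math.inf
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True))
            beta = min(beta, value)
            if beta <= alpha:           # alpha cutoff
                break
        return value

# A toy game tree: inner lists are positions, numbers are leaf evaluations.
tree = [[3, 5], [6, [9, 1]], [1, 2]]
print(alphabeta(tree, -math.inf, math.inf, True))  # -> 6
```

The cutoffs are what distinguish alpha-beta from plain minimax: in this toy tree, whole leaves (the 1 inside [9, 1] and the 2 inside [1, 2]) are never evaluated.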

Programmers quickly fell in love with the freedom the language offered them, its flexibility, and its usefulness as a prototyping tool. Thus, for the next quarter of a century, LISP was the reference language in the field of AI. Over time, LISP fragmented into a whole series of dialects still in use in various areas of computing, such as Common Lisp, Emacs Lisp, Clojure, Scheme, and Racket.

The language PROLOG (from the French "programmation en logique"), which we have already covered on other occasions, was born at a difficult time for the development of artificial intelligence: at the gates of the first AI winter, when the initial furor over the technology's applications collided with the skepticism caused by its lack of progress, generating public and private disinvestment in its development.

Specifically, it was created in 1972 by the French computer science professor Alain Colmerauer, with the aim of bringing Horn clauses, a restricted form of logical formula, into software development. Although it never became as widely used globally as LISP, it did become the main AI development language on its home continent (as well as in Japan).

Like LISP, it is based on the declarative programming paradigm, so its syntax is very different from that of typical imperative programming languages like Python, Java, or C++. The ease with which PROLOG handles recursion and pattern matching led IBM to implement PROLOG in its IBM Watson system for natural language processing tasks.

[Figure: PROLOG code example in the SWI-Prolog IDE.]

An earlier version of this article was published in 2021.

The rest is here:
Before Python was imposed, these were the languages with which artificial intelligence was developed - Gearrice

Researchers Using Artificial Intelligence to Assist With Early Detection of Autism Spectrum Disorder – University of Arkansas Newswire

[Photo: Khoa Luu and Han-Seok Seo. Photo by University Relations.]

Could artificial intelligence be used to assist with the early detection of autism spectrum disorder? That's a question researchers at the University of Arkansas are trying to answer. But they're taking an unusual tack.

Han-Seok Seo, an associate professor with a joint appointment in food science and the UA System Division of Agriculture, and Khoa Luu, an assistant professor in computer science and computer engineering, will identify sensory cues from various foods in both neurotypical children and those known to be on the spectrum. Machine learning technology will then be used to analyze biometric data and behavioral responses to those smells and tastes as a way of detecting indicators of autism.

There are a number of behaviors associated with ASD, including difficulties with communication, social interaction or repetitive behaviors. People with ASD are also known to exhibit some abnormal eating behaviors, such as avoidance of some if not many foods, specific mealtime requirements and non-social eating. Food avoidance is particularly concerning, because it can lead to poor nutrition, including vitamin and mineral deficiencies. With that in mind, the duo intend to identify sensory cues from food items that trigger atypical perceptions or behaviors during ingestion. For instance, odors like peppermint, lemons and cloves are known to evoke stronger reactions from those with ASD than those without, possibly triggering increased levels of anger, surprise or disgust.

Seo is an expert in the areas of sensory science, behavioral neuroscience, biometric data and eating behavior. He is organizing and leading this project, including screening and identifying specific sensory cues that can differentiate autistic children from non-autistic children with respect to perception and behavior. Luu is an expert in artificial intelligence with specialties in biometric signal processing, machine learning, deep learning and computer vision. He will develop machine learning algorithms for detecting ASD in children based on unique patterns of perception and behavior in response to specific test samples.

The duo are in the second year of a three-year, $150,000 grant from the Arkansas Biosciences Institute.

Their ultimate goal is to create an algorithm that performs as well as or better than traditional diagnostic methods in the early detection of autism in children; those methods require evaluations by trained healthcare and psychological professionals, longer assessment durations, caregiver-submitted questionnaires and additional medical costs. Ideally, they will be able to validate a lower-cost mechanism to assist with the diagnosis of autism. While their system would not likely be the final word in a diagnosis, it could provide parents with an initial screening tool, ideally ruling out children who are not candidates for ASD while ensuring the most likely candidates pursue a more comprehensive screening process.

Seo said that he became interested in the possibility of using multi-sensory processing to evaluate ASD when two things happened: he began working with a graduate student, Asmita Singh, who had a background in working with autistic students, and his daughter was born. Like many first-time parents, Seo paid close attention to his newborn baby, anxious that she be healthy. When he noticed she wouldn't make eye contact, he did what most nervous parents do: turned to the internet for an explanation. He learned that avoidance of eye contact was a known characteristic of ASD.

While his child did not end up having ASD, his curiosity was piqued, particularly about the role sensitivities to smell and taste play in ASD. Further conversations with Singh led him to believe fellow anxious parents might benefit from an early detection tool, perhaps inexpensively alleviating concerns at the outset. Later conversations with Luu led the pair to believe that if machine learning, developed by his graduate student Xuan-Bac Nguyen, could be used to identify normal reactions to food, it could be taught to recognize atypical responses as well.

Seo is seeking volunteers 5-14 years old to participate in the study. Both neurotypical children and children already diagnosed with ASD are needed. Participants receive a $150 eGift card and are encouraged to contact Seo at hanseok@uark.edu.

About the University of Arkansas: As Arkansas' flagship institution, the UofA provides an internationally competitive education in more than 200 academic programs. Founded in 1871, the UofA contributes more than $2.2 billion to Arkansas' economy through the teaching of new knowledge and skills, entrepreneurship and job development, and discovery through research and creative activity, while also providing training for professional disciplines. The Carnegie Foundation classifies the UofA among the few U.S. colleges and universities with the highest level of research activity. U.S. News & World Report ranks the UofA among the top public universities in the nation. See how the UofA works to build a better world at Arkansas Research News.

More here:
Researchers Using Artificial Intelligence to Assist With Early Detection of Autism Spectrum Disorder - University of Arkansas Newswire

Chips-Plus Artificial Intelligence In The CHIPS Act Of 2022 – New Technology – United States – Mondaq

26 August 2022

Akin Gump Strauss Hauer & Feld LLP


On August 9, 2022, President Biden signed the CHIPS Act of 2022 (the "Act"), legislation to fund domestic semiconductor manufacturing and boost federal scientific research and development (see our previous alert for additional background). As part of its science-backed provisions, the Act includes many of the U.S. Innovation and Competition Act's (USICA) original priorities, such as promoting standards and research and development in the field of artificial intelligence (AI) and supporting existing AI initiatives.

The Act directs the National Institute of Standards and Technology (NIST) Director to continue supporting the development of AI and data science and to carry out the National AI Initiative Act of 2020 (see our previous alert for additional background), which created a coordinated program across the federal government to accelerate AI research and application to support economic prosperity and national security and to advance AI leadership in the United States. The Director will further the goals of the National AI Initiative Act of 2020 by:

Furthermore, the Act provides that the Director may establish testbeds, including in virtual environments, in collaboration with other federal agencies, the private sector and colleges and universities, to support the development of robust and trustworthy AI and machine learning systems.

A new National Science Foundation (NSF) Directorate for Technology, Innovation and Partnerships (the "Directorate") is established under the Act to address societal, national and geostrategic challenges for the betterment of all Americans through research and development, technology development and related solutions. Over the next five years, the new Directorate will receive $20 billion in funding. Moreover, the Directorate will focus on 10 key technology focus areas, including AI, machine learning, autonomy and related advances, robotics, automation, advanced manufacturing and quantum computing, among other areas.

Within the Department of Energy (DOE), the Act authorizes $11.2 billion for research, development and demonstration activities and to address energy-related supply chain activities in the ten key technology focus areas prioritized by the new NSF Directorate. Further, the Act authorizes $200 million for the DOE's Office of Environmental Management to conduct research, development and demonstration activities, including in the fields of AI and information technology.

The Act directs the NSF Director to submit to the relevant House and Senate congressional committees a report outlining the need, feasibility and plans for implementing a program for recruiting and training the next generation of AI professionals. The report will evaluate the feasibility of establishing a federal AI scholarship-for-service program to recruit and train the next generation of AI professionals.

The Akin Gump cross-practice AI team continues to actively monitor forthcoming congressional and administrative initiatives related to AI.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.


Go here to read the rest:
Chips-Plus Artificial Intelligence In The CHIPS Act Of 2022 - New Technology - United States - Mondaq

Capitol Records forced to drop its artificial-intelligence-created rapper after just one week following gross stereotypes backlash – Fortune

Capitol Records has severed ties with an artificial-intelligence-powered rapper days after the release of his first single, amid intense backlash accusing the artist of perpetuating racist stereotypes.

Artist FN Meka became the world's first augmented-reality music artist to be signed to a major record label earlier this month, releasing his first single, "Florida Water," on August 12. The single featured Fortnite gamer Clix and Atlanta rapper Gunna.

Meka already has over 500,000 monthly listeners on Spotify and over 10 million followers on TikTok, where his posts allow fans a peek into his virtual world, which includes huge Bugatti jets, Maybach helicopters, and a machine that turns ice into iced-out watches.

However, backlash quickly rose up on social media, with users pointing out their discomfort with how Meka is portrayed, claiming the creation was equivalent to "digital blackface" and that his content on Instagram and TikTok trivialized incarceration and police brutality.

One Instagram post showed the rapper being beaten by a police officer in a jail cell because he "won't snitch."

On Tuesday, activist nonprofit Industry Blackout wrote an open letter to Capitol summarizing the issues brought to light.

"It is a direct insult to the Black community and our culture. An amalgamation of gross stereotypes, appropriative mannerisms that derive from Black artists, complete with slurs infused in lyrics," the statement said. "We find fault in the lack of awareness of how offensive this caricature is."

Industry Blackout called for Capitol to cut ties with the artist and donate any associated funds to charity or to other Black artists under the label.

Capitol quickly responded in a statement shared online by New York Times journalist Joe Coscarelli, confirming that it had dropped the rapper with immediate effect.

Meka's debut single "Florida Water" has also been removed from all streaming platforms.

"We offer our deepest apologies to the Black community for our insensitivity in signing this project without asking enough questions about equity and the creative process behind it. We thank those who have reached out to us with constructive feedback in the past couple of days; your input was invaluable as we came to the decision to end our association with the project," the statement read.

Meka is partially powered by A.I. and was co-created by Anthony Martini and Brandon Le of the company Factory New. While the voice is based on a real human, the rest is all down to artificial intelligence.


See more here:
Capitol Records forced to drop its artificial-intelligence-created rapper after just one week following gross stereotypes backlash - Fortune

Get Chris Pratt in the driver's seat or this thing's done – Marketplace

Algorithms play a huge role in the content people watch. Netflix, for example, has said that approximately 80% of subscribers trust the platform's recommendations. But as artificial intelligence technology advances, it may play an increasingly important role in what films and TV shows are made.

Bloomberg columnist Trung Phan recently wrote about AI's potential in evaluating film and television projects' commercial viability. The following is an edited transcript of Phan's conversation with Marketplace host Kai Ryssdal about what he learned by having his own script analyzed.

Kai Ryssdal: Tell me about this screenplay you wrote called "The Lose."

Trung Phan: OK. So, I write now publicly on the internet quite a bit, including a column at Bloomberg. But 10 years ago, I was living in Ho Chi Minh City [Vietnam], and I had dreams of being a screenwriter. And I managed to put a comedy script together and sold it to Fox. The log line for that film was "The Fugitive meets Harold and Kumar, set in Southeast Asia." (Laughter) It was just ahead of its time; they weren't ready to make it. TL;DR: it didn't get made, and now we're about a decade later.

Ryssdal: All right. So how did it come to pass that you wound up writing a column about it?

Phan: So, separately, I wrote about the story of writing and selling the script ages ago. And the CEO of an artificial intelligence company called Corto AI happened to read my newsletter, and he's like, "Hey, Trung, I read this article that you wrote about this old script of yours. And it just so happens that my company has technology that uses artificial intelligence to scan screenplays." And the way they describe it is they look for the "narrative DNA" of a screenplay and they can basically tell you why a film could or could not succeed. I already knew that my film couldn't, but I wanted to find out.

Ryssdal: Yeah, being a glutton for punishment. So they run your script through the algorithm. How do they know what they are looking for?

Phan: So the CEO told me that they have a database of about 700,000 scripts. And I guess in the AI industry, in machine learning, a lot of what you do is you kind of tag certain items. So, as an example, in script writing you can usually tell when a screenplay transitions from the first to the second to the third act. So basically, they have this giant catalog, and they wanted to run my script against their giant catalog.

Ryssdal: So they come back with some report, and they tell you what?

Phan: So the report comes back. I'm not going to bury the lede here. They said, "Your film is not commercially viable." So they come back with something I already knew. Having said that, though, they correctly identified the genre of the film. One of the top comparisons that they pulled up was "The Hangover [Part] II," which was set in Bangkok. But they said two things specifically about my script that made it not super marketable. They have these two scores that they calculate. One is called "interestingness," and another is called "uniqueness." What interestingness does is it looks at the range of characters in a script. Obviously, most films will have the protagonist and the villain. But then other films, really good films, will have a lot of good secondary and tertiary characters. Apparently, my script didn't have those. Whereas, if you take a movie like "The Godfather," you probably have half a dozen or 10 characters that you see how they progress. But to my credit, that's a three-hour movie, and I'm not Francis Ford Coppola.

Ryssdal: And it's worth pointing out here, actually (and this is my favorite part of it), that they also recommended that if this film did get made, if the studio put Chris Pratt in it, it might work.

Phan: Yeah, it basically chose Chris Pratt as the silver bullet for this film. Like, we looked through the entire 700,000 film and TV [show] database and, based on the interestingness and the lack of uniqueness, the other score, your only chance now is, you know, you gotta get Chris Pratt in the driver's seat or this thing's done.

Ryssdal: If Chris Pratt is a Marketplace listener, maybe we can hook you up. But wait a minute. You mentioned "The Godfather," right? And so here's where I want to go bigger picture on Hollywood and AI: "The Godfather" almost didn't get made like 10 different times, right? Oh, and so now we're trying to put AI into this unbelievably subjective thing, what makes a good movie. And I just wonder what you think of that. Set aside your own bad experiences with your film, but come on, man, how do they know?

Phan: I am 100% on the same page with you on this. The one thing I will say is that AI is advancing just so incredibly fast. I'm hedging a little bit, but will I be completely out of a job as a writer in 10 years? Maybe, so there's that.


More:
Get Chris Pratt in the driver's seat or this thing's done - Marketplace

Developing artificial intelligence to support robotic autonomous systems in the battlespace | BCS – BCS

Rounding off the event was an insightful Q&A session where the audience was invited to interrogate our presenters. The following is a summary of the discussions.

How does AI apply in highly regulated areas?

If a use case exists for the data, you should use it, even if the platform or system is highly regulated.

There is much hype about the applications for AI, and rightly a lot of excitement for what the future holds. However, just because we want AI to be a success doesn't mean we should be afraid to say "no". At this point in time, we need to take each AI use case proposal and approach it with the right mindset: what is the outcome you're looking to achieve?

Are we in an arms race with China/Russia?

Technology doesn't always give you the advantage. War is very complex, and different countries approach it with a different mindset. For example, in the West the primary unit of focus is the individual. Whereas in the East, the primary unit is the State, which is why they will sacrifice vast numbers of people to protect the mother country.

In many ways it comes down to ethics. In the UK we have what's referred to as "The Daily Mail Test": essentially, if something goes wrong, will it be splashed across tomorrow's headlines? It's the media and the general public that set the ethical bar, and then the Government's aversion to collateral damage that will determine how AI is accepted in defence. And ultimately, AI will always be judged to a higher standard than humans are; it's part of the challenge our sector faces.

How far is fiction from the current state of the science?

Today's AI isn't advanced enough to allow us to delegate too much responsibility to the technology. The applications are too narrow, which makes the technologies prone to adversarial attacks, where bad actors attempt to 'trick' the AI, for example by painting a tank to look like a tree so it's not detected. Additionally, poisoning attacks, where corrupted or mislabeled data is injected into a system's training set, can affect the accuracy of the outcome.
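As a rough, self-contained illustration of the poisoning idea (our own toy example in Python, not drawn from the lecture), the following sketch trains a deliberately simple nearest-centroid classifier twice, once on clean data and once on data where mislabeled points have been injected into one class, and reports how the accuracy degrades. All names, distributions, and numbers are invented for the demonstration:

```python
import random

def centroid(points):
    """Mean of a list of 2-D points."""
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def train_nearest_centroid(train):
    """Toy classifier: one centroid per label; predict the nearest one."""
    cents = {label: centroid(pts) for label, pts in train.items()}
    def predict(p):
        return min(cents, key=lambda c: (p[0] - cents[c][0]) ** 2 + (p[1] - cents[c][1]) ** 2)
    return predict

random.seed(0)
# Two well-separated clusters standing in for "tank" and "tree" signatures.
clean = {
    "tank": [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(50)],
    "tree": [(random.gauss(6, 1), random.gauss(6, 1)) for _ in range(50)],
}
test = [((random.gauss(0, 1), random.gauss(0, 1)), "tank") for _ in range(200)]

# Poisoning: mislabeled tank-like points injected into the "tree" training set
# drag the "tree" centroid toward the "tank" cluster.
poison = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(300)]
poisoned = {"tank": clean["tank"], "tree": clean["tree"] + poison}

for name, data in [("clean", clean), ("poisoned", poisoned)]:
    predict = train_nearest_centroid(data)
    acc = sum(predict(p) == label for p, label in test) / len(test)
    print(f"{name} training data: {acc:.0%} of tank test points classified correctly")
```

The point is not the particular classifier but the failure mode: nothing about the model changed, yet quietly corrupted training data shifted its decision boundary and degraded its accuracy.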

Follow a design-led process, and you build resilience into the development so you reach a point where you know the output can be trusted, because you have implemented specific controls along the way.

What education do we need around AI?

One of the most dangerous ideas we need to get under control is the perception that AI technology is like magic and can solve any problem. We even hear politicians make statements like "blockchain can solve Brexit's border issues." And it comes back to the problem highlighted at the beginning of the lecture: what is AI?

Thankfully, the industry does have influence on what the politicians do with AI through groups like the BCS AI community interest group and AI expert groups in Brussels. If we can help the lawmakers get the fundamentals right, we'll get everyone speaking the same language and more progress can be made.

Who has the capacity and skill to deliver AI in defence? And do we have the right people?

BCS has run its AI community interest group for 40+ years, but we can't ignore the war for talent. New graduates emerging from university are in such high demand they can name their price, which is typically far more than the defence sector can afford.

Therefore, we need to be creative in how we approach future talent. If you can find people with more generic IT skills, like testing, infrastructure, systems architecture, and design-led principles, you can train them to understand how the military works, and train them to understand AI so they can speak the language with confidence. Rather than look to hire in specific skills, which are then difficult to retain, we need to invest in creating the right environment to support and upskill people. Find out more about BCS AI Certifications.

BCS offers a range of certifications in AI to help your teams level up. As well as aligning to SFIAplus, a globally recognised skills framework, you join a global community of 60,000 members who are committed to advancing our industry. As a member of BCS you can also join specialist groups and branches, gain access to mentoring, and get everything you need to facilitate continued professional development. Become a member of BCS.

Read more:
Developing artificial intelligence to support robotic autonomous systems in the battlespace | BCS - BCS