Why Data Annotation Remains a Human Domain: The Boundaries of Artificial Intelligence – Medium

The Unmatched Complexity of Context

Photo by Julien Tromeur on Unsplash

Artificial Intelligence (AI) has undeniably revolutionized the way we interact with technology and process vast amounts of information.

From self-driving cars to virtual assistants, the scope of AI's capabilities seems limitless.

However, amidst this wave of technological advancement, there is a crucial question that often goes unnoticed: Can AI truly replace the human touch in data annotation?

As someone who has dipped their toes into this complex world, I can assure you that data annotation remains, and will likely always remain, a human domain.

In this blog post, we will explore the reasons behind this assertion, delve into the intricate boundaries of artificial intelligence, and reflect on personal experiences that illuminate the essence of human involvement in data annotation.

Data annotation is more than just labeling images, text, or audio; it involves deciphering context, nuance, and the subtleties that are inherent to human communication. While AI algorithms have made incredible strides in understanding language and visual data, they still struggle to grasp the intricacies of context.

For instance, consider the sentence, "She plays a mean guitar." To a human, it's evident that "mean" in this context means "exceptionally skilled."

However, an AI might misinterpret it as derogatory, missing the nuance completely. This illustrates the limits of AI when it comes to understanding the richness of human language.
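To make the point concrete, here is a minimal, purely illustrative sketch of how a word-level approach can miss exactly this kind of context. The toy lexicons and the naive_sentiment function are invented for illustration, not any real annotation or sentiment system:

```python
# Toy lexicons (illustrative assumptions only): a scorer that only looks at
# individual words has no way to recognize the idiom "a mean guitar" as praise.
NEGATIVE_WORDS = {"mean", "terrible", "awful"}
POSITIVE_WORDS = {"great", "skilled", "brilliant"}

def naive_sentiment(sentence: str) -> str:
    words = {w.strip(".,!?").lower() for w in sentence.split()}
    score = len(words & POSITIVE_WORDS) - len(words & NEGATIVE_WORDS)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

# A human annotator reads this sentence as praise; the word-level scorer does not.
print(naive_sentiment("She plays a mean guitar."))  # -> "negative"
```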

One of the most fascinating aspects of data annotation is the dance between subjectivity and interpretation. When humans annotate data, their unique perspectives and cultural backgrounds come into play. This subjectivity can be a double-edged sword, as it introduces biases, but it also adds depth and authenticity to the annotations. In contrast, AI algorithms strive for objectivity, which might seem like a noble pursuit. Still, it...

Read the original here:

Why Data Annotation Remains a Human Domain: The Boundaries of Artificial Intelligence - Medium

Why AI struggles to predict the future : Short Wave – NPR

Muharrem Huner/Getty Images

Artificial intelligence is increasingly being used to predict the future. Banks use it to predict whether customers will pay back a loan, hospitals use it to predict which patients are at greatest risk of disease and auto insurance companies use it to determine insurance rates by predicting how likely a customer is to get in an accident.

"Algorithms have been claimed to be these silver bullets, which can solve a lot of societal problems," says Sayash Kapoor, a researcher and PhD candidate at Princeton University's Center for Information Technology Policy. "And so it might not even seem like it's possible that algorithms can go so horribly awry when they're deployed in the real world."

But they do.

Issues like data leakage and sampling bias can cause AI to give faulty predictions, sometimes with disastrous effects.

Kapoor points to high-stakes examples: One algorithm falsely accused tens of thousands of Dutch parents of fraud; another purportedly predicted which hospital patients were at high risk of sepsis, but was prone to raising false alarms and missing cases.

After digging through tens of thousands of lines of machine learning code in journal articles, he's found examples abound in scientific research as well.

"We've seen this happen across fields in hundreds of papers," he says. "Often, machine learning is enough to publish a paper, but that paper does not often translate to better real world advances in scientific fields."

Kapoor is co-writing a blog and book project called AI Snake Oil.

Want to hear more of the latest research on AI? Email us at shortwave@npr.org and we might answer your question on a future episode!

Listen to Short Wave on Spotify, Apple Podcasts and Google Podcasts.

This episode was produced by Berly McCoy and edited by Rebecca Ramirez. Brit Hanson checked the facts. Maggie Luthar was the audio engineer.

Read the rest here:

Why AI struggles to predict the future : Short Wave - NPR

Artificial Intelligence in Natural Hazard Modeling: Severe Storms, Hurricanes, Floods, and Wildfires – Government Accountability Office

What GAO Found

GAO found that machine learning, a type of artificial intelligence (AI) that uses algorithms to identify patterns in information, is being applied to forecasting models for natural hazards such as severe storms, hurricanes, floods, and wildfires, which can lead to natural disasters. A few machine learning models are used operationally, in routine forecasting, such as one that may improve the warning time for severe storms. Some uses of machine learning are considered close to operational, while others require years of development and testing.

GAO identified potential benefits of applying machine learning to this field, including:

Forecasting natural disasters using machine learning

GAO also identified challenges to the use of machine learning. For example:

GAO identified five policy options that could help address these challenges. These options are intended to inform policymakers, including Congress, federal and state agencies, academic and research institutions, and industry of potential policy implementations. The status quo option illustrates a scenario in which government policymakers take no additional actions beyond current ongoing efforts.

Policy Options to Help Address Challenges to the Use of Machine Learning in Natural Hazard Modeling

Government policymakers could expand use of existing observational data and infrastructure to close gaps, expand access to certain data, and (in conjunction with other policymakers) establish guidelines for making data AI-ready.

Government policymakers could update education requirements to include machine learning-related coursework and expand learning and support centers, while academic policymakers could adjust physical science curricula to include more machine learning coursework.

Government policymakers could address pay scale limitations for positions that include machine learning expertise and work with private sector policymakers to expand the use of public-private partnerships (PPP).

Policymakers could establish efforts to better understand and mitigate various forms of bias, support inclusion of diverse stakeholders for machine learning models, and develop guidelines or best practices for reporting methodological choices.

Government policymakers could maintain existing policy efforts and organizational structures, along with existing strategic plans and agency commitments.

Source: GAO. | GAO-24-106213

Natural disasters cause on average hundreds of deaths and billions of dollars in damage in the U.S. each year. Forecasting natural disasters relies on computer modeling and is important for preparedness and response, which can in turn save lives and protect property. AI is a powerful tool that can automate processes, rapidly analyze massive data sets, enable modelers to gain new insights, and boost efficiency.

This report on the use of machine learning in natural hazard modeling discusses (1) the emerging and current use of machine learning for modeling severe storms, hurricanes, floods, and wildfires, and the potential benefits of this use; (2) challenges surrounding the use of machine learning; and (3) policy options to address challenges or enhance benefits of the use of machine learning.

GAO reviewed the use of machine learning to model severe storms, hurricanes, floods, and wildfires across development and operational stages; interviewed a range of stakeholder groups, including government, industry, academia, and professional organizations; convened a meeting of experts in conjunction with the National Academies; and reviewed key reports and scientific literature. GAO is identifying policy options in this report.

For more information, contact Brian Bothwell at (202) 512-6888 or bothwellb@gao.gov.

See the article here:

Artificial Intelligence in Natural Hazard Modeling: Severe Storms, Hurricanes, Floods, and Wildfires - Government Accountability Office

Artificial Intelligence: Actions Needed to Improve DOD’s Workforce Management – Government Accountability Office

Fast Facts

The Department of Defense has invested billions of dollars to integrate artificial intelligence into its operations. This includes analyzing intelligence, surveillance, and reconnaissance data, and operating deadly autonomous weapon systems.

We found, however, that DOD can't fully identify who is part of its AI workforce or which positions require personnel with AI skills. As a result, DOD can't effectively assess the state of its AI workforce or forecast future AI workforce needs.

We made 3 recommendations, including that DOD establish a timeline for completing the steps needed to define and identify its AI workforce.

The Department of Defense (DOD) typically establishes standard definitions of its workforces to make decisions about which personnel are to be included in that workforce, and identifies its workforces by coding them in its data systems. DOD has taken steps to begin to identify its artificial intelligence (AI) workforce, but has not assigned responsibility and does not have a timeline for completing additional steps to fully define and identify this workforce. DOD developed AI work roles, the specialized sets of tasks and functions requiring specific knowledge, skills, and abilities. DOD also identified some military and civilian occupations, such as computer scientists, that conduct AI work. However, DOD has not assigned responsibility to the organizations necessary to complete the additional steps required to define and identify its AI workforce, such as coding the work roles in various workforce data systems, developing a qualification program, and updating workforce guidance. DOD also does not have a timeline for completing these additional steps. Assigning responsibility and establishing a timeline for completion of the additional steps would enable DOD to more effectively assess the state of its AI workforce and be better prepared to forecast future workforce requirements (see figure).

Questions DOD Cannot Answer Until It Fully Defines and Identifies Its AI Workforce

DOD's plans and strategies address some AI workforce issues, but are not fully consistent with each other. Federal regulation and guidance state that an agency's Human Capital Operating Plan should support the execution of its Strategic Plan. However, DOD's Human Capital Operating Plan does not consistently address the human capital implementation actions for AI workforce issues described in DOD's Strategic Plan. DOD also uses inconsistent terms when addressing AI workforce issues, which could hinder a shared understanding within DOD. The military services are also developing component-level human capital plans that encompass AI and will cascade from the higher-level plans. Updating DOD's Human Capital Operating Plan to be consistent with other strategic documents would better guide DOD components' planning efforts and support actions necessary for achieving the department's strategic goals and objectives related to its AI workforce.

DOD has invested billions of dollars to integrate AI into its warfighting operations. This includes analyzing intelligence, surveillance, and reconnaissance data, and operating lethal autonomous weapon systems. DOD identified cultivating a workforce with AI expertise as a strategic focus area in 2018. However, in 2021 the National Security Commission on Artificial Intelligence concluded that DOD's AI talent deficit is one of the greatest impediments to the U.S. being AI-ready by the Commission's target date of 2025.

House Report 117-118, accompanying a bill for the National Defense Authorization Act for Fiscal Year 2022, includes a provision for GAO to review DOD's AI workforce. This report evaluates the extent to which DOD has (1) defined and identified its AI workforce and (2) established plans and strategies to address AI workforce issues, among other objectives. GAO assessed DOD strategies and plans, reviewed laws and guidance that outline requirements for managing an AI workforce, and interviewed officials.

Read more from the original source:

Artificial Intelligence: Actions Needed to Improve DOD's Workforce Management - Government Accountability Office

OpenAI researchers warned board of AI breakthrough ahead of CEO ouster, sources say – Reuters

Nov 22 (Reuters) - Ahead of OpenAI CEO Sam Altman's four days in exile, several staff researchers wrote a letter to the board of directors warning of a powerful artificial intelligence discovery that they said could threaten humanity, two people familiar with the matter told Reuters.

The previously unreported letter and AI algorithm were key developments before the board's ouster of Altman, the poster child of generative AI, the two sources said. Prior to his triumphant return late Tuesday, more than 700 employees had threatened to quit and join backer Microsoft (MSFT.O) in solidarity with their fired leader.

The sources cited the letter as one factor among a longer list of grievances by the board leading to Altman's firing, among which were concerns over commercializing advances before understanding the consequences. Reuters was unable to review a copy of the letter. The staff who wrote the letter did not respond to requests for comment.

After being contacted by Reuters, OpenAI, which declined to comment, acknowledged in an internal message to staffers a project called Q* and a letter to the board before the weekend's events, one of the people said. An OpenAI spokesperson said that the message, sent by long-time executive Mira Murati, alerted staff to certain media stories without commenting on their accuracy.

Some at OpenAI believe Q* (pronounced Q-Star) could be a breakthrough in the startup's search for what's known as artificial general intelligence (AGI), one of the people told Reuters. OpenAI defines AGI as autonomous systems that surpass humans in most economically valuable tasks.

Given vast computing resources, the new model was able to solve certain mathematical problems, the person said on condition of anonymity because the individual was not authorized to speak on behalf of the company. Though only performing math on the level of grade-school students, acing such tests made researchers very optimistic about Q*'s future success, the source said.

Reuters could not independently verify the capabilities of Q* claimed by the researchers.

Sam Altman, CEO of ChatGPT maker OpenAI, arrives for a bipartisan Artificial Intelligence (AI) Insight Forum for all U.S. senators hosted by Senate Majority Leader Chuck Schumer (D-NY) at the U.S. Capitol in Washington, U.S., September 13, 2023. REUTERS/Julia Nikhinson/File Photo

Researchers consider math to be a frontier of generative AI development. Currently, generative AI is good at writing and language translation by statistically predicting the next word, and answers to the same question can vary widely. But conquering the ability to do math where there is only one right answer implies AI would have greater reasoning capabilities resembling human intelligence. This could be applied to novel scientific research, for instance, AI researchers believe.

Unlike a calculator that can solve a limited number of operations, AGI can generalize, learn and comprehend.

In their letter to the board, researchers flagged AI's prowess and potential danger, the sources said without specifying the exact safety concerns noted in the letter. There has long been discussion among computer scientists about the danger posed by highly intelligent machines, for instance if they might decide that the destruction of humanity was in their interest.

Researchers have also flagged work by an "AI scientist" team, the existence of which multiple sources confirmed. The group, formed by combining earlier "Code Gen" and "Math Gen" teams, was exploring how to optimize existing AI models to improve their reasoning and eventually perform scientific work, one of the people said.

Altman led efforts to make ChatGPT one of the fastest-growing software applications in history and drew from Microsoft the investment, and computing resources, necessary to get closer to AGI.

In addition to announcing a slew of new tools in a demonstration this month, Altman last week teased at a summit of world leaders in San Francisco that he believed major advances were in sight.

"Four times now in the history of OpenAI, the most recent time was just in the last couple weeks, I've gotten to be in the room, when we sort of push the veil of ignorance back and the frontier of discovery forward, and getting to do that is the professional honor of a lifetime," he said at the Asia-Pacific Economic Cooperation summit.

A day later, the board fired Altman.

Anna Tong and Jeffrey Dastin in San Francisco and Krystal Hu in New York; Editing by Kenneth Li and Lisa Shumaker

Our Standards: The Thomson Reuters Trust Principles.

Anna Tong is a correspondent for Reuters based in San Francisco, where she reports on the technology industry. She joined Reuters in 2023 after working at the San Francisco Standard as a data editor. Tong previously worked at technology startups as a product manager and at Google, where she worked in user insights and helped run a call center. Tong graduated from Harvard University. Contact: 415-237-3211

Jeffrey Dastin is a correspondent for Reuters based in San Francisco, where he reports on the technology industry and artificial intelligence. He joined Reuters in 2014, originally writing about airlines and travel from the New York bureau. Dastin graduated from Yale University with a degree in history. He was part of a team that examined lobbying by Amazon.com around the world, for which he won a SOPA Award in 2022.

Krystal reports on venture capital and startups for Reuters. She covers Silicon Valley and beyond through the lens of money and characters, with a focus on growth-stage startups, tech investments and AI. She has previously covered M&A for Reuters, breaking stories on Trump's SPAC and Elon Musk's Twitter financing. Previously, she reported on Amazon for Yahoo Finance, and her investigation of the company's retail practice was cited by lawmakers in Congress. Krystal started a career in journalism by writing about tech and politics in China. She has a master's degree from New York University, and enjoys a scoop of Matcha ice cream as much as getting a scoop at work.

See more here:

OpenAI researchers warned board of AI breakthrough ahead of CEO ouster, sources say - Reuters

The OpenAI Drama Has a Clear Winner: The Capitalists – The New York Times

What happened at OpenAI over the past five days could be described in many ways: A juicy boardroom drama, a tug of war over one of America's biggest start-ups, a clash between those who want A.I. to progress faster and those who want to slow it down.

But it was, most importantly, a fight between two dueling visions of artificial intelligence.

In one vision, A.I. is a transformative new tool, the latest in a line of world-changing innovations that includes the steam engine, electricity and the personal computer, and that, if put to the right uses, could usher in a new era of prosperity and make gobs of money for the businesses that harness its potential.

In another vision, A.I. is something closer to an alien life form, a leviathan being summoned from the mathematical depths of neural networks, one that must be restrained and deployed with extreme caution in order to prevent it from taking over and killing us all.

With the return of Sam Altman on Tuesday to OpenAI, the company whose board fired him as chief executive last Friday, the battle between these two views appears to be over.

Team Capitalism won. Team Leviathan lost.

OpenAI's new board will consist of three people, at least initially: Adam D'Angelo, the chief executive of Quora (and the only holdover from the old board); Bret Taylor, a former executive at Facebook and Salesforce; and Lawrence H. Summers, the former Treasury secretary. The board is expected to grow from there.

OpenAI's largest investor, Microsoft, is also expected to have a larger voice in OpenAI's governance going forward. That may include a board seat.

Gone from the board are three of the members who pushed for Mr. Altman's ouster: Ilya Sutskever, OpenAI's chief scientist (who has since recanted his decision); Helen Toner, a director of strategy at Georgetown University's Center for Security and Emerging Technology; and Tasha McCauley, an entrepreneur and researcher at the RAND Corporation.

Mr. Sutskever, Ms. Toner and Ms. McCauley are representative of the kinds of people who were heavily involved in thinking about A.I. a decade ago: an eclectic mix of academics, Silicon Valley futurists and computer scientists. They viewed the technology with a mix of fear and awe, and worried about theoretical future events like the singularity, a point at which A.I. would outstrip our ability to contain it. Many were affiliated with philosophical groups like the Effective Altruists, a movement that uses data and rationality to make moral decisions, and were persuaded to work in A.I. out of a desire to minimize the technology's destructive effects.

This was the vibe around A.I. in 2015, when OpenAI was formed as a nonprofit, and it helps explain why the organization kept its convoluted governance structure, which gave the nonprofit board the ability to control the company's operations and replace its leadership, even after it started a for-profit arm in 2019. At the time, protecting A.I. from the forces of capitalism was viewed by many in the industry as a top priority, one that needed to be enshrined in corporate bylaws and charter documents.

But a lot has changed since 2019. Powerful A.I. is no longer just a thought experiment; it exists inside real products, like ChatGPT, that are used by millions of people every day. The world's biggest tech companies are racing to build even more powerful systems. And billions of dollars are being spent to build and deploy A.I. inside businesses, with the hope of reducing labor costs and increasing productivity.

The new board members are the kinds of business leaders you'd expect to oversee such a project. Mr. Taylor, the new board chair, is a seasoned Silicon Valley deal maker who led the sale of Twitter to Elon Musk last year, when he was the chair of Twitter's board. And Mr. Summers is the Ur-capitalist, a prominent economist who has said that he believes technological change is "net good" for society.

There may still be voices of caution on the reconstituted OpenAI board, or figures from the A.I. safety movement. But they won't have veto power, or the ability to effectively shut down the company in an instant, the way the old board did. And their preferences will be balanced alongside others, such as those of the company's executives and investors.

That's a good thing if you're Microsoft, or any of the thousands of other businesses that rely on OpenAI's technology. More traditional governance means less risk of a sudden explosion, or a change that would force you to switch A.I. providers in a hurry.

And perhaps what happened at OpenAI, a triumph of corporate interests over worries about the future, was inevitable, given A.I.'s increasing importance. A technology potentially capable of ushering in a Fourth Industrial Revolution was unlikely to be governed over the long term by those who wanted to slow it down, not when so much money was at stake.

There are still a few traces of the old attitudes in the A.I. industry. Anthropic, a rival company started by a group of former OpenAI employees, has set itself up as a public benefit corporation, a legal structure that is meant to insulate it from market pressures. And an active open-source A.I. movement has advocated that A.I. remain free of corporate control.

But these are best viewed as the last vestiges of the old era of A.I., in which the people building A.I. regarded the technology with both wonder and terror, and sought to restrain its power through organizational governance.

Now, the utopians are in the driver's seat. Full speed ahead.

Read the original:

The OpenAI Drama Has a Clear Winner: The Capitalists - The New York Times

OpenAI’s Board Set Back the Promise of Artificial Intelligence – The Information

I was the first venture investor in OpenAI. The weekend drama illustrated my contention that the wrong boards can damage companies. Fancy titles like "Director of Strategy at Georgetown's Center for Security and Emerging Technology" can lead to a false sense of understanding of the complex process of entrepreneurial innovation. OpenAI's board members' religion of effective altruism and its misapplication could have set back the world's path to the tremendous benefits of artificial intelligence. Imagine free doctors for everyone and near-free tutors for every child on the planet. That's what's at stake with the promise of AI.

The best companies are those whose visions are led and executed by their founding entrepreneurs, the people who put everything on the line to challenge the status quo, founders like Sam Altman, who face risk head-on and who are focused, so totally, on making the world a better place. Things can go wrong, and abuse happens, but the benefits of good founders far outweigh the risks of bad ones.

View post:

OpenAI's Board Set Back the Promise of Artificial Intelligence - The Information

The Future of AI: What to Expect in the Next 5 Years – TechTarget

For the first half of the 20th century, the concept of artificial intelligence held meaning almost exclusively for science fiction fans. In literature and cinema, androids, sentient machines and other forms of AI sat at the center of many of science fiction's high-water marks -- from Metropolis to I, Robot. In the second half of the last century, scientists and technologists began earnestly attempting to realize AI.

At the 1956 Dartmouth Summer Research Project on Artificial Intelligence, co-host John McCarthy introduced the phrase "artificial intelligence" and helped incubate an organized community of AI researchers.

Often AI hype outpaced the actual capacities of anything those researchers could create. But in the last moments of the 20th century, significant AI advances started to rattle society at large. When IBM's Deep Blue defeated chess master Garry Kasparov, the game's reigning champion, the event seemed to signal not only a historic and singular defeat in chess history -- the first time that a computer had beaten a top player -- but also that a threshold had been crossed. Thinking machines had left the realm of sci-fi and entered the real world.

The era of big data and the exponential growth of computational power in accord with Moore's Law have subsequently enabled AI to sift through gargantuan amounts of data and learn how to accomplish tasks that had previously been accomplished only by humans.

The effects of this machine renaissance have permeated society: Voice recognition devices such as Alexa, recommendation engines like those used by Netflix to suggest which movie you should watch next based on your viewing history, and the modest steps taken by driverless cars and other autonomous vehicles are emblematic. But the next five years of AI development will likely lead to major societal changes that go well beyond what we've seen to date.

Speed of life. The most obvious change that many people will feel across society is an increase in the tempo of engagements with large institutions. Any organization that engages regularly with large numbers of users -- businesses, government units, nonprofits -- will be compelled to implement AI in the decision-making processes and in their public- and consumer-facing activities. AI will allow these organizations to make most of the decisions much more quickly. As a result, we will all feel life speeding up.

End of privacy. Society will also see its ethical commitments tested by powerful AI systems, especially privacy. AI systems will likely become much more knowledgeable about each of us than we are about ourselves. Our commitment to protecting privacy has already been severely tested by emerging technologies over the last 50 years. As the cost of peering deeply into our personal data drops and more powerful algorithms capable of assessing massive amounts of data become more widespread, we will probably find that it was a technological barrier more than an ethical commitment that led society to enshrine privacy.

Thicket of AI law. We can also expect the regulatory environment to become much trickier for organizations using AI. Presently all across the planet, governments at every level, local to national to transnational, are seeking to regulate the deployment of AI. In the U.S. alone, we can expect an AI law thicket as city, state and federal government units draft, implement and begin to enforce new AI laws. And the European Union will almost certainly implement its long-awaited AI regulation within the next six to 12 business quarters. The legal complexity of doing business will grow considerably in the next five years as a result.

Human-AI teaming. Much of society will expect businesses and government to use AI as an augmentation of human intelligence and expertise, or as a partner to one or more humans working toward a goal, as opposed to using it to displace human workers. One of the effects of artificial intelligence having been born as an idea in century-old science fiction tales is that the tropes of the genre, chief among them dramatic depictions of artificial intelligence as an existential threat to humans, are buried deep in our collective psyche. Human-AI teaming, or keeping humans in any process that is being substantially influenced by artificial intelligence, will be key to managing the resultant fear of AI that permeates society.

The following industries will be affected most by AI:

The notion that AI poses an existential risk to humans has existed almost as long as the concept of AI itself. But in the last two years, as generative AI has become a hot topic of public discussion and debate, fear of AI has taken on newer undertones.

Arguably the most realistic form of this AI anxiety is a fear of human societies losing control to AI-enabled systems. We can already see this happening voluntarily in use cases such as algorithmic trading in the finance industry. The whole point of such implementations is to exploit the capacities of synthetic minds to operate at speeds that outpace the quickest human brains by many orders of magnitude.

However, the existential threats that have been posited by Elon Musk, Geoffrey Hinton and other AI pioneers seem at best like science fiction, and much less hopeful than much of the AI fiction created 100 years ago.

The more likely long-term risk of AI anxiety in the present is missed opportunities. To the extent that organizations in this moment might take these claims seriously and underinvest based on those fears, human societies will miss out on significant efficiency gains, potential innovations that flow from human-AI teaming, and possibly even new forms of technological innovation, scientific knowledge production and other modes of societal innovation that powerful AI systems can indirectly catalyze.

Michael Bennett is director of educational curriculum and business lead for responsible AI in The Institute for Experiential Artificial Intelligence at Northeastern University in Boston. Previously, he served as Discovery Partners Institute's director of student experiential immersion learning programs at the University of Illinois. He holds a J.D. from Harvard Law School.

View post:

The Future of AI: What to Expect in the Next 5 Years - TechTarget

OpenAI staff reportedly warned the board about an AI breakthrough that could threaten humanity before Sam Altman … – Fortune

A potential breakthrough in the field of artificial intelligence may have contributed to Sam Altman's recent ouster as CEO of OpenAI.

According to a Reuters report citing two sources acquainted with the matter, several staff researchers wrote a letter to the organization's board warning of a discovery that could potentially threaten the human race.

The two anonymous individuals claim this letter, which informed directors that a secret project named Q* resulted in A.I. solving grade-school-level mathematics, reignited tensions over whether Altman was proceeding too fast in a bid to commercialize the technology.

Just a day before he was sacked, Altman may have referenced Q* (pronounced Q-star) at a summit of world leaders in San Francisco when he spoke of what he believed was a recent breakthrough.

"Four times now in the history of OpenAI, the most recent time was just in the last couple of weeks, I've gotten to be in the room, when we sort of push the veil of ignorance back and the frontier of discovery forward," said Altman at a discussion during the Asia-Pacific Economic Cooperation summit.

He has since been reinstated as CEO in a spectacular reversal of events after staff threatened to mutiny against the board.

According to one of the sources, after being contacted by Reuters, OpenAI's chief technology officer Mira Murati acknowledged in an internal memo to employees the existence of the Q* project as well as a letter that was sent to the board.

OpenAI could not be reached immediately by Fortune for a statement, but it declined to provide a comment to Reuters.

So why is all of this special, let alone alarming?

Machines have been solving mathematical problems for decades going back to the pocket calculator.

The difference is that conventional devices were designed to arrive at a single answer using the kind of deterministic commands all personal computers employ, where values can only be true or false, 0 or 1.

Under this rigid binary system, there is no capability to diverge from their programming in order to think creatively.

By comparison, neural nets are not hard-coded to execute certain commands in a specific way. Instead, they are trained, much as a human brain is, with massive sets of interrelated data, giving them the ability to identify patterns and infer outcomes.

Think of Google's helpful Autocomplete function, which aims to predict what an internet user is searching for using statistical probability; this is a very rudimentary form of generative AI.

That's why Meredith Whittaker, a leading expert in the field, describes neural nets like ChatGPT as "probabilistic engines designed to spit out what seems plausible."
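The contrast between a deterministic calculator and a probabilistic language model can be sketched in a few lines. The toy word-frequency table below is an invented stand-in for what a real model learns from massive training data:

```python
import random

def calculator(a: float, b: float) -> float:
    # Deterministic: the same inputs always yield the same single answer.
    return a + b

# Invented toy distribution over next words for one two-word context.
NEXT_WORD_PROBS = {("the", "cat"): {"sat": 0.6, "ran": 0.3, "slept": 0.1}}

def sample_next_word(context):
    # Probabilistic: the continuation is sampled, so repeated calls can differ.
    candidates = NEXT_WORD_PROBS[context]
    return random.choices(list(candidates), weights=list(candidates.values()), k=1)[0]

print(calculator(2, 2))                  # always 4
print(sample_next_word(("the", "cat")))  # usually "sat", sometimes "ran" or "slept"
```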

Should generative A.I. prove able to arrive at the correct solution to mathematical problems on its own, that would suggest a capacity for higher reasoning.

This could potentially be the first step towards developing artificial general intelligence, a form of AI that can surpass humans.

The fear is that an AGI needs guardrails because it might one day come to view humanity as a threat to its existence.

See the article here:

OpenAI staff reportedly warned the board about an AI breakthrough that could threaten humanity before Sam Altman ... - Fortune

First international benchmark of artificial intelligence and machine … – Nuclear Energy Agency

Recent performance breakthroughs in artificial intelligence (AI) and machine learning (ML) have led to unprecedented interest among nuclear engineers. Despite the progress, the lack of dedicated benchmark exercises for the application of AI and ML techniques in nuclear engineering analyses limits their applicability and broader usage. In line with the NEA strategic target to contribute to building a solid scientific and technical basis for the development of future generation nuclear systems and deployment of innovations, the Task Force on Artificial Intelligence and Machine Learning for Scientific Computing in Nuclear Engineering was established within the Expert Group on Reactor Systems Multi-Physics (EGMUP) of the Nuclear Science Committee's Working Party on Scientific Issues and Uncertainty Analysis of Reactor Systems (WPRS). The Task Force will focus on designing benchmark exercises that will target important AI and ML activities, and cover various computational domains of interest, from single physics to multi-scale and multi-physics.

A significant milestone has been reached with the successful launch of a first comprehensive benchmark of AI and ML to predict the Critical Heat Flux (CHF). This CHF corresponds in a boiling system to the limit beyond which wall heat transfer decreases significantly, which is often referred to as critical boiling transition, boiling crisis and (depending on operating conditions) departure from nucleate boiling (DNB), or dryout. In a heat transfer-controlled system, such as a nuclear reactor core, CHF can result in a significant wall temperature increase leading to accelerated wall oxidation, and potentially to fuel rod failure. While constituting an important design limit criterion for the safe operation of reactors, CHF is challenging to predict accurately due to the complexities of the local fluid flow and heat exchange dynamics.

Current CHF models are mainly based on empirical correlations developed and validated for a specific application case domain. Through this benchmark, improvements in the CHF modelling are sought using AI and ML methods directly leveraging a comprehensive experimental database provided by the US Nuclear Regulatory Commission (NRC), forming the cornerstone of this benchmark exercise. The improved modelling can lead to a better understanding of the safety margins and provide new opportunities for design or operational optimisations.
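As a rough sketch of what such a data-driven CHF model could look like, the snippet below fits a gradient-boosted regressor to tabular experimental features. The feature names and the synthetic data are illustrative placeholders, not the NRC database schema or the benchmark's prescribed method:

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "pressure_MPa": rng.uniform(0.1, 20, n),        # assumed feature columns
    "mass_flux_kg_m2s": rng.uniform(100, 8000, n),
    "inlet_subcooling_K": rng.uniform(0, 100, n),
    "hydraulic_diameter_mm": rng.uniform(2, 25, n),
})
# Synthetic stand-in target; a real study would use measured CHF values.
df["chf_kW_m2"] = (0.5 * df["mass_flux_kg_m2s"]
                   + 10 * df["inlet_subcooling_K"]
                   + rng.normal(0, 200, n))

X_train, X_test, y_train, y_test = train_test_split(
    df.drop(columns="chf_kW_m2"), df["chf_kW_m2"], random_state=0)

model = GradientBoostingRegressor().fit(X_train, y_train)
print("MAE:", mean_absolute_error(y_test, model.predict(X_test)))
```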

The CHF benchmark phase 1 kick-off meeting on 30 October 2023 gathered 78 participants, representing 48 institutions from 16 countries. This robust engagement underscores the profound interest and commitment within the global scientific community toward integrating AI and ML technologies into nuclear engineering. The ultimate goal of the Task Force is to leverage insights from the benchmarks and distill lessons learnt to provide guidelines for future AI and ML applications in scientific computing in nuclear engineering.

Read the original:

First international benchmark of artificial intelligence and machine ... - Nuclear Energy Agency

What the OpenAI drama means for AI progress and safety – Nature.com

OpenAI fired its charismatic chief executive, Sam Altman, on 17 November but has now reinstated him. Credit: Justin Sullivan/Getty

OpenAI, the company behind the blockbuster artificial intelligence (AI) bot ChatGPT, has been consumed by frenzied changes for almost a week. On 17 November, the company fired its charismatic chief executive, Sam Altman. Five days, and much drama, later, OpenAI announced that Altman would return with an overhaul of the company's board.

The debacle has thrown the spotlight on an ongoing debate about how commercial competition is shaping the development of AI systems, and how quickly AI can be deployed ethically and safely.

"The push to retain dominance is leading to toxic competition. It's a race to the bottom," says Sarah Myers West, managing director of the AI Now Institute, a policy-research organization based in New York City.

Altman, a successful investor and entrepreneur, was a co-founder of OpenAI and its public face. He had been chief executive since 2019, and oversaw an investment of some US$13 billion from Microsoft. After Altman's initial ousting, Microsoft, which uses OpenAI technology to power its search engine Bing, offered Altman a job leading a new advanced AI research team. Altman's return to OpenAI came after hundreds of company employees signed a letter threatening to follow Altman to Microsoft unless he was reinstated.

The OpenAI board that ousted Altman last week did not give detailed reasons for the decision, saying at first that he was fired because he was "not consistently candid in his communications with the board" and later adding that the decision had nothing to do with "malfeasance or anything related to our financial, business, safety or security/privacy practice".

But some speculate that the firing might have its origins in a reported schism at OpenAI between those focused on commercial growth and those uncomfortable with the strain of rapid development and its possible impacts on the company's mission to "ensure that artificial general intelligence benefits all of humanity".

OpenAI, which is based in San Francisco, California, was founded in 2015 as a non-profit organization. In 2019, it shifted to an unusual capped-profit model, with a board explicitly not accountable to shareholders or investors, including Microsoft. "In the background of Altman's firing is very clearly a conflict between the non-profit and the capped-profit; a conflict of culture and aims," says Jathan Sadowski, a social scientist of technology at Monash University in Melbourne, Australia.

Ilya Sutskever, OpenAI's chief scientist and a member of the board that ousted Altman, this July shifted his focus to "superalignment", a four-year project attempting to ensure that future superintelligences work for the good of humanity.

It's unclear whether Altman and Sutskever are at odds about speed of development: after the board fired Altman, Sutskever expressed regret about the impacts of his actions and was among the employees who signed the letter threatening to leave unless Altman returned.

With Altman back, OpenAI has reshuffled its board: Sutskever and Helen Toner, a researcher in AI governance and safety at Georgetown University's Center for Security and Emerging Technology in Washington DC, are no longer on the board. The new board members include Bret Taylor, who is on the board of e-commerce platform Shopify and used to lead the software company Salesforce.

It seems likely that OpenAI will shift further from its non-profit origins, says Sadowski, restructuring as a classic profit-driven Silicon Valley tech company.

OpenAI released ChatGPT almost a year ago, catapulting the company to worldwide fame. The bot was based on the company's GPT-3.5 large language model (LLM), which uses the statistical correlations between words in billions of training sentences to generate fluent responses to prompts. The breadth of capabilities that have emerged from this technique (including what some see as logical reasoning) has astounded and worried scientists and the general public alike.

OpenAI is not alone in pursuing large language models, but the release of ChatGPT probably pushed others to deployment: Google launched its chatbot Bard in March 2023, the same month that an updated version of ChatGPT, based on GPT-4, was released. West worries that products are appearing before anyone has a full understanding of their behaviour, uses and misuses, and that this could be detrimental for society.

The competitive landscape for conversational AI is heating up. Google has hinted that more AI products lie ahead. Amazon has its own AI offering, Titan. Smaller companies that aim to compete with ChatGPT include the German effort Aleph Alpha and US-based Anthropic, founded in 2021 by former OpenAI employees, which released the chatbot Claude 2.1 on 21 November. Stability AI and Cohere are other often-cited rivals.

West notes that these start-ups rely heavily on the vast and expensive computing resources provided by just three companies, Google, Microsoft and Amazon, potentially creating a race for dominance between these controlling giants.

Computer scientist Geoffrey Hinton at the University of Toronto in Canada, a pioneer of deep learning, is deeply concerned about the speed of AI development. "If you specify a competition to make a car go as fast as possible, the first thing you do is remove the brakes," he says. (Hinton declined to comment to Nature on the events at OpenAI since 17 November.)

OpenAI was founded with the specific goal of developing an artificial general intelligence (AGI), a deep-learning system that's trained not just to be good at one specific thing, but to be as generally smart as a person. It remains unclear whether AGI is even possible. "The jury is very much out on that front," says West. But some are starting to bet on it. Hinton says he used to think AGI would happen on the timescale of 30, 50 or maybe 100 years. "Right now, I think we'll probably get it in 5 to 20 years," he says.

The imminent dangers of AI are related to it being used as a tool by human bad actors, people who use it to, for example, create misinformation, commit scams or, potentially, invent new bioterrorism weapons. And because today's AI systems work by finding patterns in existing data, they also tend to reinforce historical biases and social injustices, says West.

In the long term, Hinton and others worry about an AI system itself becoming a bad actor, developing sufficient agency to guide world events in a negative direction. This could arise even if an AGI was designed in line with OpenAI's superalignment mission to promote humanity's best interests, says Hinton. It might decide, for example, that the weight of human suffering is so vast that it would be better for humanity to die than to face further misery. Such statements sound like science fiction, but Hinton argues that the existential threat of an AI that can't be turned off and veers onto a destructive path is very real.

The AI Safety Summit hosted by the United Kingdom in November was designed to get ahead of such concerns. So far, some two dozen nations have agreed to work together on the problem, although what exactly they will do remains unclear.

West emphasizes that it's important to focus on already-present threats from AI ahead of far-flung concerns and to ensure that existing laws are applied to tech companies developing AI. The events at OpenAI, she says, highlight how just a few companies with the money and computing resources to feed AI wield a lot of power, something she thinks needs more scrutiny from anti-trust regulators. "Regulators for a very long time have taken a very light touch with this market," says West. "We need to start by enforcing the laws we have right now."

Continued here:

What the OpenAI drama means for AI progress and safety - Nature.com

Live chat: A new writing course for the age of artificial intelligence – Yale News

How is academia dealing with the influence of AI on student writing? Just ask ChatGPT, and it'll deliver a list of 10 ways in which the rapidly expanding technology is creating both opportunities and challenges for faculty everywhere.

On the one hand, for example, while there are ethical concerns about AI compromising students' academic integrity, there is also growing awareness of the ways in which AI tools might actually support students in their research and writing.

Students in "Writing Essays with AI," a new English seminar taught by Yale's Ben Glaser, are exploring the many ways in which the expanding number of AI tools are influencing written expression, and how they might help or harm their own development as writers.

"We talk about how large language models are already and will continue to be quite transformative," Glaser said, "not just of college writing but of communication in general."

An associate professor of English in Yale's Faculty of Arts and Sciences, Glaser sat down with Yale News to talk about the need for AI literacy, ChatGPT's love of lists, and how the generative chatbot helped him write the course syllabus.

Ben Glaser: It's more the former. None of the final written work for the class is written with ChatGPT or any other large language model or chatbot, although we talk about using AI research tools like Elicit and other things in the research process. Some of the small assignments directly call for students to engage with ChatGPT, get outputs, and then reflect on it. And in that process, they learn how to correctly cite ChatGPT.

The Poorvu Center for Teaching and Learning has a pretty useful page with AI guidelines. As part of this class, we read that website and talked about whether those guidelines seem to match students' own experience of usage and what their friends are doing.

Glaser: I don't get the sense that they are confused about it in my class because we talk about it all the time. These are students who simultaneously want to understand the technology better, maybe go into that field, and they also want to learn how to write. They don't think they're going to learn how to write by using those AI tools better. But they want to think about it.

That's a very optimistic take, but I think that Yale makes that possible through the resources it has for writing help, and students are often directed to those resources. If you're in a class where the writing has many stages, drafting, revision, it's hard to imagine where ChatGPT is going to give you anything good, partly because you're going to have to revise it so much.

That said, it's a totally different world if you're in high school or a large university without those resources. And then of course there are situations that have always led to plagiarism, where you're strung out at the last minute and you copy something from Google.

Glaser: First of all, it's a really interesting thing to study. That's not what you're asking; you're asking what it can do or where it belongs in a writing process. But when you talk to a chatbot, you get this fuzzy, weird image of culture back. You might get counterpoints to your ideas, and then you need to evaluate whether those counterpoints or supporting evidence for your ideas are actually good ones. There's no understanding behind the model. It's based on statistical probabilities; it's guessing which word comes next. It sometimes does so in a way that speeds things along.

If you say, "give me some points and counterpoints in, say, AI use in second-language learning," it might spit out 10 good things and 10 bad things. It loves to give lists. And there's a kind of literacy to reading those outputs. Students in this class are gaining some of that literacy.

Glaser: I don't love the word "brainstorming," but I think there is a moment where you have a blank page, and you think you have a topic, and the process of refining that involves research. ChatGPT's not the most wonderful research tool, but it sure is an easy one.

I asked it to write the syllabus for this course initially. What it did was it helped me locate some researchers that I didn't know; it gave me some ideas for units. And then I had to write the whole thing over again, of course. But that was somewhat helpful.

Glaser: It can be. I think that's a limited and effective use of it in many contexts.

One of my favorite class days was when we went to the library and had a library session. It's an insanely amazing resource at Yale. Students have personal librarians, if they want them. Also, Yale pays for these massive databases that are curating stuff for the students. The students quickly saw that these resources are probably going to make things go smoother long-term if they know how to use them.

So it's not a simple "AI tool bad, Yale resource good." You might start with the quickly accessible AI tool, and then go to a librarian, and say, like, "here's a different version of this." And then you're inside the research process.

Glaser: One thing that some writers have done is, if you interact with it long enough, and give it new prompts and develop its outputs, you can get something pretty cool. At that point you've done just as much work, and you've done a different kind of creative or intellectual project. And I'm all for that. If everything's cited, and you develop a creative work through some elaborate back-and-forth or programming effort including these tools, you're just doing something wild and interesting.

Glaser: I'm glad that I could offer a class that students who are coming from computer science and STEM disciplines, but also want to learn how to write, could be excited about. AI-generated language, that's the new medium of language. The Web is full of it. Part of making students critical consumers and readers is learning to think about AI language as not totally separate from human language, but as this medium, this soup if you want, that we're floating around in.

See the article here:

Live chat: A new writing course for the age of artificial intelligence - Yale News

US agency streamlines probes related to artificial intelligence – Reuters

AI (Artificial Intelligence) letters and robot hand miniature in this illustration taken June 23, 2023. REUTERS/Dado Ruvic/Illustration/File Photo

WASHINGTON, Nov 21 (Reuters) - Investigations of cases where artificial intelligence (AI) is used to break the law will be streamlined under a new process approved by the U.S. Federal Trade Commission, the agency said on Tuesday.

The move, along with other actions, highlights the FTC's interest in pursuing cases involving AI. Critics of the technology have said that it could be used to turbo-charge fraud.

The agency, which now has three Democrats, voted unanimously to make it easier for staff to issue a demand for documents as part of an investigation if it is related to AI, the agency said in a statement.

In a hearing in September, Commissioner Rebecca Slaughter, a Democrat who has been nominated to another term, agreed with two Republicans nominated to the agency that the agency should focus on issues like use of AI to make phishing emails and robocalls more convincing.

The agency announced a competition last week aimed at identifying the best way to protect consumers against fraud and other harms related to voice cloning.

Reporting by Diane Bartz; Editing by Marguerita Choy

Our Standards: The Thomson Reuters Trust Principles.

Here is the original post:

US agency streamlines probes related to artificial intelligence - Reuters

Artificial intelligence and church – UM News

Key Points:

Artificial intelligence technology, the subject of buzz and anxiety at the moment, has made its way to religion circles.

Pastor Jay Cooper, who heads Violet Crown City Church, a United Methodist congregation in Austin, Texas, took AI out for a spin recently at his Sept. 17 worship service.

The verdict? Interesting, but something was missing.

"They were glad we did it," Cooper said of his congregation, "and let's not do it again."

Cooper used ChatGPT to put together the entire worship service, including the sermon and an original song. He said the result was a stilted atmosphere.

"The human element was lacking," he said. "It seemed to in some way prevent us from connecting with each other. The heart was missing."

AI leverages computers and machines to mimic the problem-solving and decision-making capabilities of the human mind, according to the IBM website. It has been around since the 1950s and is used to power web search engines and self-driving cars; it can compete in games of strategy such as chess and create works such as songs, sermons and prose using data collected on the internet.

AI-based software transcribed the interviews for this story. The remaining Beatles created a new song, "Now and Then," using AI to extract John Lennon's vocals from a poorly recorded demo cassette tape he made in the 1970s.

"The CEO of Google said that this is bigger than fire, bigger than electricity," said the Rev. James Lee, director of communications for the Greater New Jersey and Eastern Pennsylvania conferences. "I really believe that this is going to be how we do everything within the next five to 10 years."

Cooper said he has strong feelings against using AI to write a sermon again.

Even if it's not as eloquent or if it's a little messy or last minute, it needs to be from the heart of the pastor.

Lee concurs. ChatGPT is pretty bad at writing good sermons. That's my own opinion, but they're very vanilla, he said.

Philip Clayton, Ingraham Professor of Theology at Claremont School of Theology, said that religion tends to be slow to pick up on new technology.

I think our fear of technology is not a good thing, especially when we're trying to attract younger people to be involved in churches, he said.

AI is a means to get something done, like using a typewriter years ago, he added. For us as Christians, the key question is, Do the means become the end?

A sermon is an attempt to speak the word of God to people of God assembled at a particular time and place, Clayton said.

It takes prayer, it takes the knowledge of the people, it takes allusions to my community in my country and all kinds of frameworks, he said. If I don't do that task, what have I carried out? What are my responsibilities as one who rightly divines the word of God?

Lee suggests treating AI technology as an intern.

They are able to do a lot of work for you and support you, and almost treat them like an additional member of the team, he said.

The Rev. Stacy Minger, associate professor of preaching at Asbury Theological Seminary, believes AI could be helpful as long as the preacher does their due diligence of preparation.

The way I teach preaching is that the preacher invests in praying over the text, reading the text and using all of their biblical studies and skills, and then they consult the commentaries or the scholars, she said.

If you're maybe missing an illustration or missing a transition or there's something that just hasn't kind of come together and you're banging your head against the wall, I think at that point, after you've done all of your own work, that it could be a helpful tool.

It is important to verify the work of programs like ChatGPT, said Ted Vial, the Potthoff Professor of Theology and Modern Western Religious Thought and vice president of innovation, learning and institutional research at Iliff School of Theology.

There's a lot of bad information (on the internet), Vial said. My experience with the current level of (AI) sophistication is they can produce a clearly written and well-organized essay. They're not very inspirational.

AI programs do not include the most current information, he said.

I think ChatGPT is built on data that goes through November of 2021, Vial said. So, if sermons are supposed to relate what's happening in the world to the Bible, it's going to be out of date.

Humans have emotions and creativity that are hard for a computer to emulate, he said.

But the technology continues to improve.

Whatever humans can do, I'm pretty sure AI will be able to do it soon also, Vial said. So, the question isn't, Would you need a human? The question is, Are you and your congregation OK with a service that's produced by a machine?

Even if the answer to that is No, there will be pastors who want to use it because it makes their lives easier, he added.

If it's a personal connection between the pastor and a community, then it's important to have the pastor's voice and personality, Vial said. If it's exegesis of a text, there may not be anything wrong with having a computer produce it.

Looking at it from another direction, a pastor might be cheating themselves as well as their congregation if they skip doing most of the work, Minger said.

I would be concerned that if you're not spending that time, using all of your biblical study skills and prayerfully invested in the reading of Scripture, that you as a preacher are skipping over a wonderful formative opportunity in your own life, she said.

As I'm hammering out a sermon, I'm really wrestling with it, she said. You need images and metaphors, word choices and illustrations.

And so, as preachers, it's not only that we would be short-circuiting the congregation, I think we would be tamping down our own creative outlets in the effort to become more efficient.

Patterson is a UM News reporter in Nashville, Tennessee.

Read the original post:

Artificial intelligence and church - UM News

New Tool for Building and Fixing Roads and Bridges: Artificial … – The New York Times

In Pennsylvania, where 13 percent of the bridges have been classified as structurally deficient, engineers are using artificial intelligence to create lighter concrete blocks for new construction. Another project is using A.I. to develop a highway wall that can absorb noise from cars and some of the greenhouse gas emissions that traffic releases as well.

At a time when the federal allocation of billions of dollars toward infrastructure projects would help with only a fraction of the cost needed to repair or replace the nation's aging bridges, tunnels, buildings and roads, some engineers are looking to A.I. to help build more resilient projects for less money.

These are structures, with the tools that we have, that save materials, save costs, save everything, said Amir Alavi, an engineering professor at the University of Pittsburgh and a member of the consortium developing the two A.I. projects in conjunction with the Pennsylvania Department of Transportation and the Pennsylvania Turnpike Commission.

The potential is enormous. The manufacturing of cement alone makes up at least 8 percent of the world's carbon emissions, and 30 billion tons of concrete are used worldwide each year, so more efficient production of concrete would have immense environmental implications.

And A.I., essentially machines that can synthesize information and find patterns and conclusions much as the human mind can, could have the ability to speed up and improve tasks like engineering challenges to an incalculable degree. It works by analyzing vast amounts of data and offering options that give humans better information, models and alternatives for making decisions.

It has the potential to be both more cost effective (one machine doing the work of dozens of engineers) and more creative in coming up with new approaches to familiar tasks.

But experts caution against embracing the technology too quickly when it is largely unregulated and its payoffs remain largely unproven. In particular, some worry about A.I.'s ability to design infrastructure in a process with several regulators and participants operating over a long period of time. Others worry that A.I.'s ability to draw instantly from the entirety of the internet could lead to flawed data that produces unreliable results.

American infrastructure challenges have become all the more apparent in recent years: Texas' power grid failed during devastating ice storms in 2021 and continues to grapple with the state's needs; communities across the country, from Flint, Mich., to Jackson, Miss., have struggled with failing water supplies; and more than 42,000 bridges are in poor condition nationwide.

A vast majority of the country's roadways and bridges were built several decades ago, and as a result infrastructure challenges are significant in many dimensions, said Abdollah Shafieezadeh, a professor of civil, environmental and geodetic engineering at Ohio State University.

The collaborations in Pennsylvania reflect A.I.s potential to address some of these issues.

In the bridge project, engineers are using A.I. technology to develop new shapes for concrete blocks that use 20 percent less material while maintaining durability. The Pennsylvania Department of Transportation will use the blocks to construct a bridge; there are more than 12,000 in the state that need repair, according to the American Road & Transportation Builders Association.

Engineers in Pittsburgh are also working with the Pennsylvania Turnpike Commission to design a more efficient noise-absorbing wall that will also capture some of the nitrous oxide emitted from vehicles. They are planning to build it in an area that is disproportionately affected by highway sound pollution. The designs will save about 30 percent of material costs.

These new projects have not been tested in the field, but they have been successful in the lab environment, Dr. Alavi said.

In addition to A.I.'s speed at developing new designs, one of its largest draws in civil engineering is its potential to prevent and detect damage.

Instead of investing large sums of money in repair projects, engineers and transportation agencies could identify problems early on, experts say, such as a crack forming in a bridge before the structure itself buckled.

This technology is capable of providing an analysis of what is happening in real time in incidents like the bridge collapse on Interstate 95 in Philadelphia this summer or the fire that shut down a portion of Interstate 10 in Los Angeles this month, and could be developed to deploy automated emergency responses, said Seyede Fatemeh Ghoreishi, an engineering and computer science professor at Northeastern University.

But, as in many fields, there are increasingly more conversations and concerns about the relationship between A.I., human work and physical safety.

Although A.I. has proved helpful in many uses, tech leaders have testified before Congress, pushing for regulations. And last month, President Biden issued an executive order for a range of A.I. standards, including safety, privacy and support for workers.

Experts are also worried about the spread of disinformation from A.I. systems. A.I. operates by integrating already available data, so if that data is incorrect or biased, the A.I. will generate faulty conclusions.

It really is a great tool, but it really is a tool you should use just for a first draft at this point, said Norma Jean Mattei, a former president of the American Society of Civil Engineers.

Dr. Mattei, who has worked in education and ethics for engineering throughout her career, added: Once it develops, I'm confident that we'll get to a point where you're less likely to get issues. We're not there yet.

Also worrisome is a lack of standards for A.I. The Occupational Safety and Health Administration, for example, does not have standards for the robotics industry. There is rising concern about car crashes involving autonomous vehicles, but for now, automakers do not have to abide by any federal software safety testing regulations.

Lola Ben-Alon, an assistant professor of architecture technology at Columbia University, also takes a cautionary approach when using A.I. She stressed the need to take the time to understand how it should be employed, but she said that she was not condemning it and that it had many great potentials.

Few doubt that in infrastructure projects and elsewhere, A.I. exists as a tool to be used by humans, not as a substitute for them.

There's still a strong and important place for human existence and experience in the field of engineering, Dr. Ben-Alon said.

The uncertainty around A.I. could cause more difficulties for funding projects like those in Pittsburgh. But a spokesman for the Pennsylvania Department of Transportation said the agency was excited to see how the concrete that Dr. Alavi and his team are designing could expand the field of bridge construction.

Dr. Alavi said his work throughout his career had shown him just how serious the potential risks from A.I. are.

But he is confident about the safety of the designs he and his team are making, and he is excited for the technology's future.

After 10, 12 years, this is going to change our lives, Dr. Alavi said.

Go here to read the rest:

New Tool for Building and Fixing Roads and Bridges: Artificial ... - The New York Times

Opera gives voice to Alan Turing with help of artificial intelligence – Yale News

A few years ago, composer Matthew Suttor was exploring Alan Turing's archives at King's College, Cambridge, when he happened upon a typed draft of a lecture the pioneering computer scientist and World War II codebreaker gave in 1951 foreseeing the rise of artificial intelligence.

In the lecture, Intelligent Machinery, a Heretical Theory, Turing posits that intellectuals would oppose the advent of artificial intelligence out of fear that machines would replace them.

It is probable though that the intellectuals would be mistaken about this, Turing writes in a passage that includes his handwritten edits. There would be plenty to do, trying to understand what the machines were trying to say, i.e., in trying to keep ones (sic) intelligence up to the standard set by the machines

To Suttor, the passage underscores Turing's visionary brilliance.

Reading it was kind of a mind-blowing moment as we're now on the precipice of Turing's vision becoming our reality, said Suttor, program manager at Yale's Center for Collaborative Arts and Media (CCAM), a campus interdisciplinary center engaged in creative research and practice across disciplines, and a senior lecturer in the Department of Theater and Performance Studies in Yale's Faculty of Arts and Sciences.

Inspired by Turing's 1951 lecture, and other revelations from his papers, Suttor is working with a team of musicians, theater makers, and computer programmers (including several alumni from the David Geffen School of Drama at Yale) to create an experimental opera, called I AM ALAN TURING, which explores his visionary ideas, legacy, and private life.

I didn't envision a chronological biographical operatic piece ... To me, it was much more interesting to investigate Turing's ideas.

Matthew Suttor

In keeping with Turing's vision, the team has partnered with artificial intelligence on the project, using successive versions of GPT, a large language model, to help write the opera's libretto and spoken text.

Three work-in-progress performances of the opera formed the centerpiece of the Machine as Medium Symposium: Matter and Spirit, a recent two-day event produced by CCAM that investigated how AI and other technologies intersect with creativity and alter how people approach timeless questions on the nature of existence.

The symposium, whose theme Matter and Spirit was derived from Turing's writings, included panel discussions with artists and scientists, an exhibition of artworks made with the help of machines or inspired by technology, and a tour of the Yale School of Architecture's robotic lab led by Hakim Hasan, a lecturer at the school who specializes in robotic fabrication and computational design research.

All sorts of projects across fields and disciplines are using AI in some capacity, said Dana Karwas, CCAM's director. With the opera, Matthew and his team are using it as a collaborative tool in bringing Alan Turing's ideas and story into a performance setting and creating a new model for opera and other types of live performance.

It's also an effective platform for inviting further discussion about technology that many people are excited about or questioning right now, and is a great example of the kind of work we're encouraging at CCAM.

Turing is widely known for his work at Bletchley Park, Great Britain's codebreaking center during World War II, where he cracked intercepted Nazi ciphers. But he was also a path-breaking scholar whose work set the stage for the development of modern computing and artificial intelligence.

His Turing Machine, developed in 1936, was an early computational device that could implement algorithms. In 1950, he published an article in the journal Mind that asked: Can machines think? He also made significant contributions to theoretical biology, which uses mathematical abstractions in seeking to better understand the structures and systems within living organisms.

A gay man, Turing was prosecuted in 1952 for gross indecency after acknowledging a sexual relationship with a man, which was then illegal in Great Britain, and underwent chemical castration in lieu of a prison sentence. He died by suicide in 1954, age 41.

Before visiting Turing's archive, Suttor had read Alan Turing: The Enigma, Andrew Hodges' authoritative 1983 biography, and believed the mathematician's life possessed an operatic scale.

I didn't envision a chronological biographical operatic piece, which frankly is a pretty dull proposition, Suttor said. To me, it was much more interesting to investigate Turing's ideas. How do you put those on stage and sing about them in a way that is moving, relevant, and dramatically exciting?

That's when Smita Krishnaswamy, an associate professor of genetics and computer science at Yale, introduced Suttor and his team to OpenAI, and several Zoom conversations with representatives of the company about the emerging technology followed. Working with Yale University Library's Digital Humanities Lab, the team built an interface to interact with an instance, or single occurrence, of GPT-2, training it with materials from Turing's archive and the text of books he's known to have read. For example, they knew Turing enjoyed George Bernard Shaw's play Back to Methuselah, and Snow White, the Brothers Grimm fairytale, so they shared those texts with the AI.

The team began asking GPT-2 the kinds of questions that Turing had investigated, such as Can machines think? They could control the temperature of the model's answers, or the creativity or randomness, and the number of characters the responses contained. They continually adjusted the settings on those controls and honed their questions to vary the answers.
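
For readers curious about what those controls look like in practice, here is a minimal sketch using the open-source Hugging Face transformers library with a stock GPT-2 checkpoint. The team's actual interface, fine-tuned model and settings are not public, so the model name, prompt and parameter values below are illustrative assumptions only.

```python
# Minimal sketch (not the team's actual interface): querying GPT-2 with an
# adjustable "temperature" (creativity/randomness) and a cap on response length.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")  # stock GPT-2; the team trained theirs on archive materials
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "Can machines think?"
inputs = tokenizer(prompt, return_tensors="pt")

# Higher temperature -> more surprising text; max_new_tokens limits the response length.
outputs = model.generate(
    **inputs,
    do_sample=True,
    temperature=1.2,
    max_new_tokens=60,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Raising the temperature pushes sampling toward less probable words, which is what yields the stranger, more "creative" lines; lowering it makes the output more predictable.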

Some of the responses are just jaw-droppingly beautiful, Suttor said. You are the applause of the galaxy, for instance, is something you might print on a T-shirt.

In one prompt, the team asked the AI technology to generate lyrics for a sexy song about the opera's subject, which yielded the lyrics to I'm a Turing Machine, Baby.

In composing the opera's music, Suttor and his team incorporated elements of Turing's work on morphogenesis, the biological process that develops cells and tissues, and phyllotaxis, the botanical study of mathematical patterns found in stems, leaves, and seeds. For instance, Suttor found that diagrams Turing had produced showing the spiral patterns of seeds in a sunflower head conform to a Fibonacci sequence, in which each number is the sum of the two before it. Suttor superimposed the circle of fifths, a method in music theory of organizing the 12 chromatic pitches as a sequence of perfect fifths, onto Turing's diagram, producing a unique mathematical, harmonic progression.

Suttor repeated the process using prime numbers (numbers greater than 1 that are not the product of two smaller numbers) in place of the Fibonacci sequence, which also produced a harmonic series. The team sequenced analog synthesizers to these harmonic progressions.
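
The article does not spell out the exact mapping, but the general idea of pairing a number sequence with the circle of fifths can be sketched in a few lines of Python; treat this as an illustration of the concept under stated assumptions, not a reconstruction of Suttor's score.

```python
# Rough sketch (assumptions, not Suttor's actual method): use Fibonacci numbers
# as positions on the circle of fifths (mod 12) to yield a harmonic progression.
NOTES = ["C", "G", "D", "A", "E", "B", "F#", "C#", "G#", "D#", "A#", "F"]  # circle of fifths

def fibonacci(n):
    """Return the first n Fibonacci numbers: 1, 1, 2, 3, 5, ..."""
    seq = [1, 1]
    while len(seq) < n:
        seq.append(seq[-1] + seq[-2])
    return seq[:n]

def fib_progression(n=8):
    # Each Fibonacci number indexes a pitch on the circle of fifths.
    return [NOTES[f % 12] for f in fibonacci(n)]

print(fib_progression())  # ['G', 'G', 'D', 'A', 'B', 'G#', 'G', 'D#']
```

Swapping the Fibonacci generator for a list of primes would give the second progression described above.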

It sounds a little like Handel on acid, he said.

The workshop version of I AM ALAN TURING was performed on three consecutive nights before a packed house in the CCAM Leeds Studio. The show, in its current form, consists of eight pieces of music that cross genres. Some are operatic with a chorus and soloist, some sound like pop music, and some evoke musical theater. While Suttor composed key structural pieces, the entire team has collaborated like a band while creating the music.

At the same time, the shows storytelling is delivered through various modes: opera, pop, and acted drama. At the beginning, an actor portraying Turing stands at a chalkboard drawing the sunflowers spiral pattern.

Another scene is drawn from a transcript of Turing's comments during a panel discussion, broadcast by the BBC, about the potential of artificial intelligence. In that conversation, Turing spars with a skeptical colleague who doesn't believe machines could reach or exceed human levels of intelligence.

Turing made that point during that BBC panel that he'd trained machines to do things, which took a lot of work, and they both learned something from the process, Suttor said. I think that captures our experience working with GPT to draft the script.

The show also contemplates Turings sexuality and the persecution he endured because of it. One sequence shows Turing enjoying a serene morning in his kitchen beside a partner, sipping tea and eating toast. His partner reads the paper. Turing scribbles in a notebook. A housecat makes its presence felt.

It's the life that Turing never had, Suttor said.

In high school, Turing had a close friendship with classmate Christopher Morcom, who succumbed to tuberculosis while both young men were preparing to attend Cambridge. Morcom has been described as Turings first true love.

Turing wrote a letter called Nature of Spirit to Christopher's mother in which he imagines the possibility of multiple universes and how the soul and the body are intrinsically linked.

In the opera, a line from the letter is recited following the scene, in Turing's kitchen, that showed a glimmer of domestic tranquility: Personally, I think that spirit is really eternally connected with matter but certainly not always by the same kind of body.

The show closed with an AI-generated text, seemingly influenced by Snow White: Look in the mirror, do you realize how beautiful you are? You are the applause of the galaxy.

The I AM ALAN TURING experimental opera was just one of many projects presented during Machine as Medium: Matter and Spirit, a two-day symposium that demonstrated the kinds of interdisciplinary collaborations driven by Yale's Center for Collaborative Arts and Media (CCAM).

An exhibition at the center's York Street headquarters highlighted works created with, or inspired by, various kinds of machines and technology, including holograms, motion capture, film and immersive media, virtual reality, and even an enormous robotic chisel. An exhibition tour allowed the artists to connect while describing their work to the public. The discussion among the artists and guests typifies the sense of community that CCAM aims to provide, said Lauren Dubowski '14 M.F.A., '23 D.F.A., CCAM's assistant director, who designed and led the event.

We work to create an environment where anyone can come in and be a part of the conversation, Dubowski said. CCAM is a space where people can see work that they might not otherwise see, meet people they might not otherwise meet and talk about the unique things happening here.

Follow this link:

Opera gives voice to Alan Turing with help of artificial intelligence - Yale News

Five reasons I would take INT D 161 – Artificial Intelligence Everywhere – University of Alberta

Over the past few years, artificial intelligence (AI) has gone from being something I would see in sci-fi movies and shows (always set in the future) to something that feels very present, both in my life and our society. I've learnt a bit about AI from playing with OpenAI's ChatGPT, checking out Midjourney and reading a few news articles in the media, but at this point, my knowledge of what AI is, how it actually works and where it can be applied feels pretty superficial.

When I learned about the Artificial Intelligence Everywhere course taught by computing science professor Dr. Adam White, I was really excited to check it out. It's really easy to fit into a lot of degree pathways: as an INT D (interdisciplinary) course it's open to almost every undergraduate student, and best of all it's offered both on-campus (in-person) and asynchronously (online) in Winter 2024, so people can choose what works best for them.

Here are my five big reasons why I'm considering registering for this course:

Anyone who's chatted with ChatGPT or asked DALL-E to make an image is often surprised by just how natural everything feels. Here are a few examples:

While I was used to the magic of computers, processing datasets often required some fiddling with Python or Excel macros, and this isn't so straightforward for everyone. Generative AI is massively powerful when it's implemented specifically to deal with large datasets, but just pasting a big blob of numbers into ChatGPT can get you some surprisingly useful insights (note: don't try this for anything that actually matters).

On many ualberta.ca web pages, I'm running into Vera, the generative-AI-powered chatbot assistant who often has the answer I'm looking for. I can text Vera like I text a friend, which feels a lot different than playing with search terms in Google.

And when I need a witty response to the group chat? I want to act like I'm coming up with all of my comedic bits on my own, but let's be honest: there might've been some AI help.

There are a lot of buzzwords being thrown around (dataset, library, iterative processing, neural networks, etc.), and I don't really understand what all of these mean or how they fit together. While I could spend some time in a Wikipedia rabbit hole trying to figure out what's going on, the chance to learn from a computing science professor with a strong background in the area sounds a lot more enticing. And these credits apply to a degree? Sign me up!

There's a lot of talk in the news about what AI means for our society - will it affect jobs? Will it affect learning? Will it go rogue? While I don't think this course will have ALL the answers to ALL of these topics, I'd like to be able to form some of my own opinions about AI, and I think a good foundational understanding of it is the right first step. There are famous quotes like the internet is a series of tubes which might show what happens when the people in charge of making major societal decisions about something don't understand it. And I definitely don't want to be caught saying, Well, AI is really just a lot of layered spreadsheets.

There are a ton of job titles like data scientist, CAD modeller, systems administrator, software engineer or web designer that all benefit from (or pretty much require) a strong foundational knowledge of computers and the internet. I'm sure that there are going to be a lot of new jobs related to both implementing AI and using it in the workplace, and as someone without a perfectly clear-cut career path, I want to be ready for these. I feel the foundational knowledge will be really useful to see if I want to pursue a career related to AI.

It wasn't actually that long ago when the internet was launched (the formal date is in the 80s, but it didn't really show up in most homes and schools in Canada until the 90s), and then social media was another big thing that followed in the 2000s. Now these things are everywhere, even though they were pretty niche in the beginning. AI seemed like a sci-fi movie trope until a few years ago, and now, almost everyone I know has used it (well, maybe not my grandparents). It's certainly the next ubiquitous thing, and I want to be ready.

Learn more about the course

More here:

Five reasons I would take INT D 161 - Artificial Intelligence Everywhere - University of Alberta

Formula One trials AI to tackle track limits breaches – Reuters

ABU DHABI, Nov 23 (Reuters) - Formula One's governing body is trialling artificial intelligence (AI) to tackle track limits breaches at this weekend's season-ending Abu Dhabi Grand Prix.

The Paris-based FIA said it would be using 'Computer Vision' technology that uses shape analysis to work out the number of pixels going past the track edge.

The AI will sort out the genuine breaches, where drivers cross the white line at the edge of the track with all four wheels, reducing the workload for the FIA's remote operations centre (ROC) and speeding up the response.
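
The FIA has not published how its Computer Vision system works internally, but the basic pixel-counting idea described here can be illustrated with a short sketch using OpenCV; the masks and frame below are toy values I have assumed for illustration, not the real trackside feeds or the FIA's code.

```python
# Illustrative sketch only (not the FIA's system): given a binary mask of a car
# and a binary mask of the track surface, count the car pixels lying off track.
import cv2
import numpy as np

def pixels_past_track_edge(car_mask: np.ndarray, track_mask: np.ndarray) -> int:
    """Both masks are 0/255 uint8 images of the same size."""
    off_track = cv2.bitwise_and(car_mask, cv2.bitwise_not(track_mask))
    return int(np.count_nonzero(off_track))

# Toy example: a 10x10 frame where the track occupies the left half.
track = np.zeros((10, 10), dtype=np.uint8)
track[:, :5] = 255
car = np.zeros((10, 10), dtype=np.uint8)
car[4:6, 3:8] = 255  # the car straddles the track edge

print(pixels_past_track_edge(car, track))  # 6 pixels are off track
```

A real system would presumably apply a threshold before flagging anything, for example only raising a case when a large enough region (all four wheels) sits fully beyond the white line, so that humans in the remote operations centre review only the plausible breaches.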

The July 2 Austrian Grand Prix was a high water mark for the sport with just four people having to process an avalanche of some 1,200 potential violations.

By the title-deciding Qatar weekend in October there were eight people assigned to assess track limits and monitor 820 corner passes, with 141 reports sent to race control who then deleted 51 laps.

Some breaches still went unpunished at October's U.S. Grand Prix in Austin, however.

Stewards said this month that their inability to properly enforce track limits violations at turn six was "completely unsatisfactory" and a solution needed to be found before the start of next season.

Tim Malyon, the FIA's head of remote operations and deputy race director, said the Computer Vision technology had been used effectively in medicine in areas such as scanning data from cancer screening.

"They dont want to use the Computer Vision to diagnose cancer, what they want to do is to use it to throw out the 80% of cases where there clearly is no cancer in order to give the well trained people more time to look at the 20%," he said.

"And thats what we are targeting."

Malyon said the extra Computer Vision layer would reduce the number of potential infringements being considered by the ROC, with still fewer then going on to race control for further action.

"The biggest imperative is to expand the facility and continue to invest in software, because thats how well make big strides," he said. "The final takeaway for me is be open to new technologies and continue to evolve.

"Ive said repeatedly that the human is winning at the moment in certain areas. That might be the case now but we do feel that ultimately, real time automated policing systems are the way forward."

Reporting by Alan Baldwin in London, editing by Toby Davis

More:

Formula One trials AI to tackle track limits breaches - Reuters

Inventorship and Patentability of Artificial Intelligence-Based … – Law.com

The concept of inventorship continues to evolve with the advent of artificial intelligence (AI). AI-generated inventions spur disagreement among Patent Offices across the world as to who, or what, qualifies as an inventor in patent applications. The quandary is whether an AI platform that assists in creating an invention should be named as an inventor in a patent. For example, an AI-powered system may be employed pharmaceutically to identify molecular compounds in the discovery of a newly invented drug, without much human involvement. Does such AI assistance rise to the level of inventorship? The answer is no, not in the United States, according to the U.S. Court of Appeals for the Federal Circuit. See Thaler v. Vidal, 43 F.4th 1207, 1213 (Fed. Cir. 2022).

In a hallmark decision, the Court of Appeals for the Federal Circuit recently established that artificial intelligence cannot be named as an inventor on a U.S. patent. In Thaler v. Vidal, Dr. Stephen Thaler represented that he developed an AI platform named DABUS (device for the autonomous bootstrapping of unified sentience) that created inventions. Thaler subsequently filed two patent applications at the U.S. Patent and Trademark Office (USPTO) for inventions generated by DABUS, attempting to list the DABUS-AI system as the inventor. The USPTO declined Thaler's request. The circuit court affirmed, holding that only a human being can be an inventor in the United States. See Thaler, 43 F.4th at 1209, 1213.

Here is the original post:

Inventorship and Patentability of Artificial Intelligence-Based ... - Law.com

Artificial Intelligence (AI) and Human Intelligence (HI) in the future of … – TechNode Global

In an era of rapid technological advancement, the dynamic interplay between Artificial Intelligence (AI) and Human Intelligence (HI) is shaping the future of education. As we peer into the horizon, it becomes increasingly clear that our approach to preparing the next generation for the challenges and opportunities that await must evolve. Here, we explore how AI and HI are set to play harmonious, complementary roles in the realm of preschool education and beyond.

Before we embark on this educational journey, it is crucial to envision the future professions that will shape the educational landscape. Two significant examples illustrate the evolving role of AI in various fields:

Future-proofing students for a rapidly evolving world means equipping them to be both AI-competent and distinctly human.

The collaboration between AI and human educators forms the core of the new educational landscape. This symbiotic relationship aims to merge the precision and efficiency of AI with the empathetic and creative guidance provided by human mentors. Here's how this partnership is poised to revolutionize education:

The integration of AI in preschool education unfolds a unique paradigm where young children can seamlessly transition between the roles of the tutee and the tutor, fostering a multifaceted and enriching educational experience. This concept manifests in three distinct applications of AI in education: as a tutor, as a tutee, and as a tool.

In essence, this multifaceted interaction with AI establishes a symbiotic relationship where the roles of teacher and student are fluid. By seamlessly transitioning between these roles, preschool children not only benefit from tailored guidance but also actively engage with AI as a creative collaborator and a student, cultivating a well-rounded skill set that extends beyond conventional learning approaches.

In summary, the future of education hinges on the collaborative efforts of human educators and AI. Together, they create an educational ecosystem that leverages the strengths of each, ensuring that students are not only equipped with knowledge but also possess the essential skills and attributes needed to thrive in a future where human qualities play a pivotal role.

Dr. Richard Yen, Harvard University PhD, founder of Ednovation, which develops edtech and operates Cambridge, ChildFirst, and Shaws Preschools in Singapore, South East Asia, and China.

Read more here:

Artificial Intelligence (AI) and Human Intelligence (HI) in the future of ... - TechNode Global