
Category Archives: Ai

A DEEPer (squared) dive into AI – Harvard Gazette

Posted: October 18, 2023 at 2:23 am

When an algorithm-driven microscopy technique developed in 2021 (and able to run on a fraction of the images earlier techniques required) isn't fast enough, what do you do?

Dive DEEPer, and square it. At least, that was the solution used by Dushan Wadduwage, John Harvard Distinguished Science Fellow at the FAS Center for Advanced Imaging.

Scientists have worked for decades to image the depths of a living brain. They first tried fluorescence microscopy, a century-old technique that relies on fluorescent molecules and light. However, the wavelengths weren't long enough, and they scattered before reaching an appreciable distance.

The invention of two-photon microscopy in 1990 allowed longer wavelengths of light to shine onto the tissue, causing fluorescent molecules to absorb not one but two photons. The longer wavelengths used to excite the molecules scattered less and could penetrate farther.

But two-photon microscopy can typically only excite one point on the tissue at a time, which makes for a long process requiring many measurements. A faster way to image would be to illuminate multiple points at once using a wider field of view, but this, too, has its drawbacks.

"If you excite multiple points at the same time, then you can't resolve them," Wadduwage said. "When it comes out, all the light is scattered, and you don't know where it comes from."

To overcome this difficulty, Wadduwage's group began using a special type of microscopy, described in Science Advances in 2021. The team excited multiple points on the tissue in a wide-field mode, using different pre-encoded excitation patterns. This technique, called De-scattering with Excitation Patterning, or DEEP, works with the help of a computational algorithm.

"The idea is that we use multiple excitation codes, or multiple patterns to excite, and we detect multiple images," Wadduwage said. "We can then use the information about the excitation patterns and the detected images and computationally reconstruct a clean image."
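To make the reconstruction idea concrete, here is a minimal Python sketch of the general approach, not the authors' published algorithm: it assumes binary random excitation patterns, a Gaussian blur standing in for tissue scattering, and a simple per-pixel least-squares decode. The real DEEP pipeline also models and inverts the scattering physics.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
n, k = 32, 16                                # 32x32 toy image, 16 excitation patterns
x_true = rng.random((n, n))                  # stand-in for the fluorophore map
patterns = rng.integers(0, 2, (k, n, n)).astype(float)

def forward(x, p, sigma=1.5):
    # excite only where the pattern is on, then blur to mimic tissue scattering
    return gaussian_filter(p * x, sigma)

measurements = np.stack([forward(x_true, p) for p in patterns])

# Per-pixel least-squares decode: each measurement is roughly pattern * image,
# so x_hat = sum_k(p_k * y_k) / sum_k(p_k^2); a real method would also invert
# the scattering itself.
num = np.einsum('kij,kij->ij', patterns, measurements)
den = np.einsum('kij,kij->ij', patterns, patterns) + 1e-6
x_hat = num / den

rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
print(f"relative reconstruction error: {rel_err:.3f}")
```

Because the excitation patterns are known, each pixel's intensity can be estimated from how its measurements co-vary with the patterns, which is what makes wide-field excitation recoverable at all.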

The results are comparable in quality to images produced by point-scanning two-photon microscopy. Yet they can be produced with just hundreds of images, rather than the hundreds of thousands typically needed for point-scanning. With the new technique, Wadduwage's group was able to look as far as 300 microns deep into live mouse brains.

Still not good enough. Wadduwage wondered: Could DEEP produce a clear image with only tens of images?

In a recent paper published in Light: Science and Applications, he turned to machine learning to make the imaging technique even faster. He and his co-authors used AI to train a neural network-driven algorithm on multiple sets of images, eventually teaching it to reconstruct a perfectly resolved image with only 32 scattered images (rather than the 256 reported in their first paper). They named the new method DEEP-squared: Deep learning powered de-scattering with excitation patterning.

The team took images produced by typical two-photon point-scanning microscopy, providing what Wadduwage called the "ground truth." The DEEP microscope then used physics to make a computational model of the image formation process and put it to work simulating scattered input images. These trained their DEEP-squared AI model. Once AI produced reconstructed images that resembled Wadduwage's ground-truth reference, the researchers used it to capture new images of blood vessels in a mouse brain.
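As a rough illustration of that training loop, the sketch below simulates scattered 32-image stacks from stand-in ground-truth images with a toy forward model and trains a small convolutional network to invert them. The network shape, the depthwise-blur forward model, and the random data are assumptions chosen only to keep the example self-contained; they are not the published DEEP-squared architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def toy_forward_model(clean, patterns, sigma=3):
    """Simulate scattered measurements: excite with each pattern, then blur."""
    k = patterns.shape[0]
    excited = clean.unsqueeze(1) * patterns              # (B, K, H, W)
    ks = 2 * sigma + 1
    blur = torch.ones(k, 1, ks, ks) / (ks * ks)          # crude scattering kernel
    return F.conv2d(excited, blur, padding=sigma, groups=k)

class Deep2Net(nn.Module):
    """Small CNN mapping a K-image scattered stack to one clean image."""
    def __init__(self, k=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(k, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x).squeeze(1)

k, h = 32, 64
patterns = (torch.rand(k, h, h) > 0.5).float()           # pre-encoded excitation patterns
model = Deep2Net(k)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(200):                                   # tiny demo loop
    clean = torch.rand(8, h, h)                           # stand-in for ground-truth images
    scattered = toy_forward_model(clean, patterns)        # simulated scattered inputs
    loss = F.mse_loss(model(scattered), clean)
    opt.zero_grad(); loss.backward(); opt.step()
```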

"It is like a step-by-step process," Wadduwage said. "In the first paper we worked on the optics side and reached a good working state, and in the second paper we worked on the algorithm side and tried to push the boundary all the way and understand the limits. We now have a better understanding that this is probably the best we can do with the current data we acquire."

Still, Wadduwage has more ideas for boosting the capabilities of DEEP-squared, including improving instrument design to acquire data faster. He said DEEP-squared exemplifies cross-disciplinary cooperation, as will any future innovations on the technology.

"Biologists who did the animal experiments, physicists who built the optics, and computer scientists who developed the algorithms all came together to build one solution," he said.

Originally posted here:

A DEEPer (squared) dive into AI Harvard Gazette - Harvard Gazette


Florida bar weighs whether lawyers using AI need client consent – Reuters

Posted: at 2:23 am

AI (Artificial Intelligence) letters are placed on a computer motherboard in this illustration taken June 23, 2023. REUTERS/Dado Ruvic

Oct 16 (Reuters) - Florida lawyers might soon be required to get their clients' consent before using artificial intelligence on their legal matters.

The Florida Bar is crafting a new advisory opinion focused on the use of AI and has asked Florida lawyers to weigh in. Florida bar leaders have tasked the Florida Board Review Committee on Professional Ethics with creating rules around the use of generative AI, such as OpenAI's ChatGPT, Google Bard or Microsoft's Bing.

In addition to a requirement that lawyers obtain their clients' consent before using AI, the committee will also consider whether such AI is subject to the same lawyer supervision requirements as non-lawyer assistants, and whether lawyer fees should be lower when AI is used. It will also look at whether law firms should be allowed to advertise their generative AI as superior or unique, and whether lawyers may encourage clients to rely on due diligence reports generated by AI.

A number of federal judges have already said that lawyers must disclose when they have used artificial intelligence in matters that appear on their dockets. Those requirements came after a case in which two New York lawyers submitted a legal brief that included six fictitious case citations generated by ChatGPT. Lawyers at Levidow, Levidow & Oberman in June were ordered to pay a $5,000 fine. They had said they were unaware that the technology could make up cases.

But Florida looks to be the first jurisdiction considering a consent rule for lawyers using AI. A raft of legal tech companies have unveiled new AI products in recent months that do everything from legal research to creating legal documents.

Florida bar members have until Dec. 1 to submit comments on the new AI advisory opinion. The committee drafting the opinion is slated to meet Nov. 30.

Read more:

New York lawyers sanctioned for using fake ChatGPT cases in legal brief

Another US judge says lawyers must disclose AI use


Karen Sloan reports on law firms, law schools, and the business of law. Reach her at karen.sloan@thomsonreuters.com

Read this article:

Florida bar weighs whether lawyers using AI need client consent - Reuters


Cognizant and Vianai Systems Announce Strategic Partnership to … – PR Newswire

Posted: at 2:23 am

Cognizant and Vianai will leverage conversational Large Language Model capabilities to help clients better explore their data, predict outcomes and unlock actionable insights

TEANECK, N.J. and PALO ALTO, Calif., Oct. 17, 2023 /PRNewswire/ -- Cognizant (Nasdaq: CTSH) and Vianai Systems, Inc. today announced the launch of a global strategic go-to-market partnership to accelerate human-centered generative AI offerings. This partnership leverages Vianai's hila Enterprise platform alongside Cognizant's Neuro AI, creating a seamless, unified interface to unlock predictive, AI-driven decision making. For both companies, this partnership is expected to enable growth opportunities in their respective customer bases, and through Cognizant's plans to resell Vianai solutions.

Vianai's hila Enterprise provides clients a platform to safely and reliably deploy any large language model (LLM), optimized and fine-tuned to speak to their systems of record (both structured and unstructured data), enabling clients to better analyze, discover and explore their data leveraging the conversational power of generative AI.

"Being able to monitor and improve LLM performance is critical to unlocking the true power of gen AI," said Ravi Kumar S

In addition, vianops, the LLM monitoring capability within hila Enterprise, is a next-generation monitoring platform for AI-driven enterprises; it monitors and analyzes LLM performance to proactively uncover opportunities to continually improve the reliability and trustworthiness of LLMs for clients.

"In every business around the world, there is a hunger to harness the power of AI, but serious challenges around hallucinations, price-performance and lack of trust are holding enterprises back. That's why we built hila Enterprise, a platform that delivers trusted, human-centered applications of AI," said Dr. Vishal Sikka, Founder and Chief Executive Officer of Vianai Systems. "In Cognizant, we have found a strategic partner with a distinguished history of delivering innovative services. Together we will deliver transformative applications of AI that businesses can truly rely on, built on the trusted foundation of hila Enterprise and Cognizant's Neuro AI platform."

"Being able to monitor and improve LLM performance is critical to unlocking the true power of generative AI," said Ravi Kumar S, Cognizant's Chief Executive Officer. "With Vianai's platform and our Neuro AI platform, we believe we will be able to offer our clients a high-quality solution to support seamless data analysis with predictive decision-making capabilities."

Dr. Sikka announced this collaboration and partnership during Cognizant's Discovery Summit in Dana Point, California on Tuesday, October 17.

About Cognizant

Cognizant (Nasdaq: CTSH) engineers modern businesses. We help our clients modernize technology, reimagine processes and transform experiences so they can stay ahead in our fast-changing world. Together, we're improving everyday life. See how at www.cognizant.com or @cognizant.

About Vianai Systems, Inc.

Vianai Systems, Inc. is a human-centered AI (H+AI) platform and products company focused on bringing trustworthy, responsible and transformative AI systems to enterprises worldwide. The company's hila Enterprise platform enables enterprises to safely and reliably deploy large language enterprise solutions, leveraging its Zero Hallucination technologies, veryLLM open-source code, breakthrough fine-tuning and optimization techniques as well as its LLM monitoring capabilities to analyze, discover and explore data within systems of record, in natural language. The hila Enterprise platform helps enterprises minimize the risks of AI, while taking full advantage of the transformation potential of reliable AI systems. A showcase of the underlying capabilities of hila Enterprise can be accessed for free by anyone to ask any financial question about publicly traded companies at www.hila.ai. Follow @VianaiSystems on Twitter and LinkedIn.

SOURCE Cognizant Technology Solutions

See more here:

Cognizant and Vianai Systems Announce Strategic Partnership to ... - PR Newswire


How AI could speed up scientific discoveries, from proteins to … – NPR

Posted: at 2:23 am

Scientists are using AI to design synthetic proteins with hopes it will speed up the discovery process. (Ian C Haydon/UW Institute for Protein Design)

Artificial intelligence can code computer programs, draw pictures and even take notes for doctors. Now, researchers are excited about the possibility that AI speeds up the scientific process, from quicker drug design to someday developing new hypotheses.

Science correspondent Geoff Brumfiel talks to host Regina Barber about his visit to one protein lab already seeing promising results.

Click here to read Geoff's full story.

Listen to Short Wave on Spotify, Apple Podcasts and Google Podcasts.

Have an AI query? Send us your questions to shortwave@npr.org.

This episode was produced by Berly McCoy and edited by Rebecca Ramirez. Geoff Brumfiel checked the facts and Maggie Luthar was the audio engineer.

Read more:

How AI could speed up scientific discoveries, from proteins to ... - NPR


AI challenge to deliver better healthcare | Western Australian … – Government of Western Australia

Posted: at 2:23 am

The Cook Government's Future Health Research and Innovation (FHRI) Fund is challenging innovators to look at Generative artificial intelligence (AI) for ways to improve healthcare delivery in Western Australia.

Grants worth a total of $937,312 have been awarded to fund 19 projects exploring the opportunities and benefits of AI in healthcare, from reducing administrative burden to increasing the accuracy of diagnosis and treatment.

Foresight Medical's Professor Yogesan Kanagasingam is working on a non-invasive test for an early diagnosis of Alzheimer's disease. Currently, the disease is definitively diagnosed through brain imaging techniques that rely on significant visible brain changes to occur before they can be detected.

The new approach will use Generative AI to measure the constriction and dilation of the pupil in response to light flashes, a process which is altered in people with Alzheimer's due to the degeneration of cholinergic neurons.

This test can be carried out using a smartphone and has the potential to revolutionise early detection of Alzheimer's, enabling earlier interventions and potentially changing the course of cognitive decline.

South Metropolitan Health Service's Dr Jafri Kuthubutheen is looking to use AI to improve access to ear health care in rural and remote WA.

This project involves developing a Generative AI model to identify ear conditions based on otoscopic images taken via a video otoscope, improving access to care for patients who need specialist ENT services in the Kimberley.

It's a potential game-changer for people living in the remote regions of the State where ear disease is disproportionately represented in Aboriginal and Torres Strait Islander populations and access to timely and accurate specialist care is limited.

In the first stage of the challenge, successful applicants will each receive up to $50,000 to do a feasibility study using Generative AI to address challenges across health and medical research, health and medical innovation, healthcare service delivery and health and medical education and training.

Stage 1 grant recipients can then apply for Stage 2 with four innovators awarded up to $500,000 to fully develop their Generative AI solution and implement it in WA.

The FHRI Fund is administered through the Department of Health's Office of Medical Research and Innovation with the full list of recipients available on the FHRI Fund website.

Comments attributed to Medical Research Minister Stephen Dawson:

"The Cook Government is committed to supporting and developing important health and medical innovation and I will be interested to see what opportunities Generative AI can bring to the sector.

"This technology involves uncertainties and risks, but also has the potential to dramatically increase efficiency, improve the quality of care, and create value for health care organisations.

"It is important to explore and assess whether this cutting-edge technology will help us to anticipate public health needs and interventions and improve the way we execute our healthcare programs here in WA."

Follow this link:

AI challenge to deliver better healthcare | Western Australian ... - Government of Western Australia


Henry Kissinger: The Path to AI Arms Control – Foreign Affairs Magazine

Posted: at 2:23 am

This year marks the 78th anniversary of the end of the deadliest war in history and the beginning of the longest period in modern times without great-power war. Because World War I had been followed just two decades later by World War II, the specter of World War III, fought with weapons that had become so destructive they could theoretically threaten all of humankind, hung over the decades of the Cold War that followed. When the United States' atomic destruction of Hiroshima and Nagasaki compelled Japan's immediate unconditional surrender, no one thought it conceivable that the world would see a de facto moratorium on the use of nuclear weapons for the next seven decades. It seemed even more improbable that almost eight decades later, there would be just nine nuclear weapons states. The leadership demonstrated by the United States over these decades in avoiding nuclear war, slowing nuclear proliferation, and shaping an international order that provided decades of great-power peace will go down in history as one of America's most significant achievements.

Today, as the world confronts the unique challenges posed by another unprecedented and in some ways even more terrifying technology, artificial intelligence, it is not surprising that many have been looking to history for instruction. Will machines with superhuman capabilities threaten humanity's status as master of the universe? Will AI undermine nations' monopoly on the means of mass violence? Will AI enable individuals or small groups to produce viruses capable of killing on a scale that was previously the preserve of great powers? Could AI erode the nuclear deterrents that have been a pillar of today's world order?

At this point, no one can answer these questions with confidence. But as we have explored these issues for the last two years with a group of technology leaders at the forefront of the AI revolution, we have concluded that the prospects that the unconstrained advance of AI will create catastrophic consequences for the United States and the world are so compelling that leaders in governments must act now. Even though neither they nor anyone else can know what the future holds, enough is understood to begin making hard choices and taking actions today, recognizing that these will be subject to repeated revision as more is discovered.

As leaders make these choices, lessons learned in the nuclear era can inform their decisions. Even adversaries racing to develop and deploy an unprecedented technology that could kill hundreds of millions of people nonetheless discovered islands of shared interests. As duopolists, both the United States and the Soviet Union had an interest in preventing the rapid spread of this technology to other states that could threaten them. Both Washington and Moscow recognized that if nuclear technology fell into the hands of rogue actors or terrorists within their own borders, it could be used to threaten them, and so each developed robust security systems for their own arsenals. But since each could also be threatened if rogue actors in their adversary's society acquired nuclear weapons, both found it in their interests to discuss this risk with each other and describe the practices and technologies they developed to ensure this did not happen. Once the arsenals of their nuclear weapons reached a level at which neither could attack the other without triggering a response that would destroy itself, they discovered the paradoxical stability of mutual assured destruction (MAD). As this ugly reality was internalized, each power learned to limit itself and found ways to persuade its adversary to constrain its initiatives in order to avoid confrontations that could lead to a war. Indeed, leaders of both the U.S. and the Soviet government came to realize that avoiding a nuclear war of which their nation would be the first victim was a cardinal responsibility.

The challenges presented by AI today are not simply a second chapter of the nuclear age. History is not a cookbook with recipes that can be followed to produce a soufflé. The differences between AI and nuclear weapons are at least as significant as the similarities. Properly understood and adapted, however, lessons learned in shaping an international order that has produced nearly eight decades without great-power war offer the best guidance available for leaders confronting AI today.

At this moment, there are just two AI superpowers: the United States and China are the only countries with the talent, research institutes, and mass computing capacity required to train the most sophisticated AI models. This offers them a narrow window of opportunity to create guidelines to prevent the most dangerous advances and applications of AI. U.S. President Joe Biden and Chinese President Xi Jinping should seize this opportunity by holding a summit, perhaps immediately after the Asia-Pacific Economic Cooperation's meeting in San Francisco in November, where they could hold extended, direct, face-to-face discussions on what they should see as one of the most consequential issues confronting them today.

After atomic bombs devastated Japanese cities in 1945, the scientists who had opened Pandora's atomic box saw what they had created and recoiled in horror. Robert Oppenheimer, the principal scientist of the Manhattan Project, recalled a line from the Bhagavad Gita: "Now I am become Death, the destroyer of worlds." Oppenheimer became such an ardent advocate of radical measures to control the bomb that he was stripped of his security clearance. The Russell-Einstein Manifesto, signed in 1955 by 11 leading scientists including not just Bertrand Russell and Albert Einstein but also Linus Pauling and Max Born, warned of the frightening powers of nuclear weapons and implored world leaders never to use them.

Although U.S. President Harry Truman never expressed second thoughts about his decision, neither he nor members of his national security team had a viable view of how this awesome technology could be integrated into the postwar international order. Should the United States attempt to maintain its monopoly position as the sole atomic power? Was that even feasible? In pursuit of the objective, could the United States share its technology with the Soviet Union? Did survival in a world with this weapon require leaders to invent some authority superior to national governments? Henry Stimson, Truman's secretary of war (who had just helped achieve victory over both Germany and Japan), proposed that the United States share its monopoly of the atomic bomb with Soviet leader Joseph Stalin and British Prime Minister Winston Churchill to create a great-power condominium that would prevent the spread of nuclear weapons. Truman created a committee, chaired by U.S. Undersecretary of State Dean Acheson, to develop a strategy for pursuing Stimson's proposal.

Acheson essentially agreed with Stimson: the only way to prevent a nuclear arms race ending in catastrophic war would be to create an international authority that would be the sole possessor of atomic weapons. This would require the United States to share its nuclear secrets with the Soviet Union and other members of the UN Security Council, transfer its nuclear weapons to a new UN atomic development authority, and forbid all nations from developing weapons or constructing their own capability to produce weapons-grade nuclear material. In 1946, Truman sent the financier and presidential adviser Bernard Baruch to the UN to negotiate an agreement to implement Acheson's plan. But the proposal was categorically rejected by Andrei Gromyko, the Soviet representative to the UN.

Three years later, when the Soviet Union succeeded in its crash effort to build its own bomb, the United States and the Soviet Union entered what people were starting to call the Cold War: a competition by all means short of bombs and bullets. A central feature of this competition was the drive for nuclear superiority. At their heights, the two superpowers' nuclear arsenals included more than 60,000 weapons, some of them warheads with more explosive power than all the weapons that had been used in all the wars in recorded history. Experts debated whether an all-out nuclear war would mean the end of every living soul on earth.

Over the decades, Washington and Moscow have spent trillions of dollars on their nuclear arsenals. The current annual budget for the U.S. nuclear enterprise exceeds $50 billion. In the early decades of this race, both the United States and the Soviet Union made previously unimaginable leaps forward in the hope of obtaining a decisive advantage. Increases in weapons' explosive power required the creation of new metrics: from kilotons (equivalent to the energy released by 1,000 tons of TNT) for the original fission weapons to megatons (equivalent to that released by one million tons) for hydrogen fusion bombs. The two sides invented intercontinental missiles capable of delivering warheads to targets on the other side of the planet in 30 minutes, satellites circling the globe at a height of hundreds of miles with cameras that could identify the coordinates of targets within inches, and defenses that could in essence hit a bullet with a bullet. Some observers seriously imagined defenses that would render nuclear weapons, as President Ronald Reagan put it, "impotent and obsolete."

In attempting to shape these developments, strategists developed a conceptual arsenal that distinguished between first and second strikes. They clarified the essential requirements for a reliable retaliatory response. And they developed the nuclear triad (submarines, bombers, and land-based missiles) to ensure that if an adversary were to discover one vulnerability, other components of the arsenal would remain available for a devastating response. Perception of risks of accidental or unauthorized launches of weapons spurred the invention of permissive action links: electronic locks embedded in nuclear weapons that prevented them from being activated without the right nuclear launch codes. Redundancies were designed to protect against technological breakthroughs that might jeopardize command-and-control systems, which motivated the invention of a computer network that evolved into the Internet. As the strategist Herman Kahn famously put it, they were "thinking about the unthinkable."

At the core of nuclear strategy was the concept of deterrence: preventing an adversary from attacking by threatening costs out of proportion to any conceivable benefit. Successful deterrence, it came to be understood, required not just capability but also credibility. The potential victims needed not only the means to respond decisively but also the will. Strategists refined this basic idea further with concepts such as extended deterrence, which sought to employ a political mechanism, a pledge of protection via alliance, to persuade key states not to build their own arsenals.

By 1962, when U.S. President John F. Kennedy confronted Soviet leader Nikita Khrushchev over nuclear-tipped missiles that the Soviets had placed in Cuba, the U.S. intelligence community estimated that even if Kennedy launched a successful first strike, the Soviet retaliatory response with their existing capabilities might kill 62 million Americans. By 1969, when Richard Nixon became president, the United States needed to rethink its approach. One of us, Kissinger, later described the challenge: "Our defense strategies formed in the period of our superiority had to be reexamined in the harsh light of the new realities. . . . No bellicose rhetoric could obscure the fact that existing nuclear stockpiles were enough to destroy mankind. . . . There could be no higher duty than to prevent the catastrophe of nuclear war."

To make this condition vivid, strategists had created the ironic acronym MAD, the essence of which was summarized by Reagan's oft-repeated one-liner: "A nuclear war cannot be won, and must therefore never be fought." Operationally, MAD meant mutual assured vulnerability. While both the United States and the Soviet Union sought to escape this condition, they eventually recognized that they were unable to do so and had to fundamentally reconceptualize their relationship. In 1955, Churchill had noted the supreme irony in which "safety will be the sturdy child of terror, and survival the twin brother of annihilation." Without denying differences in values or compromising vital national interests, deadly rivals had to develop strategies to defeat their adversary by every means possible except all-out war.

One pillar of these strategies was a series of both tacit and explicit constraints now known as arms control. Even before MAD, when each superpower was doing everything it could to achieve superiority, they discovered areas of shared interests. To reduce the risk of mistakes, the United States and the Soviet Union agreed in informal discussions not to interfere with the other's surveillance of their territory. To protect their citizens from radioactive fallout, they banned atmospheric testing. To avoid crisis instability (when one side feels the need to strike first in the belief that the other side is about to), they agreed in the 1972 Anti-Ballistic Missile Treaty to limit missile defenses. In the Intermediate-Range Nuclear Forces Treaty, signed in 1987, Reagan and Soviet leader Mikhail Gorbachev agreed to eliminate intermediate-range nuclear forces. The Strategic Arms Limitation Talks, which resulted in treaties signed in 1972 and 1979, limited increases in missile launchers, and later, the Strategic Arms Reduction Treaty (START), signed in 1991, and the New START, signed in 2010, reduced their numbers. Perhaps most consequentially, the United States and the Soviet Union concluded that the spread of nuclear weapons to other states posed a threat to both of them and ultimately risked nuclear anarchy. They brought about what is now known as the nonproliferation regime, the centerpiece of which is the 1968 Nuclear Nonproliferation Treaty, through which 186 countries today have pledged to refrain from developing their own nuclear arsenals.

In current proposals about ways to contain AI, one can hear many echoes of this past. The billionaire Elon Musk's demand for a six-month pause on AI development, the AI researcher Eliezer Yudkowsky's proposal to abolish AI, and the psychologist Gary Marcus's demand that AI be controlled by a global governmental body essentially repeat proposals from the nuclear era that failed. The reason is that each would require leading states to subordinate their own sovereignty. Never in history has one great power fearing that a competitor might apply a new technology to threaten its survival and security forgone developing that technology for itself. Even close U.S. allies such as the United Kingdom and France opted to develop their own national nuclear capabilities in addition to relying on the U.S. nuclear umbrella.

To adapt lessons from nuclear history to address the current challenge, it is essential to recognize the salient differences between AI and nuclear weapons. First, whereas governments led the development of nuclear technology, private entrepreneurs, technologists, and companies are driving advances in AI. Scientists working for Microsoft, Google, Amazon, Meta, OpenAI, and a handful of smaller startups are far ahead of any analogous effort in government. Furthermore, these companies are now locked in a gladiatorial struggle among themselves that is unquestionably driving innovation, but at a cost. As these private actors make tradeoffs between risks and rewards, national interests are certain to be underweighted.

Second, AI is digital. Nuclear weapons were difficult to produce, requiring a complex infrastructure to accomplish everything from enriching uranium to designing nuclear weapons. The products were physical objects and thus countable. Where it was feasible to verify what the adversary was doing, constraints emerged. AI represents a distinctly different challenge. Its major evolutions occur in the minds of human beings. Its applicability evolves in laboratories, and its deployment is difficult to observe. Nuclear weapons are tangible; the essence of artificial intelligence is conceptual.

A screen showing Chinese and U.S. flags, Beijing, July 2023

Third, AI is advancing and spreading at a speed that makes lengthy negotiations impossible. Arms control developed over decades. Restraints for AI need to occur before AI is built into the security structure of each society, that is, before machines begin to set their own objectives, which some experts now say is likely to occur in the next five years. The timing demands first a national, then an international, discussion and analysis, as well as a new dynamic in the relationship between government and the private sector.

Fortunately, the major companies that have developed generative AI and made the United States the leading AI superpower recognize that they have responsibilities not just to their shareholders but also to the country and humanity at large. Many have already developed their own guidelines for assessing risk before deployment, reducing bias in training data, and restricting dangerous uses of their models. Others are exploring ways to circumscribe training and impose "know your customer" requirements for cloud computing providers. A significant step in the right direction was the initiative the Biden administration announced in July that brought leaders of seven major AI companies to the White House for a joint pledge to establish guidelines to ensure safety, security, and trust.

As one of us, Kissinger, has pointed out in The Age of AI, it is an urgent imperative to create a systematic study of the long-range implications of AI's evolving, often spectacular inventions and applications. Even while the United States is more divided than it has been since the Civil War, the magnitude of the risks posed by the unconstrained advance of AI demands that leaders in both government and business act now. Each of the companies with the mass computing capability to train new AI models and each company or research group developing new models should create a group to analyze the human and geopolitical implications of its commercial AI operations.

The challenge is bipartisan and requires a unified response. The president and Congress should in that spirit establish a national commission consisting of distinguished nonpartisan former leaders in the private sector, Congress, the military, and the intelligence community. The commission should propose more specific mandatory safeguards. These should include requirements to assess continuously the mass computing capabilities needed to train AI models such as GPT-4 and that before companies release a new model, they stress test it for extreme risks. Although the task of developing rules will be demanding, the commission would have a model in the National Security Commission on Artificial Intelligence. Its recommendations, released in 2021, provided impetus and direction for the initiatives that the U.S. military and U.S. intelligence agencies are undertaking in the AI rivalry with China.

Even at this early stage, while the United States is still creating its own framework for governing AI at home, it is not too early to begin serious conversations with the world's only other AI superpower. China's national champions in the tech sector, Baidu (the country's top search engine), ByteDance (the creator of TikTok), Tencent (the maker of WeChat), and Alibaba (the leader in e-commerce), are building proprietary Chinese-language analogues of ChatGPT, although the Chinese political system has posed particular difficulties for AI. While China still lags in the technology to make advanced semiconductors, it possesses the essentials to power ahead in the immediate future.

Biden and Xi should thus meet in the near future for a private conversation about AI arms control. November's Asia-Pacific Economic Cooperation meeting in San Francisco offers that opportunity. Each leader should discuss how he personally assesses the risks posed by AI, what his country is doing to prevent applications that pose catastrophic risks, and how his country is ensuring that domestic companies are not exporting risks. To inform the next round of their discussions, they should create an advisory group consisting of U.S. and Chinese AI scientists and others who have reflected on the implications of these developments. This approach would be modeled on existing Track II diplomacy in other fields, where groups are composed of individuals chosen for their judgment and fairness although not formally endorsed by their government. From our discussions with key scientists in both governments, we are confident that this can be a highly productive discussion.

U.S. and Chinese discussions and actions on this agenda will form only part of the emerging global conversation on AI, including the AI Safety Summit, which the United Kingdom will host in November, and the ongoing dialogue at the UN. Since every country will be seeking to employ AI to enhance the lives of its citizens while ensuring the safety of its own society, in the longer run, a global AI order will be required. Work on it should begin with national efforts to prevent the most dangerous and potentially catastrophic consequences of AI. These initiatives should be complemented by dialogue between scientists of various countries engaged in developing large AI models and members of the national commissions such as the one proposed here. Formal governmental negotiations, initially among countries with advanced AI programs, should seek to establish an international framework, along with an international agency comparable to the International Atomic Energy Agency.

If Biden, Xi, and other world leaders act now to face the challenges posed by AI as squarely as their predecessors did in addressing nuclear threats in earlier decades, will they be as successful? Looking at the larger canvas of history and growing polarization today, it is difficult to be optimistic. Nonetheless, the incandescent fact that we have now marked 78 years of peace among the nuclear powers should serve to inspire everyone to master the revolutionary, inescapable challenges of our AI future.


Originally posted here:

Henry Kissinger: The Path to AI Arms Control - Foreign Affairs Magazine


Stability AI releases StableStudio in latest push for open-source AI – The Verge

Posted: May 18, 2023 at 2:01 am

Stability AI is releasing an open-source version of DreamStudio, a commercial interface for the company's AI image generator model, Stable Diffusion. In a press statement on Wednesday, Stability AI said the new release, dubbed StableStudio, marks a fresh chapter for the platform and will serve as a showcase for the company's dedication to advancing open-source development.

Making an open-source version of DreamStudio carries benefits for Stability AI. It allows community developers to improve and experiment with the interface, with the company potentially reaping the rewards conferred by these improvements. Stability AI stressed community building in its press release, noting how "from enabling local-first development, to experimenting with a new plugin system, we've tried hard to make things extensible for external developers."

Stability AI's approach to open-source development has helped drive interest in its products

Stability AI has previously leaned hard on its open-source approach to create interest in its products. Various versions of Stable Diffusion have been freely available to download and tinker with since the model was publicly released back in August 2022, and last month, the company released a suite of open-source large language models (LLMs) collectively called StableLM. Stability AI's founder and CEO, Emad Mostaque, has been outspoken about the importance of making AI tools open source in order to increase public trust, claiming that "open models will be essential for private data" in a Zoom call with the press last month.

However, the company's approach sometimes seems to lack direction, too. For example, StableStudio will be available alongside DreamStudio and potentially compete with it. The company has previously said it plans to generate revenue by creating customized versions of DreamStudio for corporate clients, but it's not clear how successful this strategy has been. Recent reports suggest the firm is burning through cash and note that its most important models, like Stable Diffusion, were built in collaboration with other parties.

Go here to read the rest:

Stability AI releases StableStudio in latest push for open-source AI - The Verge


Google CEO Sundar Pichai Predicts That This Profession Will Be … – The Motley Fool

Posted: at 2:01 am

Companies across industries are using artificial intelligence (AI). And that has people thinking about what AI means for the future of many professions. Alphabet (GOOG 1.16%) (GOOGL 1.11%), the parent company of Google, is no stranger to AI. The company uses AI throughout its business -- from optimizing its search capabilities to better serving its advertising customers.

Recently, Alphabet chief executive officer Sundar Pichai spoke about how AI is set to revolutionize the working world. He even said one industry in particular might be transformed by the technology. And, surprisingly, Pichai says this may not result in layoffs -- but instead, more jobs.

Pichai told The Verge in an interview that AI may make the law profession better in certain ways and may result in more people actually becoming lawyers. Law firms today already use AI to help draft documents, verify contracts, and complete other tasks. The idea is AI won't take away lawyers' jobs. Instead, it will help them and their staff do certain things more quickly -- and give them more time to focus on more complicated parts of the job.

"I'm willing to almost bet 10 years from now, maybe there are more lawyers," Pichai said in the interview.

Pichai said that some of the jitters people have about AI today are a lot like the worries they had years ago with the introduction of new technology like computers or the Internet. Yes, these technologies may have hurt some jobs, but they've also introduced many new ones and resulted in overall progression in the employment market.

Pichai is doing his part to advance the use of AI. He's made it a focus at Alphabet. For instance, the company uses it to help the Google search engine better understand the meaning of each vocal and written search. Another example, this time in the area of advertising: An AI tool predicts how much advertisers should spend to meet their campaign goals.

So, the use of AI isn't necessarily about eliminating jobs. It's about improving how we do them. And this makes AI a great new technology to invest in today. Alphabet itself is a solid AI stock to buy -- and investors will like the company for other reasons too, such as its giant market share in the search market. Google Search holds 92% of that market. And its use of AI may keep that going.

You also can invest in AI by buying shares of e-commerce giant Amazon (AMZN 1.85%). The company has used AI for years -- that's what helps Amazon recommend products to you that you may like, for example. And last year, Amazon even bought a company called Snackable.AI. Amazon intends to use the company's machine learning tools to boost its streaming and podcast capabilities.

But AI isn't limited to tech companies. You'll find companies in healthcare turning their attention to the area too. Vaccine maker Moderna (MRNA 0.37%) used AI to help it in the development of the coronavirus vaccine -- and is using it to make research and development more efficient. Moderna even recently signed a deal with International Business Machines to use that company's AI platform in the drug development process. And Medtronic is using AI in many ways, such as predicting outcomes in spine surgery.

That means there are plenty of stocks today offering you opportunities for AI investing. So, this technology may transform jobs -- as Pichai says -- and it may also open the door to new investment possibilities.

Suzanne Frey, an executive at Alphabet, is a member of The Motley Fool's board of directors. John Mackey, former CEO of Whole Foods Market, an Amazon subsidiary, is a member of The Motley Fool's board of directors. Adria Cimino has positions in Amazon.com. The Motley Fool has positions in and recommends Alphabet and Amazon.com. The Motley Fool recommends Moderna. The Motley Fool has a disclosure policy.

See original here:

Google CEO Sundar Pichai Predicts That This Profession Will Be ... - The Motley Fool


Frances privacy watchdog eyes protection against data scraping in AI action plan – TechCrunch

Posted: at 2:01 am

France's privacy watchdog, the CNIL, has published an action plan for artificial intelligence which gives a snapshot of where it will be focusing its attention, including on generative AI technologies like OpenAI's ChatGPT, in the coming months and beyond.

A dedicated Artificial Intelligence Service has been set up within the CNIL to work on scoping the tech and producing recommendations for privacy-friendly AI systems.

A key stated goal for the regulator is to steer the development of AI that respects personal data, such as by developing the means to audit and control AI systems to protect people.

Understanding how AI systems impact people is another main focus, along with support for innovative players in the local AI ecosystem which apply the CNIL's best practice.

The CNIL wants to establish clear rules protecting the personal data of European citizens in order to contribute to the development of privacy-friendly AI systems, it writes.

Barely a week goes by without another bunch of high-profile calls from technologists asking regulators to get to grips with AI. And just yesterday, during testimony in the US Senate, OpenAI's CEO Sam Altman called for lawmakers to regulate the technology, suggesting a licensing and testing regime.

However, data protection regulators in Europe are far down the road already, with the likes of Clearview AI already widely sanctioned across the bloc for misuse of people's data, for example, while the AI chatbot Replika has faced recent enforcement in Italy.

OpenAI's ChatGPT also attracted a very public intervention by the Italian DPA at the end of March, which led to the company rushing out with new disclosures and controls for users, letting them apply some limits on how it can use their information.

At the same time, EU lawmakers are in the process of hammering out agreement on a risk-based framework for regulating applications of AI, which the bloc proposed back in April 2021.

This framework, the EU AI Act, could be adopted by the end of the year, and the planned regulation is another reason the CNIL highlights for preparing its AI action plan, saying the work will also make it possible to prepare for the entry into application of the draft European AI Regulation, which is currently under discussion.

Existing data protection authorities (DPAs) are likely to play a role in enforcement of the AI Act, so regulators building up AI understanding and expertise will be crucial for the regime to function effectively. Meanwhile, the topics and details EU DPAs choose to focus their attention on are set to shape the operational parameters of AI in the future, certainly in Europe and, potentially, further afield, given how far ahead the bloc is when it comes to digital rule-making.

On generative AI, the French privacy regulator is paying special attention to the practice by certain AI model makers of scraping data off the Internet to build data-sets for training AI systems like large language models (LLMs) which can, for example, parse natural language and respond in a human-like way to communications.

It says a priority area for its AI service will be the protection of publicly available data on the web against the use of scraping of data for the design of tools.

This is an uncomfortable area for makers of LLMs like ChatGPT that have relied upon quietly scraping vast amounts of web data to repurpose as training fodder. Those that have hoovered up web information which contains personal data face a specific legal challenge in Europe where the General Data Protection Regulation (GDPR), in application since May 2018, requires them to have a legal basis for such processing.

There are a number of legal bases set out in the GDPR; however, the possible options for a technology like ChatGPT are limited.

In the Italian DPA's view, there are just two possibilities: consent or legitimate interests. And since OpenAI did not ask individual web users for their permission before ingesting their data, the company is now relying on a claim of legitimate interests in Italy for the processing; a claim that remains under investigation by the local regulator, Garante. (Reminder: GDPR penalties can scale up to 4% of global annual turnover in addition to any corrective orders.)

The pan-EU regulation contains further requirements for entities processing personal data, such as that the processing must be fair and transparent. So there are additional legal challenges for tools like ChatGPT to avoid falling foul of the law.

And notably, in its action plan, France's CNIL highlights the fairness and transparency of the data processing underlying the operation of [AI tools] as a particular question of interest that it says its Artificial Intelligence Service and another internal unit, the CNIL Digital Innovation Laboratory, will prioritize for scrutiny in the coming months.

Other stated priority areas the CNIL flags for its AI scoping are:

Giving testimony to a US Senate committee yesterday, Altman was questioned by US lawmakers about the company's approach to protecting privacy, and the OpenAI CEO sought to narrowly frame the topic as referring only to information actively provided by users of the AI chatbot, noting, for example, that ChatGPT lets users specify they don't want their conversational history used as training data. (A feature it did not offer initially, however.)

Asked what specific steps it's taken to protect privacy, Altman told the Senate committee: "We don't train on any data submitted to our API. So if you're a business customer of ours and submit data, we don't train on it at all... If you use ChatGPT, you can opt out of us training on your data. You can also delete your conversation history or your whole account."

But he had nothing to say about the data used to train the model in the first place.

Altman's narrow framing of what privacy means sidestepped the foundational question of the legality of training data. Call it the original privacy sin of generative AI, if you will. But it's clear that eliding this topic is going to get increasingly difficult for OpenAI and its data-scraping ilk as regulators in Europe get on with enforcing the region's existing privacy laws on powerful AI systems.

In OpenAI's case, it will continue to be subject to a patchwork of enforcement approaches across Europe, as it does not have an established base in the region, meaning the GDPR's one-stop-shop mechanism does not apply (as it typically does for Big Tech), so any DPA is competent to regulate if it believes local users' data is being processed and their rights are at risk. So while Italy went in hard earlier this year with an intervention on ChatGPT that imposed a stop-processing order in parallel to it opening an investigation of the tool, France's watchdog only announced an investigation back in April, in response to complaints. (Spain has also said it's probing the tech, again without any additional actions as yet.)

In another difference between EU DPAs, the CNIL appears to be concerned about interrogating a wider array of issues than Italy's preliminary list, including considering how the GDPR's purpose limitation principle should apply to large language models like ChatGPT. Which suggests it could end up ordering a more expansive array of operational changes if it concludes the GDPR is being breached.

The CNIL "will soon submit to a consultation a guide on the rules applicable to the sharing and re-use of data," it writes. "This work will include the issue of re-use of freely accessible data on the internet and now used for learning many AI models. This guide will therefore be relevant for some of the data processing necessary for the design of AI systems, including generative AIs."

It will also continue its work on designing AI systems and building databases for machine learning. These will give rise to several publications starting in the summer of 2023, following the consultation which has already been organised with several actors, in order to provide concrete recommendations, in particular as regards the design of AI systems such as ChatGPT.

Here's the rest of the topics the CNIL says will be gradually addressed via future publications and AI guidance it produces:

On audit and control of AI systems, the French regulator stipulates that its actions this year will focus on three areas: compliance with an existing position on the use of enhanced video surveillance, which it published in 2022; the use of AI to fight fraud (such as social insurance fraud); and investigating complaints.

It also confirms it has already received complaints about the legal framework for the training and use of generative AIs and says its working on clarifications there.

The CNIL has, in particular, received several complaints against the company OpenAI, which manages the ChatGPT service, and has opened a control procedure, it adds, noting the existence of a dedicated working group that was recently set up within the European Data Protection Board to try to coordinate how different European authorities approach regulating the AI chatbot (and produce what it bills as a harmonised analysis of the data processing implemented by the OpenAI tool).

In further words of warning for AI systems makers who never asked people's permission to use their data, and may be hoping for future forgiveness, the CNIL notes that it'll be paying particular attention to whether entities processing personal data to develop, train or use AI systems have:

As for support for innovative AI players that want to be compliant with European rules (and values), the CNIL has had a regulatory sandbox up and running for a couple of years, and it's encouraging AI companies and researchers working on developing AI systems that play nice with personal data protection rules to get in touch (via ia@cnil.fr).

Read more here:

Frances privacy watchdog eyes protection against data scraping in AI action plan - TechCrunch


Investing in Hippocratic AI – Andreessen Horowitz

Posted: at 2:01 am

"Solving consumer engagement is worth $1 trillion to our organization," said a health plan executive to me one day. He was obviously being a bit sensationalist with the magnitude of the number, but it's been a common thread in strategic dialogue across all of healthcare for quite a while: that so much of the high cost, waste, and poor clinical outcomes from which our healthcare system suffers stems from the lack of individuals' engagement with it, or the flip, which is healthcare organizations' inability to engage, at scale and cost-effectively, with their patients and members.

And it's not just consumer engagement that needs to be solved; our country's severe clinician burnout problem, combined with overall high rates of staff churn, is causing healthcare organizations to falter at delivering on their core services. One study shows that the average hospital has turned over 100.5% of its workforce in the last 5 years!

In this context, when it comes to generative AI, healthcare is an industry that we view as holding the most potential for tangible and measurable impact. Closing the gap on a shortage of millions of healthcare workers in the next several years, while also trying to increase leverage for those already in the workforce, requires much more than just the traditional paths of training or importing more human labor. We're excited to be backing Hippocratic AI as they apply generative AI to execute against this opportunity set.

Imagine a world in which every patient, provider, and administrative staff member could interact with an immediately available, fully context-aware, completely capable, and charismatic conversationalist to help each individual pick the right path or do their job better (a form of always-on triage, as we've described in the past). Imagine that the marginal cost of engaging a patient through empathetic phone calls was on the order of $0.10 per hour, as opposed to the $50+ it might cost today. The very nature of generative AI (conversational, scalable, accessible to non-technical users) has the potential to solve the shortcomings of previous generations of rules-based chatbots and other such products in making these concepts a reality.

But AI applications in healthcare also pose among the highest stakes of any industry. AI skeptics might point to the lack of focus on responsibility, safety, and regulatory compliance exhibited by many companies in this space. Not to mention the challenge of assembling a cross-disciplinary team with deep expertise in LLM development, healthcare delivery, and healthcare administration to build AI products that actually work.

Hippocratic AI's name alone represents their safety-first ethos (referring to the Hippocratic Oath that physicians commit to, in which the core principles are to do no harm to patients and to maintain confidentiality of a patient's medical information). They've built a unique framework to incorporate professional-grade certification, RLHF (reinforcement learning from human feedback) through a panel of healthcare professionals, and bedside manner into their non-diagnostic, patient-facing conversational LLMs, with the recognition that passing a medical board exam is not enough to ensure that a model is ready to be deployed into a real-world setting.

We've known the CEO, Munjal Shah, since investing in his last company in 2017 (which was his third, after previously selling an AI company to Google), and thus know he has uniquely earned secrets about how to build a company at the intersection of AI and healthcare. He most recently ran a Medicare brokerage business that involved a national-scale call center that made personalized recommendations to seniors based on their individual claims history. There, he led through the operational pains of scaling an empathetic but efficient engagement platform for consumers in a regulated healthcare context. We believe these competencies give him and his founding team (composed of individuals with clinical, LLM development, and healthcare operations experience) an edge in understanding what it takes to bring high-impact, responsible, and safe generative AI products to market, and consider it a privilege to be backing him again.

***

The views expressed here are those of the individual AH Capital Management, L.L.C. (a16z) personnel quoted and are not the views of a16z or its affiliates. Certain information contained in here has been obtained from third-party sources, including from portfolio companies of funds managed by a16z. While taken from sources believed to be reliable, a16z has not independently verified such information and makes no representations about the enduring accuracy of the information or its appropriateness for a given situation. In addition, this content may include third-party advertisements; a16z has not reviewed such advertisements and does not endorse any advertising content contained therein.

This content is provided for informational purposes only, and should not be relied upon as legal, business, investment, or tax advice. You should consult your own advisers as to those matters. References to any securities or digital assets are for illustrative purposes only, and do not constitute an investment recommendation or offer to provide investment advisory services. Furthermore, this content is not directed at nor intended for use by any investors or prospective investors, and may not under any circumstances be relied upon when making a decision to invest in any fund managed by a16z. (An offering to invest in an a16z fund will be made only by the private placement memorandum, subscription agreement, and other relevant documentation of any such fund and should be read in their entirety.) Any investments or portfolio companies mentioned, referred to, or described are not representative of all investments in vehicles managed by a16z, and there can be no assurance that the investments will be profitable or that other investments made in the future will have similar characteristics or results. A list of investments made by funds managed by Andreessen Horowitz (excluding investments for which the issuer has not provided permission for a16z to disclose publicly as well as unannounced investments in publicly traded digital assets) is available at https://a16z.com/investments/.

Charts and graphs provided within are for informational purposes solely and should not be relied upon when making any investment decision. Past performance is not indicative of future results. The content speaks only as of the date indicated. Any projections, estimates, forecasts, targets, prospects, and/or opinions expressed in these materials are subject to change without notice and may differ or be contrary to opinions expressed by others. Please see https://a16z.com/disclosures for additional important information.

See original here:

Investing in Hippocratic AI - Andreessen Horowitz

