
Category Archives: Artificial General Intelligence

Rejuve.Bio Launches Groundbreaking Crowd Fund on NetCapital to Pioneer the Future of Artificial General … – PR Newswire

Posted: March 14, 2024 at 12:11 am

Embark on a journey to redefine aging with cutting-edge biotech innovation.

LOS ANGELES, March 12, 2024 /PRNewswire/ -- Rejuve.Bio, a leading AI biotechnology firm at the forefront of the longevity revolution, announces its latest initiative: a Crowd Fundraise on the NetCapital platform. This pivotal move opens a gateway for investors to be part of a transformative journey, leveraging artificial intelligence and genetics to challenge the conventional notions of aging and human healthspan. [See https://netcapital.com/companies/rejuvebiotech for more information.]

Focused on harnessing the power of artificial intelligence (AI) and genetics, Rejuve.Bio aims to revolutionize the healthcare and biotech industries by extending human healthspan and redefining the aging process.

"Our mission at Rejuve.Bio is not just about extending life but enhancing the quality of life," said Kennedy Schaal, Executive Director at Rejuve.Bio. "With our innovative approach combining AI, genetics, and comprehensive data analysis, we're not just imagining a future where aging is a challenge to be overcome; we're creating it."

Highlights of the announcement include:

Why Invest in Rejuve.Bio:

As Rejuve.Bio embarks on this exciting phase, the company invites investors and the public to learn more about this unique opportunity by visiting the NetCapital platform. Go to https://netcapital.com/companies/rejuvebiotech

DISCLAIMER: This release is meant for informational purposes only, and is not intended to serve as a recommendation to buy or sell any security in a self-directed account and is not an offer or sale of a security. Any investment is not directly managed by Rejuve.Bio. All investments involve risk and the past performance of a security or financial product does not guarantee future results or returns. Potential investors should seek professional advice and carefully review all documentation before making any investment decisions.

About Rejuve Bio: Rejuve Bio is an AI biotechnology company dedicated to redefining aging research and extending human healthspan. With a focus on B2B operations, Rejuve Bio employs a multidisciplinary approach, utilizing artificial intelligence, genetics, and cutting-edge data analysis to explore the potential for agelessness. Rejuve Bio's mission is to transform the field of longevity research by providing breakthrough therapeutics, drug discovery, and individualized healthspan solutions to improve the quality of life for people all over the world.

Contact: Lewis Farrell Email: [emailprotected]

Logo - https://mma.prnewswire.com/media/2360612/Rejuve_Bio_Logo.jpg

SOURCE Rejuve Bio

Read more:

Rejuve.Bio Launches Groundbreaking Crowd Fund on NetCapital to Pioneer the Future of Artificial General ... - PR Newswire


Employees at Top AI Labs Fear Safety Is an Afterthought – TIME

Posted: at 12:11 am

Workers at some of the world's leading AI companies harbor significant concerns about the safety of their work and the incentives driving their leadership, a report published on Monday claimed.

The report, commissioned by the State Department and written by employees of the company Gladstone AI, makes several recommendations for how the U.S. should respond to what it argues are significant national security risks posed by advanced AI.

Read More: Exclusive: U.S. Must Move Decisively To Avert Extinction-Level Threat from AI, Government-Commissioned Report Says

The report's authors spoke with more than 200 experts for the report, including employees at OpenAI, Google DeepMind, Meta and Anthropic, leading AI labs that are all working towards artificial general intelligence, a hypothetical technology that could perform most tasks at or above the level of a human. The authors shared excerpts of concerns that employees from some of these labs shared with them privately, without naming the individuals or the specific company that they work for. OpenAI, Google, Meta and Anthropic did not immediately respond to requests for comment.

"We have served, through this project, as a de-facto clearing house for the concerns of frontier researchers who are not convinced that the default trajectory of their organizations would avoid catastrophic outcomes," Jeremie Harris, the CEO of Gladstone and one of the authors of the report, tells TIME.

One individual at an unspecified AI lab shared worries with the report's authors that the lab has what the report characterized as a "lax approach to safety" stemming from a desire to not slow down the lab's work to build more powerful systems. Another individual expressed concern that their lab had insufficient containment measures in place to prevent an AGI from escaping their control, even though the lab believes AGI is a near-term possibility.

Still others expressed concerns about cybersecurity. "By the private judgment of many of their own technical staff, the security measures in place at many frontier AI labs are inadequate to resist a sustained IP exfiltration campaign by a sophisticated attacker," the report states. "Given the current state of frontier lab security, it seems likely that such model exfiltration attempts are likely to succeed absent direct U.S. government support, if they have not already."

"Many of the people who shared those concerns did so while wrestling with the calculation that whistleblowing publicly would likely result in them losing their ability to influence key decisions in the future," says Harris. "The level of concern from some of the people in these labs, about the decision-making process and how the incentives for management translate into key decisions, is difficult to overstate," he tells TIME. "The people who are tracking the risk side of the equation most closely, and are in many cases the most knowledgeable, are often the ones with the greatest levels of concern."


The fact that today's AI systems have not yet led to catastrophic outcomes for humanity, the authors say, is not evidence that bigger systems will be safe in the future. "One of the big themes we've heard from individuals right at the frontier, on the stuff being developed under wraps right now, is that it's a bit of a Russian roulette game to some extent," says Edouard Harris, Gladstone's chief technology officer, who also co-authored the report. "Look, we pulled the trigger, and hey, we're fine, so let's pull the trigger again."

Read More: How We Can Have AI Progress Without Sacrificing Safety or Democracy

Many of the world's governments have woken up to the risk posed by advanced AI systems over the last 12 months. In November, the U.K. hosted an AI Safety Summit where world leaders committed to work together to set international norms for the technology, and in October President Biden issued an executive order setting safety standards for AI labs based in the U.S. Congress, however, is yet to pass an AI law, meaning there are few legal restrictions on what AI labs can and can't do when it comes to training advanced models.

Biden's executive order calls on the National Institute of Standards and Technology to set rigorous standards for tests that AI systems should have to pass before public release. But the Gladstone report recommends that government regulators should not rely heavily on these kinds of AI evaluations, which are today a common practice for testing whether an AI system has dangerous capabilities or behaviors. Evaluations, the report says, can be "undermined and manipulated easily," because AI models can be superficially tweaked, or fine-tuned, by their creators to pass evaluations if the questions are known in advance. "Crucially it is easier for these tweaks to simply teach a model to hide dangerous behaviors better, than to remove those behaviors altogether."

The report cites a person described as an expert with direct knowledge of one AI lab's practices, who judged that the unnamed lab is gaming evaluations in this way. "AI evaluations can only reveal the presence, but not confirm the absence, of dangerous capabilities," the report argues. "Over-reliance on AI evaluations could propagate a false sense of security among AI developers [and] regulators."

Read more:

Employees at Top AI Labs Fear Safety Is an Afterthought - TIME


Meta hooks up with Hammerspace for advanced AI infrastructure project Blocks and Files – Blocks & Files

Posted: at 12:11 am

Meta has confirmed Hammerspace is its data orchestration software supplier, supporting 49,152 Nvidia H100 GPUs split into two equal clusters.

The parent of Facebook, Instagram and other social media platforms says its long-term vision is to create artificial general intelligence (AGI) that is "open and built responsibly so that it can be widely available for everyone to benefit from." The blog authors say: "Marking a major investment in Meta's AI future, we are announcing two 24k GPU clusters. We are sharing details on the hardware, network, storage, design, performance, and software that help us extract high throughput and reliability for various AI workloads."

Hammerspace has been saying for some weeks that it has a huge hyperscaler AI customer, which we suspected to be Meta, and now Meta has described the role of Hammerspace in two Llama 3 AI training systems.

Meta's bloggers say: "These clusters support our current and next generation AI models, including Llama 3, the successor to Llama 2, our publicly released LLM, as well as AI research and development across GenAI and other areas."

A precursor AI Research SuperCluster, with 16,000 Nvidia A100 GPUs, was used to build Meta's gen 1 AI models and continues to play an important role in the development of Llama and Llama 2, as well as advanced AI models for applications ranging from computer vision, NLP, and speech recognition, to image generation, and even coding. That cluster uses Pure Storage FlashArray and FlashBlade all-flash arrays.

Meta's two newer and larger clusters are diagrammed in the blog:

They support models larger and more complex than could be supported in the RSC and pave the way for advancements in GenAI product development and AI research. The scale here is overwhelming, as the clusters help handle hundreds of trillions of AI model executions per day.

The two clusters each start with 24,576 Nvidia H100 GPUs. One has an RDMA over RoCE 400 Gbps Ethernet network system, using Arista 7800 switches with Wedge400 and Minipack2 OCP rack switches, while the other has an Nvidia Quantum2 400Gbps InfiniBand setup.

Meta's Grand Teton OCP hardware chassis houses the GPUs, which rely on Meta's Tectonic distributed, flash-optimized and exabyte scale storage system.

This is accessed through a Meta-developed Linux Filesystem in Userspace (FUSE) API and used for AI model data needs and model checkpointing. The blog says: "This solution enables thousands of GPUs to save and load checkpoints in a synchronized fashion (a challenge for any storage solution) while also providing a flexible and high-throughput exabyte scale storage required for data loading."
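Meta's blog stays at the description level, but the synchronized checkpointing it refers to can be sketched with PyTorch's distributed primitives: each rank writes its shard of state to the shared file system, then waits at a barrier so no GPU races ahead of an incomplete checkpoint. The directory path and setup below are illustrative assumptions, not Meta's actual code.

```python
import os
import torch
import torch.distributed as dist

def save_checkpoint(model, optimizer, step, ckpt_dir="/mnt/shared/ckpts"):
    """Synchronized checkpoint sketch: every rank saves its shard, then all ranks wait.

    Assumes dist.init_process_group(...) was already called by the job launcher.
    """
    rank = dist.get_rank()
    os.makedirs(ckpt_dir, exist_ok=True)
    # Each rank writes its own shard to the shared (e.g. NFS-backed) file system.
    shard_path = os.path.join(ckpt_dir, f"step{step}-rank{rank}.pt")
    torch.save(
        {"model": model.state_dict(), "optim": optimizer.state_dict(), "step": step},
        shard_path,
    )
    # Barrier: no rank resumes training until every shard has been written.
    dist.barrier()
    if rank == 0:
        print(f"checkpoint {step} complete across {dist.get_world_size()} ranks")
```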

The blog continues: "Meta has partnered with Hammerspace to co-develop and land a parallel network file system (NFS) deployment to meet the developer experience requirements for this AI cluster. Hammerspace enables engineers to perform interactive debugging for jobs using thousands of GPUs as code changes are immediately accessible to all nodes within the environment. When paired together, the combination of our Tectonic distributed storage solution and Hammerspace enable fast iteration velocity without compromising on scale."

The Hammerspace diagram above provides its view of the co-developed AI cluster storage system.

Both the Tectonic and Hammerspace-backed storage deployments use Meta's YV3 Sierra Point server fitted with the highest-capacity E1.S format SSDs available. These are OCP servers customized "to achieve the right balance of throughput capacity per server, rack count reduction, and associated power efficiency as well as fault tolerance."

Meta is not stopping here. The blog authors say: "This announcement is one step in our ambitious infrastructure roadmap. By the end of 2024, we're aiming to continue to grow our infrastructure build-out that will include 350,000 NVIDIA H100 GPUs as part of a portfolio that will feature compute power equivalent to nearly 600,000 H100s."

Go here to see the original:

Meta hooks up with Hammerspace for advanced AI infrastructure project Blocks and Files - Blocks & Files


Among the A.I. Doomsayers – The New Yorker

Posted: at 12:11 am

Katja Grace's apartment, in West Berkeley, is in an old machinist's factory, with pitched roofs and windows at odd angles. It has terra-cotta floors and no central heating, which can create the impression that you've stepped out of the California sunshine and into a duskier place, somewhere long ago or far away. Yet there are also some quietly futuristic touches. High-capacity air purifiers thrumming in the corners. Nonperishables stacked in the pantry. A sleek white machine that does lab-quality RNA tests. The sorts of objects that could portend a future of tech-enabled ease, or one of constant vigilance.

Grace, the lead researcher at a nonprofit called A.I. Impacts, describes her job as "thinking about whether A.I. will destroy the world." She spends her time writing theoretical papers and blog posts on complicated decisions related to a burgeoning subfield known as A.I. safety. She is a nervous smiler, an oversharer, a bit of a mumbler; she's in her thirties, but she looks almost like a teen-ager, with a middle part and a round, open face. The apartment is crammed with books, and when a friend of Grace's came over, one afternoon in November, he spent a while gazing, bemused but nonjudgmental, at a few of the spines: "Jewish Divorce Ethics," "The Jewish Way in Death and Mourning," "The Death of Death." Grace, as far as she knows, is neither Jewish nor dying. She let the ambiguity linger for a moment. Then she explained: her landlord had wanted the possessions of the previous occupant, his recently deceased ex-wife, to be left intact. "Sort of a relief, honestly," Grace said. "One set of decisions I don't have to make."

She was spending the afternoon preparing dinner for six: a yogurt-and-cucumber salad, Impossible beef gyros. On one corner of a whiteboard, she had split her pre-party tasks into painstakingly small steps ("Chop salad," "Mix salad," "Mold meat," "Cook meat"); on other parts of the whiteboard, she'd written more gnomic prompts ("Food area," "Objects," "Substances"). Her friend, a cryptographer at Android named Paul Crowley, wore a black T-shirt and black jeans, and had dyed black hair. I asked how they knew each other, and he responded, "Oh, we've crossed paths for years, as part of the scene."

It was understood that "the scene" meant a few intertwined subcultures known for their exhaustive debates about recondite issues (secure DNA synthesis, shrimp welfare) that members consider essential, but that most normal people know nothing about. For two decades or so, one of these issues has been whether artificial intelligence will elevate or exterminate humanity. Pessimists are called A.I. safetyists, or decelerationists, or, when they're feeling especially panicky, A.I. doomers. They find one another online and often end up living together in group houses in the Bay Area, sometimes even co-parenting and co-homeschooling their kids. Before the dot-com boom, the neighborhoods of Alamo Square and Hayes Valley, with their pastel Victorian row houses, were associated with staid domesticity. Last year, referring to A.I. hacker houses, the San Francisco Standard semi-ironically called the area "Cerebral Valley."

A camp of techno-optimists rebuffs A.I. doomerism with old-fashioned libertarian boomerism, insisting that all the hand-wringing about existential risk is a kind of mass hysteria. They call themselves effective accelerationists, or e/accs (pronounced "e-acks"), and they believe A.I. will usher in a utopian future (interstellar travel, the end of disease) as long as the worriers get out of the way. On social media, they troll doomsayers as "decels," "psyops," "basically terrorists," or, worst of all, "regulation-loving bureaucrats." "We must steal the fire of intelligence from the gods [and] use it to propel humanity towards the stars," a leading e/acc recently tweeted. (And then there are the normies, based anywhere other than the Bay Area or the Internet, who have mostly tuned out the debate, attributing it to sci-fi fume-huffing or corporate hot air.)

Grace's dinner parties, semi-underground meetups for doomers and the doomer-curious, have been described as "a nexus of the Bay Area AI scene." At gatherings like these, it's not uncommon to hear someone strike up a conversation by asking, "What are your timelines?" or "What's your p(doom)?" Timelines are predictions of how soon A.I. will pass particular benchmarks, such as writing a Top Forty pop song, making a Nobel-worthy scientific breakthrough, or achieving artificial general intelligence, the point at which a machine can do any cognitive task that a person can do. (Some experts believe that A.G.I. is impossible, or decades away; others expect it to arrive this year.) P(doom) is the probability that, if A.I. does become smarter than people, it will, either on purpose or by accident, annihilate everyone on the planet. For years, even in Bay Area circles, such speculative conversations were marginalized. Last year, after OpenAI released ChatGPT, a language model that could sound uncannily natural, they suddenly burst into the mainstream. Now there are a few hundred people working full time to save the world from A.I. catastrophe. Some advise governments or corporations on their policies; some work on technical aspects of A.I. safety, approaching it as a set of complex math problems; Grace works at a kind of think tank that produces research on high-level questions, such as "What roles will AI systems play in society?" and "Will they pursue goals?" When they're not lobbying in D.C. or meeting at an international conference, they often cross paths in places like Grace's living room.

The rest of her guests arrived one by one: an authority on quantum computing; a former OpenAI researcher; the head of an institute that forecasts the future. Grace offered wine and beer, but most people opted for nonalcoholic canned drinks that defied easy description (a fermented energy drink, a hopped tea). They took their Impossible gyros to Grace's sofa, where they talked until midnight. They were courteous, disagreeable, and surprisingly patient about reconsidering basic assumptions. "You can condense the gist of the worry, seems to me, into a really simple two-step argument," Crowley said. "Step one: We're building machines that might become vastly smarter than us. Step two: That seems pretty dangerous."

"Are we sure, though?" Josh Rosenberg, the C.E.O. of the Forecasting Research Institute, said. "About intelligence per se being dangerous?"

Grace noted that not all intelligent species are threatening: "There are elephants, and yet mice still seem to be doing just fine."

Cartoon by Erika Sjule and Nate Odenkirk

"Rabbits are certainly more intelligent than myxomatosis," Michael Nielsen, the quantum-computing expert, said.

Crowley's p(doom) was well above eighty per cent. The others, wary of committing to a number, deferred to Grace, who said that, given "my deep confusion and uncertainty about this, which I think nearly everyone has, at least everyone who's being honest," she could only narrow her p(doom) to between ten and ninety per cent. Still, she went on, a ten-per-cent chance of human extinction is "obviously, if you take it seriously, unacceptably high."

They agreed that, amid the thousands of reactions to ChatGPT, one of the most refreshingly candid assessments came from Snoop Dogg, during an onstage interview. Crowley pulled up the transcript and read aloud. "This is not safe, 'cause the A.I.'s got their own minds, and these motherfuckers are gonna start doing their own shit," Snoop said, paraphrasing an A.I.-safety argument. "Shit, what the fuck?" Crowley laughed. "I have to admit, that captures the emotional tenor much better than my two-step argument," he said. And then, as if to justify the moment of levity, he read out another quote, this one from a 1948 essay by C.S. Lewis: "If we are all going to be destroyed by an atomic bomb, let that bomb when it comes find us doing sensible and human things: praying, working, teaching, reading, listening to music, bathing the children, playing tennis, chatting to our friends over a pint and a game of darts; not huddled together like frightened sheep."

Grace used to work for Eliezer Yudkowsky, a bearded guy with a fedora, a petulant demeanor, and a p(doom) of ninety-nine per cent. Raised in Chicago as an Orthodox Jew, he dropped out of school after eighth grade, taught himself calculus and atheism, started blogging, and, in the early two-thousands, made his way to the Bay Area. His best-known works include "Harry Potter and the Methods of Rationality," a piece of fan fiction running to more than six hundred thousand words, and "The Sequences," a gargantuan series of essays about how to sharpen one's thinking. The informal collective that grew up around these writings (first in the comments, then in the physical world) became known as the rationalist community, a small subculture devoted to avoiding the typical failure modes of human reason, often by arguing from first principles or quantifying potential risks. Nathan Young, a software engineer, told me, "I remember hearing about Eliezer, who was known to be a heavy guy, onstage at some rationalist event, asking the crowd to predict if he could lose a bunch of weight. Then the big reveal: he unzips the fat suit he was wearing. He'd already lost the weight. I think his ostensible point was something about how it's hard to predict the future, but mostly I remember thinking, What an absolute legend."

Yudkowsky was a transhumanist: human brains were going to be uploaded into digital brains during his lifetime, and this was great news. He told me recently that "Eliezer ages sixteen through twenty" assumed that A.I. was going to be great fun for everyone forever, and wanted it built as soon as possible. In 2000, he co-founded the Singularity Institute for Artificial Intelligence, to help hasten the A.I. revolution. Still, he decided to do some due diligence. "I didn't see why an A.I. would kill everyone, but I felt compelled to systematically study the question," he said. "When I did, I went, Oh, I guess I was wrong." He wrote detailed white papers about how A.I. might wipe us all out, but his warnings went unheeded. Eventually, he renamed his think tank the Machine Intelligence Research Institute, or MIRI.

The existential threat posed by A.I. had always been among the rationalists' central issues, but it emerged as the dominant topic around 2015, following a rapid series of advances in machine learning. Some rationalists were in touch with Oxford philosophers, including Toby Ord and William MacAskill, the founders of the effective-altruism movement, which studied how to do the most good for humanity (and, by extension, how to avoid ending it). The boundaries between the movements increasingly blurred. Yudkowsky, Grace, and a few others flew around the world to E.A. conferences, where you could talk about A.I. risk without being laughed out of the room.

Philosophers of doom tend to get hung up on elaborate sci-fi-inflected hypotheticals. Grace introduced me to Joe Carlsmith, an Oxford-trained philosopher who had just published a paper about "scheming AIs" that might convince their human handlers they're safe, then proceed to take over. He smiled bashfully as he expounded on a thought experiment in which a hypothetical person is forced to stack bricks in a desert for a million years. "This can be a lot, I realize," he said. Yudkowsky argues that a superintelligent machine could come to see us as a threat, and decide to kill us (by commandeering existing autonomous weapons systems, say, or by building its own). Or our demise could happen in passing: you ask a supercomputer to improve its own processing speed, and it concludes that the best way to do this is to turn all nearby atoms into silicon, including those atoms that are currently people. But the basic A.I.-safety arguments do not require imagining that the current crop of Verizon chatbots will suddenly morph into Skynet, the digital supervillain from "Terminator." To be dangerous, A.G.I. doesn't have to be sentient, or desire our destruction. If its objectives are at odds with human flourishing, even in subtle ways, then, say the doomers, we're screwed.

Read the original here:

Among the A.I. Doomsayers - The New Yorker


Artificial Superintelligence Could Arrive by 2027, Scientist Predicts – Futurism

Posted: at 12:11 am

We may not have reached artificial general intelligence (AGI) yet, but as one of the leading experts in the theoretical field claims, it may get here sooner rather than later.

During his closing remarks at this year's Beneficial AGI Summit in Panama, computer scientist and haberdashery enthusiast Ben Goertzel said that although people most likely won't build human-level or superhuman AI until 2029 or 2030, there's a chance it could happen as soon as 2027.

After that, the SingularityNET founder said, AGI could then evolve rapidly into artificial superintelligence (ASI), which he defines as an AI with all the combined knowledge of human civilization.

"No one has created human-level artificial general intelligence yet; nobody has a solid knowledge of when we're going to get there," Goertzel told the conference audience. "I mean, there are known unknowns and probably unknown unknowns."

"On the other hand, to me it seems quite plausible we could get to human-level AGI within, let's say, the next three to eight years," he added.

To be fair, Goertzel is far from alone in attempting to predict when AGI will be achieved.

Last fall, for instance, Google DeepMind co-founder Shane Legg reiterated his more than decade-old prediction that there's a 50/50 chance that humans invent AGI by the year 2028. In a tweet from May of last year, "AI godfather" and ex-Googler Geoffrey Hinton said he now predicts, "without much confidence," that AGI is five to 20 years away.

Best known as the creator of Sophia the humanoid robot, Goertzel has long theorized about the date of the so-called "singularity," or the point at which AI reaches human-level intelligence and subsequently surpasses it.

Until the past few years, AGI, as Goertzel and his cohort describe it, seemed like a pipe dream, but with the large language model (LLM) advances made by OpenAI since it thrust ChatGPT upon the world in late 2022, that possibility seems ever closer, although he's quick to point out that LLMs by themselves are not what's going to lead to AGI.

"My own view is once you get to human-level AGI, within a few years you could get a radically superhuman AGI unless the AGI threatens to throttle its own development out of its own conservatism," the AI pioneer added. "I think once an AGI can introspect its own mind, then it can do engineering and science at a human or superhuman level."

"It should be able to make a smarter AGI, then an even smarter AGI, then an intelligence explosion," he added, presumably referring to the singularity.

Naturally, there are a lot of caveats to what Goertzel is preaching, not the least of which being that by human standards, even a superhuman AI would not have a "mind" the way we do. Then there's the assumption that the evolution of the technology would continue down a linear pathway as if in a vacuum from the rest of human society and the harms we bring to the planet.

All the same, it's a compelling theory, and given how rapidly AI has progressed in the past few years alone, his comments shouldn't be entirely discredited.

More on AGI: Amazon AGI Team Say Their AI is Showing "Emergent Properties"

See original here:

Artificial Superintelligence Could Arrive by 2027, Scientist Predicts - Futurism


OpenAI, Salesforce and Others Boost Efforts for Ethical AI – PYMNTS.com

Posted: at 12:10 am

In a shift toward ethical technology use, companies across the globe are intensifying their efforts to develop responsible artificial intelligence (AI) systems, aiming to ensure fairness, transparency and accountability in AI applications.

OpenAI, Salesforce and other tech companies recently signed an open letter highlighting a collective responsibility to maximize AI's benefits and mitigate the risks to society. It's the tech industry's latest effort to call for building AI responsibly.

The concept of responsible AI is gaining attention following Elon Musk's recent lawsuit against OpenAI. He accuses the ChatGPT creator of breaking its original promise to operate as a nonprofit, alleging a breach of contract. Musk's concern was that the potential dangers of AI should not be managed by profit-driven giants like Google.

OpenAI has responded aggressively to the lawsuit. The company has released a sequence of emails between Musk and top executives, revealing his initial support for the startup's transition to a profit-making model. Musk's lawsuit accuses OpenAI of violating their original agreement through its work with Microsoft, which he argues goes against the startup's founding as a nonprofit AI research organization. When Musk helped launch OpenAI in 2015, his aim was to create a nonprofit organization that could balance Google's dominance in AI, especially after its acquisition of DeepMind.

The AI firm said in a blog post that it remains committed to a mission to "ensure AGI [artificial general intelligence] benefits all of humanity." The company's mission includes building safe and beneficial AI and helping to create broadly distributed benefits.

The goals of responsible AI are ambitious but vague. Mistral AI, one of the letter's signatories, wrote that the company strives to "democratize data and AI to all organizations and users" and talks about ethical use, accelerating data-driven decision making and unlocking possibilities across industries.

Some observers say there is a long way to go before the goals of responsible AI are broadly achieved.

"Unfortunately, companies will not attain it by adopting many of the responsible AI frameworks available today," Kjell Carlsson, head of AI strategy at Domino Data Lab, told PYMNTS in an interview.

"Most of these provide idealistic language but little else. They are frequently disconnected from real-world AI projects, often flawed, and typically devoid of implementable advice."

Carlsson said that building responsible AI involves developing and improving AI models to ensure that they perform accurately and safely and comply with relevant data and AI regulations. The process entails appointing leaders in AI responsibility and training team members on ethical AI practices, including model validation, bias mitigation, and change monitoring.

"It involves establishing processes for governing data, models and other artifacts and ensuring that appropriate steps are taken and approved at each stage of the AI lifecycle," he added. "And critically, it involves implementing the technology capabilities that enable practitioners to leverage responsible AI tools and automate the necessary governance, monitoring and process orchestration at scale."

While the aims of responsible AI can be a bit fuzzy, the technology can have a tangible impact on lives, Kate Kalcevich of the digital accessibility company Fable pointed out in an interview with PYMNTS.

She said that if not used responsibly and ethically, AI technologies could create barriers to people with disabilities. For example, she questioned whether it would be ethical to use a video avatar that isn't disabled to represent a person with a disability.

"My biggest concern would be access to critical services such as healthcare, education and employment," she added. "For example, if AI-based chat or phone programs are used to book medical appointments or for job interviews, people with communication disabilities could be excluded if the AI tools aren't designed with access needs in mind."

Link:

OpenAI, Salesforce and Others Boost Efforts for Ethical AI - PYMNTS.com


Google DeepMind C.E.O. Demis Hassabis on the Path From Chatbots to A.G.I. – The New York Times

Posted: February 24, 2024 at 12:01 pm

Listen and follow Hard Fork on Apple, Spotify, Amazon or YouTube.

This week's episode is a conversation with Demis Hassabis, the head of Google's artificial intelligence division. We talk about Google's latest A.I. models, Gemini and Gemma; the existential risks of artificial intelligence; his timelines for artificial general intelligence; and what he thinks the world will look like post-A.G.I.

Additional listening and reading:

Hard Fork is hosted by Kevin Roose and Casey Newton and produced by Davis Land and Rachel Cohn. The show is edited by Jen Poyant. Engineering by Chris Wood and original music by Dan Powell, Marion Lozano and Pat McCusker. Fact-checking by Caitlin Love.

Special thanks to Paula Szuchman, Pui-Wing Tam, Nell Gallogly, Kate LoPresti and Jeffrey Miranda.

See the article here:

Google DeepMind C.E.O. Demis Hassabis on the Path From Chatbots to A.G.I. - The New York Times


Bill Foster, a particle physicist-turned-congressman, on why he’s worried about artificial general intelligence – FedScoop

Posted: at 12:01 pm

Congress is just starting to ramp up its efforts to regulate artificial intelligence, but one member says he first encountered the technology in the 1990s, when he used neural networks to study physics. Now, Rep. Bill Foster, D-Ill., is returning to AI as a member of the new bipartisan task force on artificial intelligence, led by Reps. Ted Lieu, D-Calif., and Jay Obernolte, R-Calif., which was announced by House leadership earlier this week.

In a chat with FedScoop, the congressman outlined his concerns with artificial intelligence. The threat of deepfakes, he warned, can't necessarily be solved with detection and may require some kind of digital authentication platform. At the same time, Foster said he's also worried that the setup of committees and the varying levels of expertise within Congress aren't well situated to deal with the technology.

"There are many members of Congress who understand about finance and banking and can push back on technical statements about financial services that might not be true," he told FedScoop. "It's much harder for the average member of Congress to push back on claims about AI. That's the difference. We're not as well defended against statements that may or may not be factual from lobbying organizations."

Compared to some other members of Congress, Foster appears particularly concerned about artificial general intelligence, a theoretical form of AI that, some argue, could end up rivaling human abilities. This technology doesn't exist yet, but some executives, including OpenAI CEO Sam Altman, have warned that this type of AI could raise massive safety issues. In particular, Foster argues that there will be a survival advantage to algorithmic systems that are opaque and deceptive.

(Critics, meanwhile, argue that discussion of AGI has distracted from opportunities to address the risks of AI systems that already exist today, like bias issues raised by facial recognition software.)

Foster's comments come in the nascent days of the AI task force, but help elucidate how varied perspectives on artificial intelligence are, even within the Democratic Party. Unlike other areas, the technology is still relatively new to Congress, and positions on how to rein in AI, and potential partisan divides, are only still forming.

Editor's note: The transcript has been edited for clarity and length.

FedScoop: With this new AI task force, to what extent do you think you're going to be focusing on chips and focusing on hardware, given both the recent chips legislation and OpenAI's Sam Altman's calls for more focus on chip infrastructure, too?

Rep. Bill Foster: It's an interesting tradeoff. I doubt that this committee is going to be in a position to micromanage the [integrated circuit] industry. I first met Sam Altman about six years ago when I visited OpenAI [to talk about] universal basic income, which is one of the things that a lot of people point to having to do with the disruption to the labor market that [AI] is likely to cause.

When I started making noise about this inside the caucus, people expected the first jobs to fall would be factory assembly line workers, long haul truck drivers, taxi drivers. That's taken longer than people guessed right then. But the other thing that's happened that's surprised people is how quickly the creative arts have come under assault from AI. There's a lot of nervousness among teachers about what exactly are the careers of the future that we're actually training people for.

I think one of the most important responses, something that the government can actually deliver and even deliver this session of Congress, is to provide people some way of defending themselves against deepfakes. There's two approaches to this. The first thing is to try to imagine that you can detect fraudulent media and to develop software to detect deepfake material. I'm not optimistic that that's going to work. It's going to be a cat-and-mouse game forever. Another approach is to provide citizens with a means of proving they are who they say they are online and they are not a deepfake.

FS: An authentication service?

BF: A mobile ID. A digital driver's license or a secure digital identity. This is a way for someone to use their cell phone and their government-provided credential, like a passport or Real ID-compliant driver's license, and associate it with your cell phone. [This could] take advantage of your cell phone's ability through AI to recognize its owner and also the modern cell phone's ability to be used like a security dongle. It has what's called a secure enclave, or a secure compute facility, that allows it to hold private encryption keys that makes the device essentially a unique device in the world that can be associated with a unique person and their credential.
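Foster describes the mechanism only in outline. Conceptually, a device-bound credential works like any challenge-response signature scheme: the private key never leaves the phone's secure enclave, and a verifier checks a signature over a fresh challenge against the registered public key. A minimal sketch in Python, using the cryptography package's Ed25519 keys as a stand-in for enclave-held keys (the enrollment flow and names here are illustrative assumptions, not a specific government system):

```python
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Enrollment: the phone's secure enclave holds the private key; only the
# public key (bound to a government-issued credential) leaves the device.
device_key = Ed25519PrivateKey.generate()
registered_public_key = device_key.public_key()

# Verification: a service sends a random challenge, the device signs it,
# and the service checks the signature against the registered public key.
challenge = os.urandom(32)
signature = device_key.sign(challenge)

try:
    registered_public_key.verify(signature, challenge)
    print("Signature valid: request came from the enrolled device")
except InvalidSignature:
    print("Signature invalid: possible impersonation or deepfake")
```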

FS: How optimistic are you that this new AI task force is actually going to produce legislation?

BF: One reason I'm optimistic is the Republicans' choice of a chair: Jay Obernolte. He's another guy who keeps up the effort to maintain his technical currency. He and I can geek out about the actual state of the art, which is rather rare in the U.S. Congress. One of the missions, certainly for this task force, will be to try to educate members about at least the capabilities of AI.

FS: How worried are you that companies might try to influence what legislation is crafted to sort of benefit their own finances?

BF: I served on the Financial Services Committee for all my time in Congress, so I'm very familiar with industry trying to influence policy. It would shock me if that didn't happen. One of the dangers here is that there are many members of Congress who understand about finance and banking and can push back on technical statements about financial services that might not be true. It's much harder for the average member of Congress to push back on claims about AI. That's the difference. We're not as well defended against statements that may or may not be factual from lobbying organizations.

FS: To what extent should the government itself be trying to build its own models or creating data sources for training those models?

BF: There is a real role for the national labs in curating datasets. This is already done at Argonne National Lab and others. For example, with datasets where privacy is a concern, like electronic medical records where you really need to analyze them, but you need a gatekeeper on privacy, that's something where a national laboratory that deals with very high-security data has the right culture to protect that. Even when they're not developing the algorithms, they can allow third parties to come in and apply those algorithms for the datasets and give them the results without turning over all the private information.

FS: You've proposed legislation related to technology modernization and Congress. To what extent are members exposed to ChatGPT and similar technologies?

BF: The first response is to have Congress organize itself in a way that reflects today's economy. Information technology just passed financial services as a fraction of the economy. That puts it pretty much on a par with, for example, health care, which is also a little under 20%. If you look at the structure of Congress, it looks like a snapshot of our economy 100 years ago.

The AI disruption might be an opportunity for Congress to organize itself to match the modern economy. That's one of the big issues, I'd say. Obviously, that's the work of a decade at least. There's going to be a number of economic responses to the disruption of the workforce. I think the thing we just have to understand and appreciate [is] that we're all in this together. It used to be 10 or 15 years ago that people would say, those poor, long-haul truck drivers or taxi drivers or factory workers that lose their jobs. But no, it's everybody. With that realization, it will be easier to get a consensus that we've got to expand the safety net for those who have seen their skills and everything that defines their identity and their economic productivity put at risk from AI.

FS: How worried are you about artificial general intelligence?

BF: Over the last five years, I've become much more worried than I previously was. And the reason for that is there's this analogy between the evolution of AI algorithms and the evolution in living organisms. And if you look at living organisms and the strategies that have evolved, many of them are deceptive.

This happens in the natural kingdom. It will also happen, and it's already happening, in the evolution of artificial intelligence. If you imagine there are two AI algorithms: one of them is completely transparent and you understand how it thinks [and] the other one is a black box. Then you ask yourself, which of those is more likely to be shut down and the research abandoned on it? The answer is it is the transparent one that is more likely to be shut down, because you will see it, you will understand that [it has] evil thought processes and stop working on it. There will be a survival advantage to being opaque.

You are already seeing in some of these large language models behavior that looks like deceptive behavior. Certainly to the extent that it just models what's on the internet, there will be lots of deceptive behavior, documented on the internet, for it to model and to try out in its behavior. It will be a huge survival advantage for AI algorithms to be deceptive. It's similar to the whole scandal with Volkswagen and the smog emission software. When you have opaque algorithms, the companies might not even know that their algorithm is behaving this way. Because they will put it under observation, they will test it. The difficulty is that [they're going to] start knowing they're under observation and then behave very nicely, and they'll do everything that you wish they would. Then, when it's out in the wild, they will just try to be as profitable as they can for their company. Those are the algorithms that will survive and displace other algorithms.

More here:

Bill Foster, a particle physicist-turned-congressman, on why he's worried about artificial general intelligence - FedScoop


Generative AI Defined: How It Works, Benefits and Dangers – TechRepublic

Posted: at 12:01 pm

What is generative AI in simple terms?

Generative AI is a type of artificial intelligence technology that broadly describes machine learning systems capable of generating text, images, code or other types of content, often in response to a prompt entered by a user.

Generative AI models are increasingly being incorporated into online tools and chatbots that allow users to type questions or instructions into an input field, upon which the AI model will generate a human-like response.

DOWNLOAD: This generative AI guide from TechRepublic Premium.

Generative AI uses a computing process known as deep learning to analyze patterns in large sets of data and then replicates this to create new data that appears human-generated. It does this by employing neural networks, a type of machine learning process that is loosely inspired by the way the human brain processes, interprets and learns from information over time.

To give an example, if you were to feed lots of fiction writing into a generative AI model, it would eventually gain the ability to craft stories or story elements based on the literature it's been trained on. This is because the machine learning algorithms that power generative AI models learn from the information they're fed; in the case of fiction, this would include elements like plot structure, characters, themes and other narrative devices.
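To make that learn-then-generate loop concrete, here is a deliberately tiny PyTorch sketch. It only learns which character tends to follow which in an invented toy corpus, a far cry from a real generative model, but the shape of the process (fit to example text, then sample new text) is the same:

```python
import torch
import torch.nn as nn

# Toy corpus standing in for the "fiction" training data described above.
corpus = "once upon a time there was a model that learned to tell stories. " * 20
chars = sorted(set(corpus))
stoi = {c: i for i, c in enumerate(chars)}
itos = {i: c for c, i in stoi.items()}
data = torch.tensor([stoi[c] for c in corpus])

class CharModel(nn.Module):
    """Predicts the next character from the current one (embedding + linear head)."""
    def __init__(self, vocab_size, dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.head = nn.Linear(dim, vocab_size)

    def forward(self, x):
        return self.head(self.embed(x))

model = CharModel(len(chars))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

# Training: learn which character tends to follow which.
for step in range(300):
    logits = model(data[:-1])          # predict the next char at every position
    loss = loss_fn(logits, data[1:])   # target is the actual next char
    opt.zero_grad(); loss.backward(); opt.step()

# Generation: sample one character at a time from the learned distribution.
idx = torch.tensor([stoi["o"]])
out = ["o"]
for _ in range(60):
    probs = torch.softmax(model(idx), dim=-1)
    idx = torch.multinomial(probs, num_samples=1).squeeze(1)
    out.append(itos[idx.item()])
print("".join(out))
```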

Generative AI models get more sophisticated over time: the more data a model is trained on and generates, the more convincing and human-like its outputs become.

The popularity of generative AI has exploded in recent years, largely thanks to the arrival of OpenAI's ChatGPT and DALL-E models, which put accessible AI tools into the hands of consumers.

Since then, big tech companies including Google, Microsoft, Amazon and Meta have launched their own generative AI tools to capitalize on the technology's rapid uptake.

Various generative AI tools now exist, although text and image generation models are arguably the most well-known. Generative AI models typically rely on a user feeding a prompt into the engine that guides it towards producing some sort of desired output, be it text, an image, a video or a piece of music, though this isn't always the case.

Examples of generative AI models include:

Various types of generative AI models exist, each designed for specific tasks and purposes. These can broadly be categorized into the following types.

Transformer-based models are trained on large sets of data to understand the relationships between sequential information like words and sentences. Underpinned by deep learning, transformer-based models tend to be adept at natural language processing and understanding the structure and context of language, making them well suited for text-generation tasks. ChatGPT-3 and Google Gemini are examples of transformer-based generative AI models.
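The description above stays abstract; the snippet below shows transformer-based text generation in practice. It is a minimal sketch assuming the Hugging Face transformers library and the small, openly available distilgpt2 checkpoint, neither of which is named in this article:

```python
from transformers import pipeline

# Load a small pretrained transformer language model (illustrative choice).
generator = pipeline("text-generation", model="distilgpt2")

# The model continues the prompt by repeatedly predicting the next token.
result = generator("Artificial general intelligence is", max_new_tokens=40)
print(result[0]["generated_text"])
```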

Generative adversarial networks are made up of two neural networks known as a generator and a discriminator, which essentially work against each other to create authentic-looking data. As the name implies, the generator's role is to generate convincing output, such as an image based on a prompt, while the discriminator works to evaluate the authenticity of said image. Over time, each component gets better at their respective roles, resulting in more convincing outputs. DALL-E and Midjourney are examples of GAN-based generative AI models.
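Here is a deliberately tiny sketch of that adversarial loop in PyTorch, using toy 2-D points instead of images so it stays readable; all layer sizes and the fake "real" data are invented for illustration:

```python
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 2  # toy sizes, for illustration only

# Generator: random noise in, fake sample out.
G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
# Discriminator: sample in, probability that it is real out.
D = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

real_batch = torch.randn(64, data_dim) + 3.0  # stand-in for real training data

for step in range(200):
    # 1) Train the discriminator to separate real samples from fakes.
    fake_batch = G(torch.randn(64, latent_dim)).detach()
    d_loss = bce(D(real_batch), torch.ones(64, 1)) + bce(D(fake_batch), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Train the generator to fool the discriminator.
    fake_batch = G(torch.randn(64, latent_dim))
    g_loss = bce(D(fake_batch), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```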

Variational autoencoders leverage two networks to interpret and generate data: in this case, an encoder and a decoder. The encoder takes the input data and compresses it into a simplified format. The decoder then takes this compressed information and reconstructs it into something new that resembles the original data but isn't entirely the same.

One example might be teaching a computer program to generate human faces using photos as training data. Over time, the program learns how to simplify the photos of people's faces into a few important characteristics (such as the size and shape of the eyes, nose, mouth, ears and so on) and then use these to create new faces.

This type of VAE might be used to, say, increase the diversity and accuracy of facial recognition systems. By using VAEs to generate new faces, facial recognition systems can be trained to recognize more diverse facial features, including those that are less common.
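A compact PyTorch sketch of the encode-sample-decode structure described above follows. The dimensions and the random tensors standing in for face images are assumptions made for illustration, not a production model:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyVAE(nn.Module):
    """Compress inputs to a small latent code, then reconstruct them (sketch only)."""
    def __init__(self, in_dim=784, latent_dim=8):
        super().__init__()
        self.enc = nn.Linear(in_dim, 64)
        self.mu = nn.Linear(64, latent_dim)       # mean of the latent distribution
        self.logvar = nn.Linear(64, latent_dim)   # log-variance of the latent distribution
        self.dec = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, in_dim))

    def forward(self, x):
        h = F.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterization trick
        return self.dec(z), mu, logvar

vae = TinyVAE()
x = torch.rand(32, 784)  # stand-in for a batch of flattened face images
recon, mu, logvar = vae(x)

# Loss = reconstruction error + KL term pushing the latent code toward a standard normal.
recon_loss = F.mse_loss(recon, x, reduction="sum")
kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
loss = recon_loss + kl
loss.backward()

# Generating new samples: decode random latent vectors.
new_faces = vae.dec(torch.randn(4, 8))
```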

Multimodal models can understand and process multiple types of data simultaneously, such as text, images and audio, allowing them to create more sophisticated outputs. An example might be an AI model capable of generating an image based on a text prompt, as well as a text description of an image prompt. DALL-E 3 and OpenAIs GPT-4 are examples of multimodal models.
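At a high level, a multimodal model pairs one encoder per input type with a fusion step that lets the model reason over both inputs at once. The conceptual PyTorch sketch below (all layer sizes and the answer-vocabulary head are illustrative assumptions) shows that shape of architecture; it is not how DALL-E 3 or GPT-4 are actually built:

```python
import torch
import torch.nn as nn

class TinyMultimodalModel(nn.Module):
    """Conceptual sketch: separate encoders per modality, fused into one representation."""
    def __init__(self, vocab_size=1000, dim=64):
        super().__init__()
        self.text_encoder = nn.EmbeddingBag(vocab_size, dim)               # averages token embeddings
        self.image_encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, dim))
        self.fusion = nn.Linear(2 * dim, dim)
        self.head = nn.Linear(dim, vocab_size)  # e.g. scores over an answer vocabulary

    def forward(self, token_ids, image):
        t = self.text_encoder(token_ids)
        v = self.image_encoder(image)
        fused = torch.relu(self.fusion(torch.cat([t, v], dim=-1)))
        return self.head(fused)

model = TinyMultimodalModel()
tokens = torch.randint(0, 1000, (2, 12))      # a batch of 2 tokenized text prompts
images = torch.rand(2, 3, 32, 32)             # a batch of 2 small RGB images
scores = model(tokens, images)                # joint reasoning over both inputs
print(scores.shape)                           # torch.Size([2, 1000])
```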

ChatGPT is an AI chatbot developed by OpenAI. It's a large language model that uses transformer architecture (specifically, the generative pretrained transformer, hence "GPT") to understand and generate human-like text.

You can learn everything you need to know about ChatGPT in this TechRepublic cheat sheet.

Google Gemini (previously Bard) is another example of an LLM based on transformer architecture. Similar to ChatGPT, Gemini is a generative AI chatbot that generates responses to user prompts.

Google launched Bard in the U.S. in March 2023 in response to OpenAI's ChatGPT and Microsoft's Copilot AI tool. It was launched in Europe and Brazil later that year.

Learn more about Gemini by reading TechRepublic's comprehensive Google Gemini cheat sheet.

SEE: Google Gemini vs. ChatGPT: Is Gemini Better Than ChatGPT? (TechRepublic)

For businesses, efficiency is arguably the most compelling benefit of generative AI because it can help automate specific tasks and focus employees' time, energy and resources on more important strategic objectives. This can result in lower labor costs, greater operational efficiency and insights into how well certain business processes are or are not performing.

For professionals and content creators, generative AI tools can help with idea creation, content planning and scheduling, search engine optimization, marketing, audience engagement, research and editing, and potentially more. Again, the key proposed advantage is efficiency, because generative AI tools can help users reduce the time they spend on certain tasks and invest their energy elsewhere. That said, manual oversight and scrutiny of generative AI models remains highly important; we explain why later in this article.

McKinsey estimates that, by 2030, activities that currently account for around 30% of U.S. work hours could be automated, prompted by the acceleration of generative AI.

SEE: Indeed's 10 Highest-Paid Tech Skills: Generative AI Tops the List

Generative AI has found a foothold in a number of industry sectors and is now popular in both commercial and consumer markets. The use of generative AI varies from industry to industry and is more established in some than in others. Current and proposed use cases include the following:

In terms of role-specific use cases of generative AI, some examples include:

A major concern around the use of generative AI tools, particularly those accessible to the public, is their potential for spreading misinformation and harmful content. The impact of doing so can be wide-ranging and severe, from perpetuating stereotypes, hate speech and harmful ideologies to damaging personal and professional reputation.

SEE: Gartner analysts take on 5 ways generative AI will impact culture & society

The risk of legal and financial repercussions from the misuse of generative AI is also very real; indeed, it has been suggested that generative AI could put national security at risk if used improperly or irresponsibly.

These risks haven't escaped policymakers. On Feb. 13, 2024, the European Council approved the AI Act, a first-of-its-kind piece of legislation designed to regulate the use of AI in Europe. The legislation takes a risk-based approach to regulating AI, with some AI systems banned outright.

Security agencies have made moves to ensure AI systems are built with safety and security in mind. In November 2023, 16 agencies, including the U.K.'s National Cyber Security Centre and the U.S. Cybersecurity and Infrastructure Security Agency, released the Guidelines for Secure AI System Development, which promote security as a fundamental aspect of AI development and deployment.

Generative AI has prompted workforce concerns, most notably that the automation of tasks could lead to job losses. Research from McKinsey suggests that, by 2030, around 12 million people may need to switch jobs, with office support, customer service and food service roles most at risk. The consulting firm predicts that clerks will see a decrease of 1.6 million jobs, in addition to losses of 830,000 for retail salespersons, 710,000 for administrative assistants and 630,000 for cashiers.

SEE: OpenAI, Google and More Agree to White House List of Eight AI Safety Assurances

Generative AI and general AI represent different sides of the same coin; both relate to the field of artificial intelligence, but the former is a subtype of the latter.

Generative AI uses various machine learning techniques, such as GANs, VAEs or LLMs, to generate new content from patterns learned from training data.

General AI, also known as artificial general intelligence, broadly refers to the concept of computer systems and robotics that possess human-like intelligence and autonomy. This is still the stuff of science fiction: think Disney Pixar's WALL-E, Sonny from 2004's I, Robot or HAL 9000, the malevolent AI from 2001: A Space Odyssey. Most current AI systems are examples of narrow AI, in that they're designed for very specific tasks.

To learn more about what artificial intelligence is and isnt, read our comprehensive AI cheat sheet.

Generative AI is a subfield of artificial intelligence; broadly, AI refers to the concept of computers capable of performing tasks that would otherwise require human intelligence, such as decision making and NLP. Generative AI models use machine learning techniques to process and generate data.

Machine learning is the foundational component of AI and refers to the application of computer algorithms to data for the purposes of teaching a computer to perform a specific task. Machine learning is the process that enables AI systems to make informed decisions or predictions based on the patterns they have learned.
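To ground that definition, a minimal scikit-learn sketch of "teaching a computer a specific task from data" might look like the following; the flat sizes and prices are invented purely for illustration:

```python
from sklearn.linear_model import LinearRegression

# Toy training data: flat size in square meters vs. sale price.
sizes = [[30], [45], [60], [80], [100]]
prices = [150_000, 210_000, 275_000, 360_000, 440_000]

model = LinearRegression()
model.fit(sizes, prices)          # "learning": estimate the size-price relationship
print(model.predict([[70]]))      # informed prediction for an unseen flat size
```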

DOWNLOAD: TechRepublic Premium's prompt engineer hiring kit

What is the difference between generative AI and discriminative AI?

Whereas generative AI is used for generating new content by learning from existing data, discriminative AI specializes in classifying or categorizing data into predefined groups or classes.

Discriminative AI works by learning how to tell different types of data apart. It's used for tasks where data needs to be sorted into groups; for example, figuring out if an email is spam, recognizing what's in a picture or diagnosing diseases from medical images. It looks at data it already knows to classify new data correctly.
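The spam example translates directly into a few lines of scikit-learn. This is a toy sketch: the four training emails and their labels are invented, and a real filter would need far more data:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Toy labeled emails, invented for illustration (1 = spam, 0 = not spam).
emails = ["win a free prize now", "claim your free reward",
          "meeting moved to 3pm", "project update attached"]
labels = [1, 1, 0, 0]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(emails)      # turn text into word-count features

classifier = MultinomialNB()
classifier.fit(X, labels)                 # learn to tell the two classes apart

new_email = vectorizer.transform(["free prize waiting for you"])
print(classifier.predict(new_email))      # e.g. [1] -> classified as spam
```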

So, while generative AI is designed to create original content or data, discriminative AI is used for analyzing and sorting it, making each useful for different applications.

Regenerative AI, while less commonly discussed, refers to AI systems that can fix themselves or improve over time without human help. The concept of regenerative AI is centered around building AI systems that can last longer and work more efficiently, potentially even helping the environment by making smarter decisions that result in less waste.

In this way, generative AI and regenerative AI serve different roles: Generative AI for creativity and originality, and regenerative AI for durability and sustainability within AI systems.

It certainly looks as though generative AI will play a huge role in the future. As more businesses embrace digitization and automation, gen AI is set to become central to industries of all types, with many organizations already establishing guidelines for the acceptable use of AI in the workplace. Its capabilities have already proven valuable in areas such as content creation, software development, medicine, productivity and business transformation. As the technology continues to evolve, gen AI's applications and use cases will only continue to grow.

SEE: Deloitte's 2024 Tech Predictions: Gen AI Will Continue to Shape Chips Market

That said, the impact of generative AI on businesses, individuals and society as a whole is contingent on properly addressing and mitigating its risks. Key to this is ensuring AI is used ethically by reducing biases, enhancing transparency and accountability, and upholding proper data governance.

None of this will be straightforward. Keeping laws up to date with fast-moving tech is tough but necessary, and finding the right mix of automation and human involvement will be key to democratizing the benefits of generative AI. Recent measures such as President Biden's Executive Order on AI, Europe's AI Act and the U.K.'s Artificial Intelligence Bill suggest that governments around the world understand the importance of getting on top of these issues quickly.

Here is the original post:

Generative AI Defined: How It Works, Benefits and Dangers - TechRepublic

Posted in Artificial General Intelligence | Comments Off on Generative AI Defined: How It Works, Benefits and Dangers – TechRepublic

AI and You: OpenAI’s Sora Previews Text-to-Video Future, First Ivy League AI Degree – CNET

Posted: at 12:01 pm

AI developments are happening pretty fast. If you don't stop and look around once in a while, you could miss them.

Fortunately, I'm looking around for you and what I saw this week is that competition between OpenAI, maker of ChatGPT and Dall-E, and Google continues to heat up in a way that's worth paying attention to.

A week after updating its Bard chatbot and changing the name to Gemini, Google's DeepMind AI subsidiary previewed the next version of its generative AI chatbot. DeepMind told CNET's Lisa Lacy that Gemini 1.5 will be rolled out "slowly" to regular people who sign up for a wait list and will be available now only to developers and enterprise customers.

Gemini 1.5 Pro, Lacy reports, is "as capable as" the Gemini 1.0 Ultra model, which Google announced on Feb. 8. The 1.5 Pro model has a win rate -- a measurement of how many benchmarks it can outperform -- of 87% compared with the 1.0 Pro and 55% against the 1.0 Ultra. So the 1.5 Pro is essentially an upgraded version of the best available model now.

Gemini 1.5 Pro can ingest video, images, audio and text to answer questions, added Lacy. Oriol Vinyals, vice president of research at Google DeepMind and co-lead of Gemini, described 1.5 as a "research release" and said the model is "very efficient" thanks to a unique architecture that can answer questions by zeroing in on expert sources in that particular subject rather than seeking the answer from all possible sources.

Meanwhile, OpenAI announced a new text-to-video model called Sora that's capturing a lot of attention because of the photorealistic videos it's able to generate. Sora can "create videos of up to 60 seconds featuring highly detailed scenes, complex camera motion, and multiple characters with vibrant emotions." Following up on a promise it made, along with Google and Meta last week, to watermark AI-generated images and video, OpenAI says it's also creating tools to detect videos created with Sora so they can be identified as being AI generated.

Google and Meta have also announced their own gen AI text-to-video creators.

Sora, which means "sky" in Japanese, is also being called experimental, with OpenAI limiting access for now to so-called "red teamers," security experts and researchers who will assess the tool's potential harms or risks. That follows through on promises made as part of President Joe Biden's AI executive order last year, asking developers to submit the results of safety checks on their gen AI chatbots before releasing them publicly. OpenAI said it's also looking to get feedback on Sora from some visual artists, designers and filmmakers.

How do the photorealistic videos look? Pretty realistic. I agree with The New York Times, which described the short demo videos -- "of wooly mammoths trotting through a snowy meadow, a monster gazing at a melting candle and a Tokyo street scene seemingly shot by a camera swooping across the city" -- as "eye popping."

MIT Technology Review, which also got a preview of Sora, said the "tech has pushed the envelope of what's possible with text-to-video generation." Meanwhile, The Washington Post noted Sora could exacerbate an already growing problem with video deepfakes, which have been used to "deceive voters" and scam consumers.

One X commentator summarized it this way: "Oh boy here we go what is real anymore." And OpenAI CEO Sam Altman called the news about its video generation model a "remarkable moment."

You can see the four examples of what Sora can produce on OpenAI's intro site, which notes that the tool is "able to generate complex scenes with multiple characters, specific types of motion, and accurate details of the subject and background. The model understands not only what the user has asked for in the prompt, but also how those things exist in the physical world. The model has a deep understanding of language, enabling it to accurately interpret prompts and generate compelling characters that express vibrant emotions."

But Sora has its weaknesses, which is why OpenAI hasn't yet said whether it will actually be incorporated into its chatbots. Sora "may struggle with accurately simulating the physics of a complex scene and may not understand specific instances of cause and effect. For example, a person might take a bite out of a cookie, but afterward, the cookie may not have a bite mark. The model may also confuse spatial details of a prompt, for example, mixing up left and right."

All of this is to remind us that tech is a tool -- and that it's up to us humans to decide how, when, where and why to use that technology. In case you didn't see it, the trailer for the new Minions movie (Despicable Me 4: Minion Intelligence) makes this point cleverly, with its sendup of gen AI deepfakes and Jon Hamm's voiceover of how "artificial intelligence is changing how we see the world, transforming the way we do business."

"With artificial intelligence," Hamm adds over the minions' laughter, "the future is in good hands."

Here are the other doings in AI worth your attention.

Twenty tech companies, including Adobe, Amazon, Anthropic, ElevenLabs, Google, IBM, Meta, Microsoft, OpenAI, Snap, TikTok and X, agreed at a security conference in Munich that they will voluntarily adopt "reasonable precautions" to guard against AI tools being used to mislead or deceive voters ahead of elections.

"The intentional and undisclosed generation and distribution of Deceptive AI Election content can deceive the public in ways that jeopardize the integrity of electoral processes," the text of the accord says, according to NPR. "We affirm that the protection of electoral integrity and public trust is a shared responsibility and a common good that transcends partisan interests and national borders."

But the agreement is "largely symbolic," the Associated Press reported, noting that "reasonable precautions" is a little vague.

"The companies aren't committing to ban or remove deepfakes," the AP said. "Instead, the accord outlines methods they will use to try to detect and label deceptive AI content when it is created or distributed on their platforms. It notes the companies will share best practices with each other and provide 'swift and proportionate responses' when that content starts to spread."

AI has already been used to try to trick voters in the US and abroad. Days before the New Hampshire presidential primary, fraudsters sent an AI robocall that mimicked President Biden's voice, urging voters not to vote in the primary. That prompted the Federal Communications Commission this month to make AI-generated robocalls illegal. The AP said that "Just days before Slovakia's elections in November, AI-generated audio recordings impersonated a candidate discussing plans to raise beer prices and rig the election. Fact-checkers scrambled to identify them as false as they spread across social media."

"Everybody recognizes that no one tech company, no one government, no one civil society organization is able to deal with the advent of this technology and its possible nefarious use on their own," Nick Clegg, president of global affairs for Meta, told the Associated Press in an interview before the summit.

Over 4 billion people are set to vote in key elections this year in more than 40 countries, including the US, The Hill reported.

If you're concerned about how deepfakes may be used to scam you or your family members -- someone calls your grandfather and asks them for money by pretending to be you -- Bloomberg reporter Rachel Metz has a good idea. She suggests it may be time for us all to create a "family password" or safe word or phrase to share with our family or personal network that we can ask for to make sure we're talking to who we think we're talking to.

"Extortion has never been easier," Metz reports. "The kind of fakery that used to take time, money and technical know-how can now be accomplished quickly and cheaply by nearly anyone."

That's where family passwords come in, since they're "simple and free," Metz said. "Pick a word that you and your family (or another trusted group) can easily remember. Then, if one of those people reaches out in a way that seems a bit odd -- say, they're suddenly asking you to deliver 5,000 gold bars to a P.O. Box in Alaska -- first ask them what the password is."

How do you pick a good password? She offers a few suggestions, including using a word you don't say frequently and that's not likely to come up in casual conversations. Also, "avoid making the password the name of a pet, as those are easily guessable."

Hiring experts have told me it's going to take years to build an AI-educated workforce, considering that gen AI tools like ChatGPT weren't released until late 2022. So it makes sense that learning platforms like Coursera, Udemy, Udacity, Khan Academy and many universities are offering online courses and certificates to upskill today's workers. Now the University of Pennsylvania's School of Engineering and Applied Science said it's the first Ivy League school to offer an undergraduate major in AI.

"The rapid rise of generative AI is transforming virtually every aspect of life: health, energy, transportation, robotics, computer vision, commerce, learning and even national security," Penn said in a Feb. 13 press release. "This produces an urgent need for innovative, leading-edge AI engineers who understand the principles of AI and how to apply them in a responsible and ethical way."

The bachelor of science in AI offers coursework in machine learning, computing algorithms, data analytics and advanced robotics and will have students address questions about "how to align AI with our social values and how to build trustworthy AI systems," Penn professor Zachary Ives said.

"We are training students for jobs that don't yet exist in fields that may be completely new or revolutionized by the time they graduate," added Robert Ghrist, associate dean of undergraduate education in Penn Engineering.

FYI, the cost of an undergraduate education at Penn, which typically spans four years, is over $88,000 per year (including housing and food).

For those not heading to college or who haven't signed up for any of those online AI certificates, AI upskilling may come courtesy of their current employer. The Boston Consulting Group, for its Feb. 9 report, What GenAI's Top Performers Do Differently, surveyed over 150 senior executives across 10 sectors.

Bottom line: companies are starting to look at existing job descriptions and career trajectories, and the gaps they're seeing in the workforce when they consider how gen AI will affect their businesses. They've also started offering gen AI training programs. But these efforts don't lessen the need for today's workers to get up to speed on gen AI and how it may change the way they work -- and the work they do.

In related news, software maker SAP looked at Google search data to see which states in the US were most interested in "AI jobs and AI business adoption."

Unsurprisingly, California ranked first in searches for "open AI jobs" and "machine learning jobs." Washington state came in second place, Vermont in third, Massachusetts in fourth and Maryland in fifth.

California, "home to Silicon Valley and renowned as a global tech hub, shows a significant interest in AI and related fields, with 6.3% of California's businesses saying that they currently utilize AI technologies to produce goods and services and a further 8.4% planning to implement AI in the next six months, a figure that is 85% higher than the national average," the study found.

Virginia, New York, Delaware, Colorado and New Jersey, in that order, rounded out the top 10.

Over the past few months, I've highlighted terms you should know if you want to be knowledgeable about what's happening as it relates to gen AI. So I'm going to take a step back this week and provide this vocabulary review for you, with a link to the source of the definition.

It's worth a few minutes of your time to know these seven terms.

Anthropomorphism: The tendency for people to attribute humanlike qualities or characteristics to an AI chatbot. For example, you may assume it's kind or cruel based on its answers, even though it isn't capable of having emotions, or you may believe the AI is sentient because it's very good at mimicking human language.

Artificial general intelligence (AGI): A description of programs that are as capable as -- or even more capable than -- a human. While full general intelligence is still off in the future, models are growing in sophistication. Some have demonstrated skills across multiple domains ranging from chemistry to psychology, with task performance paralleling human benchmarks.

Generative artificial intelligence (gen AI): Technology that creates content -- including text, images, video and computer code -- by identifying patterns in large quantities of training data and then creating original material that has similar characteristics.

Hallucination: Hallucinations are unexpected and incorrect responses from AI programs that can arise for reasons that aren't yet fully known. A language model might suddenly bring up fruit salad recipes when you were asking about planting fruit trees. It might also make up scholarly citations, lie about data you ask it to analyze or make up facts about events that aren't in its training data. It's not fully understood why this happens, but hallucinations can arise from sparse training data, information gaps and misclassification.

Large language model (LLM): A type of AI model that can generate human-like text and is trained on a broad dataset.

Prompt engineering: This is the act of giving AI an instruction so it has the context it needs to achieve your goal. Prompt engineering is best associated with OpenAI's ChatGPT, describing the tasks users feed into the algorithm. (e.g. "Give me five popular baby names.")

Temperature: In simple terms, model temperature is a parameter that controls how random a language model's output is. A higher temperature means the model takes more risks, giving you a diverse mix of words. On the other hand, a lower temperature makes the model play it safe, sticking to more focused and predictable responses.

Model temperature has a big impact on the quality of the text generated in a bunch of [natural language processing] tasks, like text generation, summarization and translation.

The tricky part is finding the perfect model temperature for a specific task. It's kind of like Goldilocks trying to find the perfect bowl of porridge -- not too hot, not too cold, but just right. The optimal temperature depends on things like how complex the task is and how much creativity you're looking for in the output.
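As a sketch of where that knob lives in practice, the snippet below passes a temperature value to a chat-completion call. It assumes the OpenAI Python SDK and an API key set in the environment; the model name is just a placeholder, and other providers expose a similar parameter.

```python
# Illustrative only: setting temperature on a chat completion (assumes: pip install openai, OPENAI_API_KEY set)
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name; substitute one you have access to
    messages=[{"role": "user", "content": "Suggest a name for a coffee shop."}],
    temperature=0.2,      # low value = focused, predictable output; higher values = more varied, riskier output
)
print(response.choices[0].message.content)
```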

Editors' note: CNET is using an AI engine to help create some stories. For more, see this post.

View original post here:

AI and You: OpenAI's Sora Previews Text-to-Video Future, First Ivy League AI Degree - CNET

Posted in Artificial General Intelligence | Comments Off on AI and You: OpenAI’s Sora Previews Text-to-Video Future, First Ivy League AI Degree – CNET
