
Category Archives: Artificial General Intelligence

Analyzing the Future of AI – Legal Service India

Posted: June 6, 2024 at 8:48 am

The future of artificial intelligence (AI) promises transformative potential, impacting various domains such as healthcare, transportation, finance, and entertainment. As AI technology evolves rapidly, key trends are emerging and shaping its trajectory.

One significant trend is the rise of artificial general intelligence (AGI), where AI systems develop cognitive abilities similar to humans, enabling them to tackle complex tasks across different domains. AGI has the potential to revolutionize industries and societies, empowering machines with human-like problem-solving capabilities.

Artificial General Intelligence (AGI) represents the pinnacle of artificial intelligence, aspiring to create systems that can match or even surpass human capabilities in any intellectual endeavour. Unlike narrow AI, which is specifically designed to excel in a limited set of tasks, AGI aims for true universality, enabling it to tackle a vast range of problems and adapt to diverse situations, much like a human mind.

This ambitious goal requires AGI systems to possess a broad spectrum of cognitive abilities, including the capacity to learn from experience, reason logically, solve complex problems, and even engage in creative thought processes. Achieving this level of intelligence would mark a significant leap forward in AI, transcending the limitations of current systems and opening up a world of possibilities for how we interact with technology.

Another key trend is the integration of AI into everyday devices and systems, leading to the expansion of the Internet of Things (IoT) and smart environments. AI-powered devices can gather and analyze vast data, enabling autonomous decision-making and enhancing efficiency, convenience, and quality of life. This trend is reshaping our interactions with the physical world, creating a more connected and intelligent environment.

With the increasing prevalence and influence of AI in society, ethical considerations and responsible governance are paramount. Issues like algorithmic bias, data privacy, transparency, accountability, and the impact on employment necessitate collaboration among policymakers, industry leaders, researchers, and civil society. Frameworks and guidelines must be established to promote ethical AI use, mitigate risks, and prevent unintended consequences.

The future of AI holds significant technological advancements, including deep learning, reinforcement learning, natural language understanding, and robotics. These advances will enable AI systems to perform complex tasks with enhanced accuracy and efficiency, unlocking new avenues for innovation and discovery. From personalized medicine to predictive maintenance, AI-driven solutions offer the potential to solve pressing challenges and contribute to a more sustainable and equitable future.

Predictive maintenance uses data analysis, smart sensors, and machine learning to keep track of how machines are doing. It helps predict when a machine is likely to break down so it can be repaired before it stops working, preventing unexpected shutdowns and saving time and money.
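To make the pattern concrete, here is a minimal sketch in Python. It assumes a simple setup in which each machine reports two sensor readings (temperature and vibration, invented for illustration) and a classifier estimates the probability of failure; the features, the toy failure rule, and the 30% maintenance threshold are all assumptions, not a production recipe.

```python
# A minimal predictive-maintenance sketch (illustrative only).
# Sensor names, the failure rule, and the 30% threshold are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Simulated history: temperature and vibration readings per machine.
temperature = rng.normal(70, 10, 1000)
vibration = rng.normal(0.3, 0.1, 1000)
X = np.column_stack([temperature, vibration])

# Toy ground truth: hot, heavily vibrating machines tend to fail.
y = ((temperature > 80) & (vibration > 0.35)).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X, y)

# Score current readings and schedule maintenance above the threshold.
current = np.array([[85.0, 0.42], [68.0, 0.25]])
for reading, p_fail in zip(current, model.predict_proba(current)[:, 1]):
    action = "schedule maintenance" if p_fail > 0.3 else "ok"
    print(f"temp={reading[0]:.0f} vib={reading[1]:.2f} -> P(fail)={p_fail:.2f}: {action}")
```

In real deployments the labels would come from maintenance logs rather than a hand-written rule, but the flow (train on history, score live readings, act on a threshold) is the same.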

However, ethical and societal concerns must remain at the forefront as AI evolves. These advancements require careful consideration to ensure responsible and fair use of AI while mitigating potential risks and unintended consequences. Collaboration and dialogue among various stakeholders are essential to establish ethical guidelines and frameworks that guide the development and deployment of AI, maximizing its benefits for society while safeguarding its integrity and potential for good.

While AI presents numerous opportunities for positive change, it also brings significant challenges. These include concerns about job displacement due to automation, the potential for misuse of AI for malicious purposes, and the need to ensure ethical development and deployment that respects human rights, diversity, and inclusion. Additionally, the concentration of AI power in the hands of a few large tech companies and the potential for AI-driven surveillance and control raise concerns about privacy and civil liberties.

The future of AI holds immense potential for societal transformation and human well-being. However, realizing this potential requires careful consideration of ethical, social, and governance implications. Continued investment in research, education, and cross-disciplinary collaboration is crucial. By harnessing the power of AI responsibly and ethically, we can unlock a future of innovation, creativity, and progress that benefits all of humanity.

Written By: Rana Saman, 4th Year Law Student At Al-Ameen College Of Law


The Dark Side of AI: Financial Gains Lead to Oversight Evasion, Say Insiders – CMSWire

Posted: at 8:48 am

The Gist

Leading artificial intelligence companies avoid effective oversight because of money, and operate without sufficient accountability to government or other industry standards, former and current employees said in a letter published today.

In other words, they get away with a lot and that's not great news for a technology that comes with risks including human extinction.

"We are hopeful that these risks can be adequately mitigated with sufficient guidance from the scientific community, policymakers, and the public," the group wrote in the letter titled, "A Right to Warn about Advanced Artificial Intelligence." "However, AI companies have strong financial incentives to avoid effective oversight, and we do not believe bespoke structures of corporate governance are sufficient to change this."

The letter was signed by seven former OpenAI employees, four current OpenAI employees, one former Google DeepMind employee and one current Google DeepMind employee. It was also endorsed by AI powerhouses Yoshua Bengio, Geoffrey Hinton and Stuart Russell.

While the group believes in the potential of AI technology to deliver unprecedented benefits to humanity, it says the risks range from the further entrenchment of existing inequalities, to manipulation and misinformation, to the loss of control of autonomous AI systems potentially resulting in human extinction.

"AI companies possess substantial non-public information about the capabilities and limitations of their systems, the adequacy of their protective measures, and the risk levels of different kinds of harm," the group wrote. "However, they currently have only weak obligations to share some of this information with governments, and none with civil society. We do not think they can all be relied upon to share it voluntarily."

The list of employees who shared their names (others were listed anonymously) includes: Jacob Hilton, formerly OpenAI; Daniel Kokotajlo, formerly OpenAI; Ramana Kumar, formerly Google DeepMind; Neel Nanda, currently Google DeepMind, formerly Anthropic; William Saunders, formerly OpenAI; Carroll Wainwright, formerly OpenAI; and Daniel Ziegler, formerly OpenAI.

This isn't the first time Hilton has spoken publicly about his former company, and he was vocal on X again today.

Kokotajlo, who worked on OpenAI's governance team, quit last month and was vocal about it in a public forum as well. He said he quit OpenAI "due to losing confidence that it would behave responsibly around the time of AGI (artificial general intelligence)." Saunders, also on the governance team, departed along with Kokotajlo.

Wainwright's time at OpenAI dates back at least to the debut of ChatGPT. Ziegler, according to his LinkedIn profile, was with OpenAI from 2018 to 2021.


Leading AI companies won't give up critical information surrounding the development of AI technologies on their own, according to this group. For now, it falls to current and former employees, rather than governments, to hold them accountable to the public.

"Yet," the group wrote, "broad confidentiality agreements block us from voicing our concerns, except to the very companies that may be failing to address these issues. Ordinary whistleblower protections are insufficient because they focus on illegal activity, whereas many of the risks we are concerned about are not yet regulated."

These employees fear various forms of retaliation, given the history of such cases across the industry.


Here's the gist of what this group calls on leading AI companies to do:

AI companies should not: enter into or enforce agreements that prohibit criticism of the company over risk-related concerns, nor retaliate against employees who, after other processes have failed, publicly share risk-related concerns.

AI companies should: facilitate a verifiably anonymous process for current and former employees to raise risk-related concerns to boards, regulators, and independent expert bodies, and support a culture of open criticism, so long as trade secrets are protected.

OpenAI had no public response to the group's letter. In its most recent tweet, it shared its post about deceptive uses of AI.

"OpenAI is committed to enforcing policies that prevent abuse and to improving transparency around AI-generated content," the company wrote May 30. "That is especially true with respect to detecting and disrupting covert influence operations (IO), which attempt to manipulate public opinion or influence political outcomes without revealing the true identity or intentions of the actors behind them."


Creating ‘good’ AGI that won’t kill us all: Crypto’s Artificial Superintelligence Alliance – Cointelegraph

Posted: March 29, 2024 at 2:48 am

After a year of increasingly dire warnings about the imminent demise of humanity at the hands of superintelligent artificial intelligence (AI), Magazine is in Panama at the Beneficial AGI Conference to hear the other side of the story. Attendees include an eclectic mix of transhumanists, crypto folk, sci-fi authors including David Brin, futurists and academics.

The conference is run by SingularityNET, a key member of the proposed new Artificial Superintelligence Alliance, and Magazine is here to find out what happens if everything goes right with creating artificial general intelligence (AGI), that is, human-level AI.

But how do we bring about that future, rather than the scenario in which Skynet goes rogue and kills us all?

One of the best insights into why those questions are so important comes from futurist Jose Luis Cordeiro, author of The Death of Death, who believes humanity will cure all diseases and aging thanks to AGI.

He tells Magazine of some sage wisdom that Arthur C. Clarke, the author of 2001: A Space Odyssey, once told him.

He said: "We have to be positive about the future because the images of the future of what's possible begin with our minds. If we think we will self-destroy, most likely we will. But if we think that we will survive, [that] we will move into a better world, [then we] will work toward that and we will achieve it. So it begins in our minds."

Humans are hardwired to focus more on the existential threats from AGI than on the benefits.

Evolutionarily speaking, it's better that our species worries nine times too often that the wind rustling in the bushes could be a tiger than it is to be blithely unconcerned about the rustling and get eaten by a tiger on the 10th occurrence.

Even the doomers don't put a high percentage chance on AGI killing us all, with a survey of almost 3,000 AI researchers suggesting the chance of an extremely bad outcome ranges from around 5% to 10%. So while that's worryingly high, the odds are still in our favor.

Opening the conference, SingularityNET founder and the "Father of AGI," Dr. Ben Goertzel, paid tribute to Ethereum founder Vitalik Buterin's concept of defensive accelerationism. That's the midpoint between the effective accelerationism techno-optimists, with their "move fast and break things" ethos, and the decelerationists, who want to slow down or halt the galloping pace of AI development.

Goertzel believes that deceleration is impossible but concedes there's a small chance things could go horribly wrong with AGI. So he's in favor of pursuing AGI while being mindful of the potential dangers. Like many in the AI/crypto field, he believes the solution is open-sourcing the technology and decentralizing the hardware and governance.

This week SingularityNET announced it has teamed up with the decentralized multi-agent platform Fetch.ai, founded by DeepMind veteran Humayun Sheikh, and the data exchange platform Ocean Protocol to form the Artificial Superintelligence Alliance (ASI).

It will be the largest open-sourced independent player in AI research and development, and has proposed merging SingularityNET, Fetch.ai and Ocean Protocol's existing tokens into a new one called ASI. It would have a fully diluted market cap of around $7.5 billion, subject to approval votes over the next two weeks. The three platforms would continue to operate as separate entities under the guidance of Goertzel, with Sheikh as chair.

According to the Alliance, the aim is to create a powerful, compelling alternative to Big Tech's control over AI development, use and monetization by creating decentralized AI infrastructure at scale and accelerating investment into blockchain-based AGI.

Probably the most obvious beneficial impact is AGI's potential to analyze huge swathes of data to help solve many of our most difficult scientific, environmental, social and medical issues.

We've already seen some amazing medical breakthroughs, with MIT researchers using AI models to evaluate tens of thousands of potential chemical compounds and discover the first new class of antibiotics in 60 years, one that's effective against the hitherto drug-resistant MRSA bacteria. It's the sort of scaling up of research that's almost impossible for humans to achieve.
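The MIT work itself used deep learning on molecular structures; purely to illustrate the scaling pattern described above (score every compound in a large library with a trained model, then send only the top-ranked hits to the lab), here is a toy sketch in which the fingerprint features, labels, and model are made-up stand-ins.

```python
# Sketch of model-driven compound screening (the pattern, not the MIT method).
# Fingerprints and activity labels are random stand-ins for real chemistry.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Pretend training set: 2,000 compounds with known antibacterial activity.
train_fp = rng.random((2000, 64))
train_active = (train_fp[:, :8].sum(axis=1) > 4.5).astype(int)  # toy label rule

model = LogisticRegression(max_iter=1000).fit(train_fp, train_active)

# "Tens of thousands" of untested candidates, scored in one pass.
library = rng.random((50_000, 64))
scores = model.predict_proba(library)[:, 1]

top = np.argsort(scores)[::-1][:10]  # top 10 candidates for lab follow-up
for idx in top:
    print(f"compound #{idx}: predicted activity {scores[idx]:.3f}")
```

The point is the shape of the workflow: a model cheaply ranks a library far larger than any lab could test, and humans verify only the shortlist.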


And that's all before we get to the immortality and mind-uploading stuff that the transhumanists get very excited about but which weirds most people out.

This ability to analyze great swathes of data also suggests the technology will be able to give early warnings of pandemics, natural disasters and environmental issues. AI and AGI also have the potential to free humans from drudgery and repetitive work, from coding to customer service help desks.

While this will cause a massive upheaval to the workforce, past disruptions, from the washing machine to Amazon's online businesses, also had big impacts on particular occupations, and the hope is that a bunch of new jobs will be created instead.

Economics professor Robin Hanson says this is what has happened over the past two decades, even though people at the turn of the century were very concerned that automation would replace workers.

Hanson's study of the data on how automation impacted wages and employment across various industries between 1999 and 2019 found that despite big changes, most people still had jobs and were paid pretty much the same.

"On average, there wasn't a net effect on wages or jobs in automation of U.S. jobs from 1999 to 2018," he says.

Janet Adams, the optimistic CEO of SingularityNET, explains that AGI has the potential to be extraordinarily positive for all humanity.

"I see a future in which our future AGIs are making decisions which are more ethical than the decisions which humans make. And they can do that because they don't have emotions or jealousy or greed or hidden agendas," she says.

Adams points out that 25,000 people die every day from hunger, even as people in rich countries throw away mountains of food. It's a problem that could be solved by intelligent allocation of resources across the planet, she says.

But Adams warns AGI needs to be trained on data sets reflecting the entire world's population, and not just the top 1%, so that "when they make decisions, they won't make them just for the benefit of the powerful few, they will make them for the benefit of the broader civilization, broader humanity."

Anyone who watched the early utopian dreams of a decentralized internet crumble into a corporate ad-filled landscape of addictive design and engagement farming may have doubts this rosy future is possible.

Building high-end AI requires a mountain of computing and other resources that are currently out of reach of all but a handful of the usual suspects: Nvidia, Google, Meta and Microsoft. So the default assumption is that one of these tech giants will end up controlling AGI.

Goertzel, a long-haired hippy who plays in a surprisingly good band fronted by a robot, wants to challenge that assumption.

Goertzel points out that the default assumption used to be that companies like IBM would win the computing industry and Yahoo would win search.

"The reason these things change is because people were concretely fighting to change it in each instance," he says. "Instead, Bill Gates, Steve Jobs and the Google guys came along."

The founder of SingularityNET, he's been thinking about the Singularity (a theoretical moment when technological development increases exponentially) since the early 1970s, when he read an early book on the subject called The Prometheus Project.

He's been working on AGI for much of the time since then, popularizing the term AGI and launching the OpenCog AI framework in 2008.

Adams says Goertzel is a key reason SingularityNET has a credible shot.

"We are the biggest not-for-profit, crypto-funded AI science and research team on the planet," Adams says, noting their competitors have been focused on narrow AIs like ChatGPT and are only now shifting their strategy to AGI.

"They're years behind us," she says. "We have three decades of research with Dr. Ben Goertzel in neural symbolic methods."

But she adds that opening up the platform to any and all developers around the world and rewarding them for their contribution will give it the edge even over the mega-corporations who currently dominate the space.

"Because we have a powerful vision and a powerful commitment to building the most advanced, most intelligent AGI in a democratic way, it's hard to imagine that Big Tech or any other player could come in and compete, particularly when you're up against open source."

"[We will] see a potentially huge influx of people developing on the SingularityNET marketplace and the continued escalation of pace toward AGI. There's a good chance it will be us."

The Prometheus Project proposed that AI was such an earth-shattering development that everyone in the world should get a democratic vote on its development.

So when blockchain emerged, it seemed like implementing decentralized infrastructure and token-based governance for AI was the next most practical alternative.

HyperCycle CEO Toufi Saliba tells Magazine this mitigates the threat of a centralized company or authoritarian country gaining immense power from developing AGI first, which would be the worst thing that ever happened to humanity.


It's not the only potential solution to the problem. Meta chief AI scientist Yann LeCun is a big proponent of open-sourcing AI models and letting a thousand flowers bloom, while X owner Elon Musk recently open-sourced the model for Grok.

But blockchain is arguably a big step up. SingularityNET aims to network the technology around the world, with different components controlled by different communities, thereby spreading the risk of any single company, group or government controlling the AGI.

"So you could use these infrastructures to implement decentralized deep neural networks, you could use them to implement a huge logic engine, you can use them to implement an artificial life approach where you have a simulated ecosystem and a bunch of little artificial animals interacting and trying to evolve toward intelligence," explains Goertzel.

"I want to foster creative contributions from everywhere, and it may be some, you know, 12-year-old genius from Tajikistan comes up with a new artificial life innovation that provides a breakthrough to AGI."

HyperCycle is a ledgerless blockchain that's fast enough to allow AI components to communicate, coordinate and transact to finality in under 300 milliseconds. The idea is to give AIs a way to call on the resources of other AIs, paid for via microtransactions.

For now, the fledgling network is being used for small-scale applications, like an AI app calling on another AI service to help complete a task. But in time, as the network scales, it's theoretically possible that AGI might be an emergent property of the various AI components working together in a sort of distributed brain.

"So, in that approach, the entire world has a much higher chance to get to AGI as a single entity," Saliba says.
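HyperCycle's actual interfaces aren't described in this article, so the following is only a toy model of the pattern it outlines: one AI agent invoking another's service, with a micropayment settling alongside the call. The Agent class and its methods are invented for illustration and are not the HyperCycle API.

```python
# Toy model of AI-to-AI service calls settled by micropayments.
# All names here are hypothetical; this is not the HyperCycle API.
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    balance: float          # account balance in some token
    price_per_call: float   # fee this agent charges per service call
    skills: dict = field(default_factory=dict)

    def call(self, provider: "Agent", skill: str, payload):
        """Pay the provider, then invoke one of its skills."""
        if self.balance < provider.price_per_call:
            raise RuntimeError(f"{self.name} cannot afford {provider.name}")
        self.balance -= provider.price_per_call      # microtransaction out...
        provider.balance += provider.price_per_call  # ...settles with the call
        return provider.skills[skill](payload)

# A summarizer agent selling a (stubbed) summarization skill.
summarizer = Agent("summarizer", balance=0.0, price_per_call=0.001,
                   skills={"summarize": lambda text: text[:40] + "..."})
planner = Agent("planner", balance=1.0, price_per_call=0.002)

result = planner.call(summarizer, "summarize",
                      "A long document about decentralized AI networks and emergent intelligence.")
print(result, planner.balance, summarizer.balance)
```

The real network's contribution is making such settlements trustless and fast enough (sub-300ms finality, per the article) that chains of agents can afford to pay each other per request.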

Goertzel didn't develop HyperCycle for that reason; he just needed something miles faster than existing blockchains to enable AIs to work together.

The project he's most excited about is OpenCog Hyperon, which launches in alpha this month. It combines deep neural nets, logic engines, evolutionary learning and other AI paradigms in the same software framework, all updating the same extremely decentralized Knowledge Graph.

The idea is to throw open the doors to anyone who wants to work on it, in the hope they can improve the METTA AGI programming language so it can scale up massively. "We will have the complete toolset for building the baby AGI," he says. "To get something I would want to call a baby AGI, we will need that million-times speedup of the METTA interpreter."

"My own best guess is that OpenCog Hyperon may be the system to make the [AGI] breakthrough."

Of course, decentralization does not ensure things will go right with AGI. As Goertzel points out, the government of Somalia was decentralized very widely in the 1990s under a bunch of warlords and militias, but it would have been preferable at the time to live under the centralized government of Finland.

Furthermore, token-based governance is a long way from being fit for prime time. In projects like Uniswap and Maker, large holders like a16z and the core team have so many tokens that it's almost not worth anyone else voting. Many other decentralized autonomous organizations are wracked by politics and infighting.

The surging price of crypto/AI projects has attracted a bunch of token speculators. Are these really the people we want to put in control of AGI?

Goertzel argues that while blockchain projects are currently primarily attractive to people interested in making money, that will change as the use case evolves.

"If we roll out the world's smartest AI on decentralized networks, you will get a lot of other people involved who are not primarily oriented toward financial speculation. And then it'll be a different culture."

But if the Artificial Superintelligence Alliance does achieve AGI, wouldn't its tokens be ludicrously expensive and out of reach of those primarily interested in beneficial AGI?

Goertzel suggests that a weighted voting system prioritizing those who have contributed to the project may be required:

"I think for guiding the mind of the AGI, we want to roll out a fairly sophisticated, decentralized reputation system and have something closer to one person, one vote, but where people who have some track record of contributing to the AI network and making some sense get a higher weighting."
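As a toy version of what such a scheme might look like, the sketch below tallies one-person-one-vote ballots while scaling each ballot by the voter's contribution record; the logarithmic weighting formula is an invented assumption, not anything SingularityNET has specified.

```python
# Toy reputation-weighted vote tally. The log-based weighting
# formula is an invented assumption for illustration.
import math

def vote_weight(reputation: float) -> float:
    """Everyone gets a base vote; proven contributors get a modest boost."""
    return 1.0 + math.log1p(reputation)

voters = [
    {"name": "alice", "reputation": 120.0, "vote": "yes"},  # long-time contributor
    {"name": "bob",   "reputation": 0.0,   "vote": "no"},   # new arrival
    {"name": "carol", "reputation": 15.0,  "vote": "yes"},
]

tally: dict[str, float] = {}
for v in voters:
    tally[v["vote"]] = tally.get(v["vote"], 0.0) + vote_weight(v["reputation"])

print(tally)  # e.g. {'yes': ~9.6, 'no': 1.0}
```

The logarithm keeps the boost modest: a contributor with 120 reputation points counts for roughly six base votes, not 120, so whales cannot simply buy the outcome.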


Based in Melbourne, Andrew Fenton is a journalist and editor covering cryptocurrency and blockchain. He has worked as a national entertainment writer for News Corp Australia, on SA Weekend as a film journalist, and at The Melbourne Weekly.


Beyond the Buzz: Clear Language is Necessary for Clear Policy on AI | TechPolicy.Press – Tech Policy Press

Posted: at 2:48 am

Based on the number of new bills across the states and in Congress, the number of working groups and reports commissioned by city, state, and local governments, and the drumbeat of activity from the White House, this would appear to be an agenda-setting moment for policy regarding artificial intelligence (AI) in the United States. But the language describing AI research and applications continues to generate confusion and seed the ground for potentially harmful missteps.

Stakeholders agree that AI warrants thoughtful legislation but struggle to reach consensus around problems and corresponding solutions. An aspect of this confusion is embodied in the words we use. It is imperative that we not only know what we are talking about regarding AI, but agree on how we talk about it.

Last fall, the US Senate convened a series of closed-door meetings to inform US AI strategy. It brought together academics and civil society leaders, but was disproportionately headlined by prominent industry voices who have an interest in defining the terms of the discussion. From the expanding functionality of ever-larger AI models to the seemingly far-off threat to human existence, lawmakers and the public are immersed in AI branding and storytelling. Loaded terminology can mislead policymakers and stakeholders, ultimately causing friction between competing aspects of an effective AI agenda. While speculative and imprecise language has always permeated AI, we must emphasize nomenclature leaning more towards objectivity than sensationalism. Otherwise, US AI strategy could be misplaced or unbalanced.

Intelligence represents the promise of AI, yet it's a construct that's difficult to measure. The very notion is multifaceted and characterized by a fraught history. The intelligence quotient (IQ), the supposed numerical representation of cognitive ability, remains misused and misinterpreted to this day. Corresponding research has led to contentious debates over purported fundamental differences between the IQ scores of Black, White, and Hispanic people in the US. There's a long record of dubious attempts to quantify intelligence in ways that cause a lot of harm, and there is a real danger that language about AI might do the same.

Modern discussions in the public sphere give full credence to AI imbued with human-like attributes. Yet this idea serves as a shaky foundation for debate about the technology. Evaluating the power of current AI models relies on how they're tested, but the alignment between test results and our understanding of what they can do is often not clear. AI taxonomy today is predominantly defined by commercial institutions. Artificial general intelligence (AGI), for example, is a phrase intended to illustrate the point at which AI matches or surpasses humans on a variety of tasks. It suggests a future where computers serve as equally competent partners. One by one, industry leaders have now made AGI a business milestone. But it's uncertain how to know once we've crossed that threshold, and so the mystique seeps into the ethos.

Other examples illustrate this sentiment as well. The idea of a model's emergent capabilities nods to AI's inherent capacity to develop and even seem to learn in unexpected ways. Similar developments have convinced some users of a large language model's (LLM) sentience.

These concepts remain disputed, however; other scientists contend that even though bigger LLMs typically yield better performance, the presence of these phenomena ultimately depends on a practitioner's test metrics.

The language and research of the private sector disproportionately influence public discussion of AI. Perhaps it's their prerogative; entrepreneurs and industry experts aren't wrong to characterize their vision in their own way, and aspirational vocabulary helps them aim higher and broader. But it may not always be in the public interest.

These terms aren't technical jargon buried deep in a peer-reviewed article. They are tossed around every day in print, on television, and in congressional hearings. There's an ever-present tinge of not-quite-proven positive valence. On one hand, AI is propped up with bold attributes full of potential; on the other, it is often dismissed and reduced to a mechanical implement when things go wrong.

The potential societal impact is inevitable when unproven themes are parroted by policymakers who may not always have time to do their homework.

Politicians are not immune to the hype. Examples abound in the speeches of world leaders like UK Prime Minister Rishi Sunak and in the statements of President Joe Biden. Congressional hearings and global meetings of the United Nations have adopted language from the loudest, most visible voices, providing a wholesale dressing for the entire sector.

What's missing here is the acknowledgement of how much language sets the conditions for our reality, and how these conversations play out in front of the media and public. We lack common, empirical, and objective terminology. Modern AI descriptors mean one thing to researchers but may express something entirely different to the public.

We must call for intentional efforts to define and interrogate the words we use to describe AI products and their potential functionality. Exhaustive and appropriate test metrics must also justify claims. Ultimately, hypothetical metaphors can mislead the public and lawmakers, and this can influence the suitability of laws or inspire emerging AI institutions with ill-defined missions.

We can't press reset, but we can provide more thoughtful framing.

The effects of AI language are incredibly broad and indirect but, in total, can be enormously impactful. Steady, small-scale steps may gradually shape our understanding of AI, modifying behavior by reinforcing successive approximations that bring us ever closer to a desired belief.

By the time we ask, "How did we get here?", the ground may have shifted underneath our feet.


Whoever develops artificial general intelligence first wins the whole game – ForexLive

Posted: at 2:48 am


Elon Musk Believes ‘Super Intelligence’ Is Inevitable and Could End Humanity – Observer

Posted: at 2:48 am

Elon Musk urges A.I. leaders to steer in the most positive direction possible. STR/NurPhoto via Getty Images

Elon Musk believes that, at the current pace of advancement, A.I. will likely surpass human intelligence by 2030, and that there's a real chance of the technology ending humanity. But that doesn't mean the future is all bleak. Speaking at a Silicon Valley event last week (March 19), the Tesla and SpaceX CEO warned that A.I. is happening fast, and that "we want to try to steer [A.I.] in the most positive direction possible to increase the probability of a great future."

Musk spoke during a fireside chat with Peter Diamandis at the Abundance 360 Summit, hosted by Singularity University, a Silicon Valley institution that counsels business leaders on bleeding-edge technologies. Diamandis is the founder of both Singularity University and the XPRIZE Foundation, a nonprofit that hosts science competitions, some of which are sponsored by Musk.

"It's called singularity for a reason," Musk said in reference to the host of the event. "When you have the advent of super intelligence, it's very difficult to predict what will happen next; there's some chance it will end humanity." Musk added that he agreed with A.I. godfather Geoffrey Hinton that there's a 10 to 20 percent probability of such an event taking place.

While acknowledging the risks of A.I. surpassing human intelligence, Musk also highlighted the potential for a positive outcome outweighing the negative, pointing to the title of Diamandis' 2014 book, Abundance: The Future is Better Than You Think, as a desirable result. The book portrays a future where A.I. and robotics will drastically drive down the cost of goods and services, thus benefiting human society. Musk also brought up the Culture series by Scottish sci-fi author Iain M. Banks as the best possible scenario of a semi-utopian A.I. future.

Musk used the analogy of raising a child to describe developing A.I. and artificial general intelligence (A.G.I.) in a way that creates a positive impact on humankind going forward. He stressed the importance of fostering a truthful and ethical approach to A.I. development, drawing parallels to Stanley Kubrick's 1968 film, 2001: A Space Odyssey.

"I think [what's] incredibly important for A.I. safety is to have a maximum sort of truth-seeking and curious A.I.," Musk said, adding that he believed achieving ultimate A.I. safety hinged on never forcing A.I. to lie, even when confronted by an unpleasant truth.

A main plot point in 2001: A Space Odyssey was the A.I. being forced to lie, causing it to kill the crew of the spaceship. "So the lesson there is don't force an A.I. to lie or do things that are axiomatically incompatible, but to do two things that are actually mutually possible," the SpaceX CEO explained.

However, Musk pointed to various constraints that could slow the expansion of A.I., including the tight supply of A.I. chips seen last year and the growing demand for voltage step-down transformers, which are needed to convert high-voltage power to the lower voltage required by devices in homes and businesses. "That is literally the issue this year," he said.

The discussion at one point touched on the concept of merging the neocortex of the human mind with the cloud. While Musk described the goal of uploading a person's consciousness and memories to the cloud as a ways off, he touted his brain-computer interface startup Neuralink and its first human patient. A live demo with the patient, who is quadriplegic, was recently carried out in an FDA-approved trial. After receiving a brain implant, the patient was able to control the screen, play video games, download software and do anything else that's possible when using a mouse, just by thinking about it. "It's going quite well. The first patient is actually able to control their computer just by thinking," Musk said.

Musk said the expansion of A.I. may remove the restraints for creating a whole brain interface, but Neuralink is working toward that goal in the meantime.


What was (A)I made for? – by The Ink – The.Ink

Posted: at 2:48 am

The real A.I. threat? Not some future Matrix turning us all into rechargeable batteries, but today's A.I. industry demanding all of our data, labor, and energy right now.

The vast tech companies behind generative A.I. (the latest iteration of the tech, responsible for all the hyperrealistic puppy videos and uncanny automated articles) have been busy exploiting workers, building monopolies, finding ways to write off their massive environmental impacts, and disempowering consumers while sucking up every scrap of data they produce.

But generative A.I.'s hunger for data far outstrips that of earlier digital tools, so firms are doing this on a vaster scale than we've seen in any previous technology effort. (OpenAI's Sam Altman is trying to talk world leaders into committing $7 trillion to his project, a sum exceeding GDP growth for the entire world in 2023.) And that's largely in pursuit of a goal, A.G.I., or artificial general intelligence, that is, so far as anyone can tell, more ideological than useful.

Karen Hao, who's covered the A.I. industry for MIT Technology Review, The Wall Street Journal, and most recently The Atlantic, is one of the few writers who has focused specifically on the human, environmental, and political costs of emerging A.I. technology. Below, she tells us about the very physical supply chain behind digital technologies, the mix of magical thinking and profit maximization that drives A.I.'s most influential advocates, how A.I. advances might jeopardize climate goals, and who stands to gain and lose the most from widespread adoption of generative A.I.

A lot has been promised about what A.I. will supposedly do for us, but you've been writing mostly about what A.I. might cost us. What are the important hidden costs people are missing in this A.I. transition that we're going through?

I like to think about the fact that A.I. has a supply chain like any other technology; there are inputs that go into the creation of this technology, data being one, and then computational power or computer chips being another. And both of those have a lot of human costs associated with them.

First of all, when it comes to data, the data comes from people. And that means that if the companies are going to continue expanding their A.I. models and trying to, in their words, deliver more value to customers, that fuels a surveillance capitalism business model where they're continuing to extract data from us. But the cleaning and annotation of that data requires a lot of labor, a lot of low-income labor. Because when you collect data from the real world, it's very messy, and it needs to be curated and neatly packaged in order for a machine learning model to get the most out of it. And a lot of this work, this is an entire industry now, the data annotation industry, is exported to developing countries, to Global South countries, just like many other industries before it.

Have we just been trained to miss this by our experience with the outsourcing of manufacturing, or by what's happened to us as consumers of online commerce? And is this really just an evolution of what we've been seeing with big tech already?

There's always been outsourcing of manufacturing. And in the same way, we now see a lot of outsourced work happening in the A.I. supply chain. But the difference is that these are digital products. And I don't think people have fully wrapped their heads around the fact that there is a very physical and human supply chain to digital products.

A lot of that is because of the way that the tech industry talks about these technologies. They talk about it like, "It comes from the cloud, and it works like magic." And they don't really talk about the fact that the magic is actually just people, teaching these machines, very meticulously and under great stress and sometimes trauma, to do the right things. And the A.I. industry is built on surveillance capitalism, as internet platforms in general have been built on this ad-targeting business that's in turn been built on the extraction of our data.

But the A.I. industry is different in the sense that it has an even stronger imperative to extract that data from us, because the amount of data that goes into building something like ChatGPT completely dwarfs the amount of data that was going into building lucrative ad businesses. We've seen these stories showing that OpenAI and other companies are running out of data. And that means that they face an existential business crisis: if there is no more data, they have to generate it from us in order to continue advancing their technology.


Connecting these issues seems like the way people really need to be framing this stuff, but it's a frame that most people are still missing. These are all serious anti-democratic threats.


The evolution of artificial intelligence (AI) spending by the U.S. government | Brookings – Brookings Institution

Posted: at 2:48 am

In April 2023, a Stanford study found rapid acceleration in U.S. federal government AI spending in 2022. In parallel, the House Appropriations Committee was reported in June 2023 to be focusing on advancing legislation to incorporate artificial intelligence (AI) in an increasing number of programs, and third-party reports tracking the progress of this legislation corroborate those findings. In November 2023, both the Department of Defense (DoD) and the Department of State (DoS) released AI strategies, illustrating that policy is starting to catch up to, and potentially shape, expenditures. Recognizing the criticality of this domain for government, the Brookings Institution's Artificial Intelligence and Emerging Technology Initiative (AIET) has been established to advance good governance of transformative new technologies and promote effective solutions to the most pressing challenges posed by AI and emerging technologies.

In this second in a series of articles on AI spending in the U.S. federal government, we continue to follow the trail of money to understand the federal market for AI work. In our last article, we analyzed five years of federal contracts. Key findings included that over 95% of AI-labeled expenditures were in NAICS 54 (professional, scientific, and technical services); that within this category over half of the contracts and nearly 90% of contract value sit within the Department of Defense; and that the vast majority of vendors had a single contract, reflecting a very fragmented vendor community operating in very narrow niches.

All of the data for this series has been taken directly from federal contracts and was consolidated and provided to us by Leadership Connect. Leadership Connect has an extensive repository of federal contracts and their data forms the basis for this series of papers.

In this analysis, we analyzed all new federal contracts since our original report that had the term "artificial intelligence" (or "AI") in the contract description. As such, our dataset included 489 new contracts to compare with 472 existing contracts. Existing values are based on our previous study, tracking the five years up to August 2022; new values are based on the following year, to August 2023.

Out of the 15 NAICS code categories we identified in the first paper, only 13 NAICS codes were still in use on previous contracts and only five were used in new contracts, demonstrating a refinement and focusing of the categorization of AI work. In the current analysis, we differentiate between funding obligated and potential value of award, as the former is indicative of current investment and the latter is representative of future appetite. During the period of the study, funding obligated increased over 150%, from $261 million to $675 million, while potential value of award increased almost 1200%, from $355 million to $4.561 billion. For funding obligated, NAICS 54 (Professional, Scientific and Technical Services) was the most common code, followed by NAICS 51 (Information and Cultural Industries); NAICS 54 increased from $219 million for existing contracts to $366 million for new contracts, while NAICS 51 grew from $5 million to $17 million. For potential value of award, NAICS 54 increased from $311 million for existing contracts to $1.932 billion for new contracts, while NAICS 51 grew from $5 million to $2.195 billion, eclipsing all other NAICS codes.
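The growth percentages quoted above follow from the dollar figures by simple arithmetic, as the short check below shows (the dollar amounts are taken from this paragraph).

```python
# Sanity-check the growth percentages quoted above.
figures = {
    "funding obligated (all)":     (261e6, 675e6),    # quoted as "over 150%"
    "potential value (all)":       (355e6, 4.561e9),  # quoted as "almost 1200%"
    "NAICS 54, funding obligated": (219e6, 366e6),
    "NAICS 51, potential value":   (5e6,   2.195e9),
}

for label, (before, after) in figures.items():
    growth = (after - before) / before * 100
    print(f"{label}: {growth:,.0f}% increase")
```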

The number of federal agencies with contracts rose from 17 to 23 in the last year, with notable additions including the Department of the Treasury, the Nuclear Regulatory Commission, and the National Science Foundation. With astounding growth from 254 contracts to 657 in the last year, the Department of Defense continues to dominate in AI contracts, with NASA and Health and Human Services a distant second and third at 115 and 49 contracts respectively. From a potential value perspective, defense rose from $269 million and 76% of all federal AI funding to $4.323 billion and 95%. In comparison, NASA and HHS increased their AI contract values by between 25% and 30% each, but still fell to 1% each of the overall federal government AI contract potential value, from 11% and 6% respectively, due to the 1500% increase in DoD AI contract values. In essence, DoD grew its AI investment to such a degree that all other agencies became a rounding error.

For existing contracts, there were four vendors with over $10 million in contract value, of which one was over $50 million. For new contracts, there were 205 vendors with over $10 million in contract value, of which six were over $50 million and a seventh was over $100 million. The driver for the change in potential value of contracts appears to be the proliferation of $15 million and $30 million maximum-potential-value contracts, of which 226 and 25 were awarded respectively in the last year, but none of which have funds obligated to them yet. We posit that these are contract vehicles established at the maximum signing-authority value for future funding allocation and expenditure. It is notable that only one of the firms in the top 10 by potential contract value in the previous study was in the top 10 of new contract awards (MORSE Corp), that the top firm in previous years did not receive any new contract (AI Signal Research), and that the new top firm did not receive any contracts in previous study years (Palantir USG).

In our previous analysis, we reported 62 firms with multiple awards, while over the past year there were 72 firms receiving multiple awards. However, the maximum number of awards has changed significantly: the highest number of existing contracts was 69 (AI Solutions), while for new contracts the maximum is four. In fact, there were 10 vendors with four or more existing contracts but only three vendors with four or more new ones (Booz Allen Hamilton, Leidos, and EpiSys Science). This reflects a continued fragmented vendor community operating in very narrow niches with a single agency.

Growth in private-sector R&D has been above 10% per year for a decade, while the federal government has shown more modest growth over the last five years after a period of stagnation. However, the 1200% one-year increase in AI potential value of awards, to over $4.2 billion, is indicative of a new imperative in government AI R&D leading to deployment.

In our previous analysis, we noted that the vendor side of the market was highly fragmented, with many small players whose main source of revenue was likely a single contract with a nearby federal client. The market remains fragmented with smaller vendors, but larger players such as Accenture, Booz Allen Hamilton, General Atomics, and Lockheed Martin are moving quickly into the market, following, or perhaps driving, the significant increase in the value of contracts. In our previous analysis, we identified that these larger firms would be establishing beachheads for entry into AI, and we expect this trend to continue with other large defense players such as RAND, Northrop Grumman, and Raytheon as vendors integrate AI into their offerings.

From the client side, we had previously discussed the large number of relatively small contracts demonstrating an experimental phase of purchasing AI. The explosion of large, maximum-potential-value contracts appears to mark a shift from experimentation to implementation, which is bolstered by the shift from almost uniquely NAICS 54 to a balance between NAICS 54 and 51. While research and experimentation are still ongoing, there are definite signs of vendors bringing concrete technologies and systems to the federal market. The thousand flowers are starting to bloom, and agencies, particularly DoD, are tending to them carefully.

We had identified that the focus of federal AI spending was DoD, and over the last year this focus has proportionally become almost total. Defense AI applications have long been touted as a potential long-term growth area, and it appears that 2022/23 has been a turning point in the realization of those aspirations. While other agencies are continuing to invest in AI, either adding to existing investment or just starting, DoD is massively investing in AI as a new technology across a range of applications. In January 2024, Michael C. Horowitz (deputy assistant secretary of defense for force development and emerging capabilities) confirmed a wide swath of investments in research, development, test and evaluation, and new initiatives to speed up experimentation with AI within the department.

We have noted in other analyses that there are different national approaches to AI development: the U.S. and its allies have been focusing on the traditional guardrails of technology management (e.g., data governance, data management, education, public service reform) and so spread their expenditures between governance and capacity development, while potential adversaries are almost exclusively focused on building up their R&D capacity and largely ignore the guardrails. While we had identified risks with a broad-based approach leading to a winnowing of projects for a focused ramp-up of investment, we instead see a more muscular approach in which a wide range of projects are receiving considerable funding. The vast increase in overall spending, particularly in defense applications, appears to indicate that the U.S. is substantially ramping up its investment in this area to address the threat of potential competitors. At the same time, public statements by federal agency leaders often strike a balance between the potential benefits and the risks of AI while outlining potential legislative and policy avenues, as agencies seek means of controlling the potential negative impacts of AI. The recent advancement of U.S. congressional legislation and agency strategies, coupled with the significant investment increase identified in the current study, demonstrates that well-resourced countries such as the U.S. can have both security and capacity when it comes to AI.

The current framework for solving this coordination issue is the National Artificial Intelligence Initiative Office (NAIIO), which was established by the National Artificial Intelligence Initiative Act of 2020. Under this Act, the NAIIO is directed to "sustain consistent support for AI R&D ... support AI education ... support interdisciplinary AI research ... plan and coordinate Federal interagency AI activities ... and support opportunities for international cooperation with strategic allies ... for trustworthy AI systems." While the intent of this Act and its formal structure are admirable, current federal spending does not seem to reflect these lofty goals. Rather, we are seeing a federal market that appears to be much more chaotic than desirable, especially given the lead that China already has on the U.S. in AI activities. This fragmented federal market may resolve itself as the recent Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence directs agency engagement on the issue of monitoring and regulating AI.

In conclusion, the analysis of the U.S. federal government's AI spending over the past year reveals a remarkable surge in investment, particularly within the DoD. The shift from experimental contracts to large, maximum-potential-value contracts indicates a transition from testing to implementation, with a significant increase in both funding obligated and potential value of awards. The federal government's focus on AI, as evidenced by the substantial investments and legislative initiatives, reflects a strategic response to global competition and security challenges. While the market remains fragmented with smaller vendors, the concentration of investments in defense applications signals a turning point in the realization of AI's potential across various government agencies. The current trajectory, led by the DoD, aligns with the broader national approach that combines governance and capacity development to ensure both security and innovation in AI technologies.

As we noted in our first article in this series, if one wants to know what the real strategy is, one must follow the money. In the case of the U.S. federal government, the strategy is clearly focused on defense applications of AI. The spillover of this focus is a likelihood that defense and security priorities, needs, and values will dominate government applications. This is a double-edged sword: while it may lead to more secure national systems or more effective defenses against hostile uses of AI against the U.S. and its allies, it may also involve trade-offs in individual privacy or decision-making transparency. However, the appropriate deployment of AI by government has the potential to increase both security and freedom, as noted in other contexts such as surveillance.

The AI industry is in a rapid growth phase, as demonstrated by the potential revenues from the sector growing exponentially. As virtually all new markets go through the same industry growth cycle, the increasing value of the AI market will likely continue to draw in new firms in the short term, including the previously absent large players whose attention and capacity have now been drawn by the degree of actual and potential market capitalization. While an industry consolidation phase of start-up and smaller-player acquisitions will likely happen in the future, if the scale of AI market increase continues at a similar rate, this winnowing process is likely still several years away. That being said, the government may start to look more toward its established partner firms, particularly in the defense and security sector, which have the track record and industrial capacity to meet the high-value contracting vehicles being put in place.

Despite the commendable intentions outlined in the National Artificial Intelligence Initiative Act of 2020, the current state of federal spending on AI raises concerns about coordination and coherence. The NAIIO is tasked with coordinating interagency AI activities and promoting international cooperation, but the observed chaotic nature of the federal market calls into question the effectiveness of the existing framework. The fragmented market may see resolution as the recent executive order on AI guides agencies toward a more cohesive and coordinated approach. As the U.S. strives to maintain its technological leadership and address security challenges posed by potential adversaries, the coordination of AI initiatives will be crucial. The findings emphasize the need for continued policy development, strategic planning, and collaborative efforts to ensure the responsible and effective integration of AI technologies across the U.S. federal government.

View original post here:

The evolution of artificial intelligence (AI) spending by the U.S. government | Brookings - Brookings Institution

Posted in Artificial General Intelligence

Fetch.ai, Ocean Protocol and SingularityNET to Partner on Decentralized AI – PYMNTS.com

Posted: at 2:48 am

Three entities in the field of artificial intelligence (AI) plan to combine to create the Artificial Superintelligence Alliance.

Fetch.ai, Ocean Protocol and SingularityNET aim to create a decentralized alternative to existing AI projects controlled by Big Tech, the companies said in a Wednesday (March 27) press release.

The proposed alliance is subject to approval from the three entities' respective communities, per the release.

As part of this alliance, the tokens that fuel the members' networks ($FET, $OCEAN and $AGIX) will be merged into a single $ASI token that will function across the combined decentralized network created by this partnership, according to the release.

The combined value of the three tokens is $7.6 billion as of Tuesday (March 26), per the release.
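The release does not spell out the conversion mechanics, but token mergers of this kind typically fix an exchange ratio per legacy token. The sketch below illustrates the arithmetic only; the token names come from the release, while the ratios and the `convert_to_asi` helper are purely hypothetical assumptions, not the alliance's published terms.

```python
# Hypothetical sketch of how a multi-token merger could be settled.
# The exchange ratios below are illustrative assumptions, not the
# alliance's actual published conversion terms.

HYPOTHETICAL_RATIOS = {
    "FET": 1.0,     # assumed 1:1 conversion into $ASI
    "OCEAN": 0.43,  # assumed ratio, for illustration only
    "AGIX": 0.43,   # assumed ratio, for illustration only
}

def convert_to_asi(holdings: dict) -> float:
    """Convert a wallet's legacy token balances into the merged token."""
    return sum(HYPOTHETICAL_RATIOS[token] * amount
               for token, amount in holdings.items())

if __name__ == "__main__":
    wallet = {"FET": 1_000.0, "OCEAN": 500.0, "AGIX": 2_000.0}
    print(f"Merged balance: {convert_to_asi(wallet):,.2f} $ASI")
```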

"The creation of the largest open-sourced, decentralized network through a multi-billion token merger is a major step that accelerates the race to artificial general intelligence (AGI)," the release said.

The Artificial Superintelligence Alliance also brings together SingularityNET's decentralized AI network, Fetch.ai's Web3 platform and Ocean Protocol's decentralized data exchange platform, according to the release.

The deal provides "an unparalleled opportunity for these three influential leaders to create a powerful, compelling alternative to Big Tech's control over AI development, use and monetization," the release said.

Leveraging blockchain technology, it will turn AI systems into open networks for coordinating machine intelligence, rather than hiding their inner workings from the public, according to the release.

The alliance will also facilitate the commercialization of the technology and enable greater access to AI platforms and large databases, "advancing the path to AGI on the blockchain," the release said.

In another recent development in this space, Stability AI announced Friday (March 22) that its founder Emad Mostaque has resigned as CEO and stepped down from the company's board to pursue decentralized AI.

"We should have more transparent & distributed governance in AI as it becomes more and more important," Mostaque said when announcing his move. "It's a hard problem, but I think we can fix it ... The concentration of power in AI is bad for us all. I decided to step down to fix this at Stability & elsewhere."

More here:

Fetch.ai, Ocean Protocol and SingularityNET to Partner on Decentralized AI - PYMNTS.com

Posted in Artificial General Intelligence

Scientists create AI models that can talk to each other and pass on skills with limited human input – Livescience.com

Posted: at 2:48 am

The next evolution in artificial intelligence (AI) could lie in agents that can communicate directly and teach each other to perform tasks, research shows.

Scientists have modeled an AI network capable of learning and carrying out tasks solely on the basis of written instructions. This AI then described what it learned to a sister AI, which performed the same task despite having no prior training or experience in doing it.

The first AI communicated to its sister using natural language processing (NLP), the scientists said in their paper published March 18 in the journal Nature Neuroscience.

NLP is a subfield of AI that seeks to recreate human language in computers, so machines can understand and reproduce written text or speech naturally. NLP models are built on neural networks, which are collections of machine learning algorithms arranged to replicate the connections between neurons in the brain.

"Once these tasks had been learned, the network was able to describe them to a second network, a copy of the first, so that it could reproduce them. To our knowledge, this is the first time that two AIs have been able to talk to each other in a purely linguistic way," said lead author of the paper Alexandre Pouget, leader of the Geneva University Neurocenter, in a statement.

The scientists achieved this transfer of knowledge by starting with an NLP model called "S-Bert," which was pre-trained to understand human language. They connected S-Bert to a smaller neural network centered around interpreting sensory inputs and simulating motor actions in response.
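The paper's actual architecture is more involved, but the basic coupling can be sketched as follows: a frozen, pre-trained sentence encoder turns the written instruction into a fixed vector, which conditions a small recurrent network that maps sensory input sequences to motor outputs. In this minimal sketch, the encoder checkpoint (`all-MiniLM-L6-v2` standing in for S-Bert), the layer sizes and the task dimensions are all illustrative assumptions, not the authors' configuration.

```python
# Minimal sketch of an instruction-conditioned sensorimotor RNN.
# All dimensions and the encoder checkpoint are assumptions.
import torch
import torch.nn as nn
from sentence_transformers import SentenceTransformer

class InstructedSensorimotorRNN(nn.Module):
    def __init__(self, embed_dim=384, sensory_dim=32, hidden_dim=128, motor_dim=8):
        super().__init__()
        # Project the instruction embedding into the RNN's initial hidden state.
        self.instr_to_hidden = nn.Linear(embed_dim, hidden_dim)
        # Recurrent core that integrates the sensory input sequence.
        self.rnn = nn.GRU(sensory_dim, hidden_dim, batch_first=True)
        # Read out a motor command at each timestep.
        self.motor_head = nn.Linear(hidden_dim, motor_dim)

    def forward(self, instruction_embedding, sensory_seq):
        # instruction_embedding: (batch, embed_dim); sensory_seq: (batch, T, sensory_dim)
        h0 = torch.tanh(self.instr_to_hidden(instruction_embedding)).unsqueeze(0)
        states, _ = self.rnn(sensory_seq, h0)
        return self.motor_head(states)  # (batch, T, motor_dim)

# A frozen, pre-trained language model supplies the instruction embedding.
encoder = SentenceTransformer("all-MiniLM-L6-v2")
instruction = "Respond when the light on the left is brighter than the right."
embedding = torch.tensor(encoder.encode(instruction)).unsqueeze(0)  # (1, 384)

model = InstructedSensorimotorRNN()
sensory_input = torch.randn(1, 20, 32)  # a 20-step toy stimulus sequence
motor_output = model(embedding, sensory_input)
print(motor_output.shape)  # torch.Size([1, 20, 8])
```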



This composite AI, a "sensorimotor recurrent neural network" (RNN), was then trained on a set of 50 psychophysical tasks. These centered on responding to a stimulus, like reacting to a light, through instructions fed via the S-Bert language model.

Through the embedded language model, the RNN understood full written sentences. This let it perform tasks from natural language instructions, getting them 83% correct on average, despite having never seen any training footage or performed the tasks before.

That understanding was then inverted, so the RNN could communicate the results of its sensorimotor learning via linguistic instructions to an identical sibling AI, which in turn carried out the tasks, also having never performed them before.
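In the paper this transfer runs through actual sentences; the toy below compresses it to the embedding level to show the mechanism: a "production" readout maps what network A learned back into instruction space, and a weight-identical copy consumes that output. Everything here (the `Agent` class, the dimensions, the linear production head) is an illustrative assumption, not the authors' architecture.

```python
# Toy sketch of the "production" direction: after performing a task, network A
# emits an instruction-like embedding that a weight-identical copy, network B,
# consumes to perform the task without its own training. All components here
# are illustrative assumptions.
import copy
import torch
import torch.nn as nn

class Agent(nn.Module):
    def __init__(self, embed_dim=64, sensory_dim=16, hidden_dim=64, motor_dim=4):
        super().__init__()
        self.instr_to_hidden = nn.Linear(embed_dim, hidden_dim)
        self.rnn = nn.GRU(sensory_dim, hidden_dim, batch_first=True)
        self.motor_head = nn.Linear(hidden_dim, motor_dim)
        # Production head: maps the final hidden state back into the
        # instruction-embedding space, so learning can be "described".
        self.production_head = nn.Linear(hidden_dim, embed_dim)

    def act(self, instruction_embedding, sensory_seq):
        h0 = torch.tanh(self.instr_to_hidden(instruction_embedding)).unsqueeze(0)
        states, h_final = self.rnn(sensory_seq, h0)
        return self.motor_head(states), h_final.squeeze(0)

    def describe(self, h_final):
        return self.production_head(h_final)

agent_a = Agent()
agent_b = copy.deepcopy(agent_a)  # identical sibling, no further training

instruction = torch.randn(1, 64)   # stand-in for a sentence embedding
stimulus = torch.randn(1, 10, 16)  # toy sensory sequence

_, h_final = agent_a.act(instruction, stimulus)
described = agent_a.describe(h_final)            # A's account of the task
actions_b, _ = agent_b.act(described, stimulus)  # B acts on A's description
print(actions_b.shape)  # torch.Size([1, 10, 4])
```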

The inspiration for this research came from the way humans learn by following verbal or written instructions to perform tasks, even if we've never performed such actions before. This cognitive function separates humans from animals; for example, you need to show a dog something before you can train it to respond to verbal instructions.

While AI-powered chatbots can interpret linguistic instructions to generate an image or text, they can't translate written or verbal instructions into physical actions, let alone explain the instructions to another AI.

However, by simulating the areas of the human brain responsible for language perception, interpretation and instruction-based actions, the researchers created an AI with human-like learning and communication skills.

This alone won't lead to the rise of artificial general intelligence (AGI), where an AI agent can reason as well as a human and perform tasks in multiple areas. But the researchers noted that AI models like the one they created can help advance our understanding of how human brains work.

There's also scope for robots with embedded AI to communicate with each other to learn and carry out tasks. If only one robot received the initial instructions, it could pass them on to others, which could be highly effective in manufacturing and other automated industries.

"The network we have developed is very small," the researchers explained in the statement. "Nothing now stands in the way of developing, on this basis, much more complex networks that would be integrated into humanoid robots capable of understanding us but also of understanding each other."

Continued here:

Scientists create AI models that can talk to each other and pass on skills with limited human input - Livescience.com

Posted in Artificial General Intelligence
