
Category Archives: Singularity

IBM Is Planning to Build Its First Fault-Tolerant Quantum Computer by 2029 – Singularity Hub

Posted: December 12, 2023 at 12:47 am

This week, IBM announced a pair of shiny new quantum computers.

The company's Condor processor is the first quantum chip of its kind with over 1,000 qubits, a feat that would have made big headlines just a few years ago. But earlier this year, a startup, Atom Computing, unveiled a 1,180-qubit quantum computer using a different approach. And although IBM says Condor demonstrates it can reliably produce high-quality qubits at scale, it'll likely be the largest single chip the company makes until sometime next decade.

Instead of growing the number of qubits crammed onto each chip, IBM will focus on getting the most out of the qubits it has. In this respect, the second chip announced, Heron, is the future.

Though Heron has fewer qubits than Condor (just 133), it's significantly faster and less error-prone. The company plans to combine several of these smaller chips into increasingly powerful systems, a bit like the multicore processors powering smartphones. The first of these, System Two, also announced this week, contains three linked Heron chips.

IBM also updated its quantum roadmap, a timeline of key engineering milestones, through 2033. Notably, the company is aiming to complete a fault-tolerant quantum computer by 2029. The machine won't be large enough to run complex quantum algorithms, like the one expected to one day break standard encryption. Still, it's a bold promise.

Practical quantum computers will be able to tackle problems that can't be solved using classical computers. But today's systems are far too small and error-ridden to realize that dream. To get there, engineers are working on a solution called error correction.

A qubit is the fundamental unit of a quantum computer. In your laptop, the basic unit of information is a 1 or 0 represented by a transistor that's either on or off. In a quantum computer, the unit of information is 1, 0, or, thanks to quantum weirdness, some combination of the two. The physical component can be an atom, electron, or tiny superconducting loop of wire.
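As a rough, generic illustration (not specific to IBM's transmons), that "1, 0, or some combination" can be modeled as two complex amplitudes whose squared magnitudes give the measurement probabilities; a minimal Python sketch:

```python
import numpy as np

# A qubit state is a pair of complex amplitudes (alpha, beta)
# with |alpha|^2 + |beta|^2 = 1. Measuring yields 0 with
# probability |alpha|^2 and 1 with probability |beta|^2.
state = np.array([1.0, 1.0], dtype=complex)
state = state / np.linalg.norm(state)        # equal superposition of 0 and 1

p0, p1 = np.abs(state) ** 2
print(f"P(0) = {p0:.2f}, P(1) = {p1:.2f}")   # 0.50 each

samples = np.random.choice([0, 1], size=10, p=[p0, p1])
print(samples)                               # random 0s and 1s, roughly half each
```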

Opting for the latter, IBM makes its quantum computers by cooling loops of wire, or transmons, to temperatures near absolute zero and placing them into quantum states. Here's the problem. Qubits are incredibly fragile, easily falling out of these quantum states throughout a calculation. This introduces errors that make today's machines unreliable.

One way to solve this problem is to minimize errors. IBM's made progress here. Heron uses new hardware to significantly speed up how quickly the system places pairs of qubits into quantum states (an operation known as a gate), limiting the number of errors that crop up and spread to neighboring qubits (researchers call this crosstalk).

"It's a beautiful device," IBM's Jay Gambetta told Ars Technica. "It's five times better than the previous devices, the errors are way less, [and] crosstalk can't really be measured."

But you can't totally eliminate errors. In the future, redundancy will also be key.

By spreading information between a group of qubits, you can reduce the impact of any one error and also check for and correct errors in the group. Because it takes multiple physical qubits to form one of these error-corrected logical qubits, you need an awful lot of them to complete useful calculations. This is why scale matters.
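To get a feel for why scale matters, here is a back-of-the-envelope sketch assuming a surface-code-style scheme in which one logical qubit costs roughly 2d² physical qubits at code distance d; the distances and counts below are illustrative assumptions, not IBM's figures:

```python
def physical_qubits(logical_qubits: int, code_distance: int) -> int:
    """Rough surface-code-style overhead: ~2*d^2 physical qubits per logical qubit."""
    per_logical = 2 * code_distance ** 2
    return logical_qubits * per_logical

# Illustrative only: even a modest algorithm needing 100 logical qubits
# implies tens to hundreds of thousands of physical qubits.
for d in (7, 15, 25):
    print(d, physical_qubits(100, d))
# d=7  ->   9,800 physical qubits
# d=15 ->  45,000 physical qubits
# d=25 -> 125,000 physical qubits
```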

Software can also help. IBM is already employing a technique called error mitigation, announced earlier this year, in which it simulates likely errors and subtracts them from calculations. They've also identified a method of error correction that reduces the number of physical qubits in a logical qubit by nearly an order of magnitude. But all this will require advanced forms of connectivity between qubits, which could be the biggest challenge ahead.
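IBM's actual mitigation pipeline isn't described in detail here, but the core idea of estimating the error and removing it from the result can be sketched with a toy depolarizing-decay model; the noise model, rates, and circuit depth below are all assumptions for illustration, not IBM's method:

```python
import random

def noisy_expectation(ideal: float, error_per_gate: float, n_gates: int, shots: int = 20000) -> float:
    """Toy model: each gate keeps the signal with probability (1 - p); otherwise the
    signal averages to zero, so the measured value decays geometrically with depth."""
    survival = (1 - error_per_gate) ** n_gates
    # add a little shot noise so the 'measurement' isn't exact
    return ideal * survival + random.gauss(0, 1 / shots ** 0.5)

ideal, p, depth = 0.8, 0.002, 300
measured = noisy_expectation(ideal, p, depth)

# Mitigation: use the known/simulated noise model to undo the estimated decay.
mitigated = measured / (1 - p) ** depth

print(f"measured  ~ {measured:.3f}")   # biased low, around 0.44
print(f"mitigated ~ {mitigated:.3f}")  # close to the ideal 0.8
```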

"You're going to have to tie them together," Dario Gil, senior vice president and director of research at IBM, told Reuters. "You're going to have to do many of these things together to be practical about it. Because if not, it's just a paper exercise."

Something that makes IBM unique in the industry is that it publishes a roadmap looking a decade into the future.

This may seem risky, but to date, they've stuck to it. Alongside the Condor and Heron news, IBM also posted an updated version of its roadmap.

Next year, they'll release an upgraded version of Heron capable of 5,000 gate operations. After Heron comes Flamingo. They'll link seven of these Flamingo chips into a single system with over 1,000 qubits. They also plan to grow Flamingo's gate count by roughly 50 percent a year until it hits 15,000 in 2028. In parallel, the company will work on error correction, beginning with memory, then moving on to communication and gates.
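Treating "roughly 50 percent a year" literally (our assumption, not IBM's published schedule), a quick compounding check shows the 15,000-gate target falls within a few years of the 5,000-gate Heron upgrade:

```python
gates = 5_000            # assumed starting point: upgraded Heron, 2024
year = 2024
while gates < 15_000:
    year += 1
    gates *= 1.5         # "roughly 50 percent a year"
    print(year, round(gates))
# 2025 7500
# 2026 11250
# 2027 16875  -> crosses 15,000 around 2027-2028, consistent with the roadmap
```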

All this will culminate in a 200-qubit, fault-tolerant chip called Starling in 2029 and a leap in gate operations to 100 million. Starling will give way to the bigger Blue Jay in 2033.

Though it may be the most open about them, IBM isn't alone in its ambitions.

Google is pursuing the same type of quantum computer and has been focused on error correction over scaling for a few years. Then there are other kinds of quantum computers entirely: some use charged ions as qubits, while others use photons, electrons, or, like Atom Computing, neutral atoms. Each approach has its tradeoffs.

"When it comes down to it, there's a simple set of metrics for you to compare the performance of the quantum processors," Jerry Chow, director of quantum systems at IBM, told The Verge. "It's scale: what number of qubits can you get to and build reliably? Quality: how long do those qubits live for you to perform operations and calculations on? And speed: how quickly can you actually run executions and problems through these quantum processors?"

Atom Computing favors neutral atoms because they're identical (eliminating the possibility of manufacturing flaws), can be controlled wirelessly, and operate at room temperature. Chow agrees there are interesting things happening in the neutral-atom space, but speed is a drawback. "It comes down to that speed," he said. "Anytime you have these actual atomic items, either an ion or an atom, your clock rates end up hurting you."

The truth is the race isn't yet won, and won't be for a while yet. New advances or unforeseen challenges could rework the landscape. But Chow said the company's confidence in its approach is what allows it to look ahead 10 years.

"And to me it's more that there are going to be innovations within that are going to continue to compound over those 10 years, that might make it even more attractive as time goes on. And that's just the nature of technology," he said.

Image Credit: IBM

Originally posted here:

IBM Is Planning to Build Its First Fault-Tolerant Quantum Computer by 2029 - Singularity Hub

Posted in Singularity | Comments Off on IBM Is Planning to Build Its First Fault-Tolerant Quantum Computer by 2029 – Singularity Hub

22 Laws of Singularity And How You Can Apply Them To Live A Better Life – Medium

Posted: at 12:47 am


Superintelligence and the Singularity represent concepts at the forefront of discussions surrounding the future of technology and humanity.

Superintelligence refers to the hypothetical state where artificial intelligence surpasses human intelligence, while the Singularity is the point in time when rapid technological advancements lead to profound and unpredictable changes in society. Understanding the implications of these concepts is critical as they have the potential to reshape our world in unprecedented ways.

Numerous experts and researchers have made predictions about the development of superintelligence. Ray Kurzweil, a renowned futurist and inventor, predicts that by 2045, artificial intelligence will surpass human intelligence, leading to a technological Singularity. Additionally, researchers at the University of Oxford estimated a 50% chance of human-level AI being achieved within the next 45 years based on a survey of experts in the field.

Advancements in artificial intelligence technologies provide evidence for the exponential growth and potential for superintelligence. AI systems have already surpassed human performance in specific domains such as chess and complex data analysis. Furthermore, the significant progress in areas like machine learning, natural language processing, and robotics points towards the trajectory of achieving superior machine intelligence.

The exact timeline towards superintelligence and the Singularity remains uncertain. Some experts believe it could occur within the next few decades, while others suggest a more prolonged timeline. Yet, it is critical to acknowledge the significance of this topic. The implications of superintelligence and the Singularity extend beyond technological advancements, touching upon societal, economic, ethical, and existential considerations.

The need for caution arises from the potential risks associated with rapid technological acceleration. Superintelligence, if not approached with careful oversight and ethical considerations, could lead to unintended consequences and unforeseen outcomes. Concerns include job displacement, ethical dilemmas, loss of privacy, distribution of power, and existential threats.

More:

22 Laws of Singularity And How You Can Apply Them To Live A Better Life - Medium

Posted in Singularity | Comments Off on 22 Laws of Singularity And How You Can Apply Them To Live A Better Life – Medium

Singularity: Here’s When Humanity Will Reach It, New Data Shows

Posted: March 31, 2023 at 1:47 am

In the world of artificial intelligence, the idea of singularity looms large. This slippery concept describes the moment AI advances beyond human control and rapidly transforms society. The tricky thing about AI singularity (and why it borrows terminology from black hole physics) is that it's enormously difficult to predict where it begins and nearly impossible to know what's beyond this technological event horizon.

However, some AI researchers are on the hunt for signs of reaching singularity, measured by AI progress approaching skills and abilities comparable to a human's. One such metric, defined by Translated, a Rome-based translation company, is an AI's ability to translate speech with the accuracy of a human. Language is one of the most difficult AI challenges, but a computer that could close that gap could theoretically show signs of Artificial General Intelligence (AGI).

"That's because language is the most natural thing for humans," Translated CEO Marco Trombetti said at a conference in Orlando, Florida, in December. Nonetheless, the data Translated collected clearly shows that machines are not that far from closing the gap.


The company tracked its AI's performance from 2014 to 2022 using a metric called Time to Edit, or TTE, which calculates the time it takes for professional human editors to fix AI-generated translations compared to human ones. Over that 8-year period, analyzing over 2 billion post-edits, Translated's AI showed a slow but undeniable improvement as it closed the gap toward human-level translation quality.

Image Credit: Translated

On average, it takes a human translator roughly one second to edit each word of another human translator, according to Translated. In 2015, it took professional editors approximately 3.5 seconds per word to check a machine-translated (MT) suggestion; today, that number is just 2 seconds. If the trend continues, Translated's AI will be as good as human-produced translation by the end of the decade (or even sooner).
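A crude linear extrapolation through those two figures (the 2015 and present-day endpoints below are assumed for illustration) lands human parity, at roughly one second per word, in the second half of this decade, consistent with the "end of the decade" estimate:

```python
# Two (year, seconds-per-word) points from the article; the 2022 endpoint is assumed.
y0, tte0 = 2015, 3.5
y1, tte1 = 2022, 2.0
human_baseline = 1.0      # ~1 second per word to edit a human translation

slope = (tte1 - tte0) / (y1 - y0)                    # about -0.21 s/word per year
years_to_parity = (human_baseline - tte1) / slope
print(f"slope  = {slope:.2f} s/word/year")
print(f"parity around {y1 + years_to_parity:.0f}")   # roughly 2027, i.e. before 2030
```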

"The change is so small that every single day you don't perceive it, but when you see progress across 10 years, that is impressive," Trombetti said on a podcast in December. "This is the first time ever that someone in the field of artificial intelligence did a prediction of the speed to singularity."

Although this is a novel approach to quantifying how close humanity is to approaching singularity, this definition of singularity runs into problems similar to those of identifying AGI more broadly. Although perfecting human speech is certainly a frontier in AI research, the impressive skill doesn't necessarily make a machine intelligent (not to mention how many researchers don't even agree on what intelligence is).

Whether these hyper-accurate translators are harbingers of our technological doom or not, that doesn't lessen Translated's AI accomplishment. An AI capable of translating speech as well as a human could very well change society, even if the true technological singularity remains ever elusive.


Read the original here:

Singularity: Here's When Humanity Will Reach It, New Data Shows

Posted in Singularity | Comments Off on Singularity: Here’s When Humanity Will Reach It, New Data Shows

SentinelOne Expands Singularity Marketplace with New SOAR, Insider Threat, Training, and Prioritization Integrations – ZAWYA

Posted: at 1:47 am

SentinelOne Expands Singularity Marketplace with New SOAR, Insider Threat, Training, and Prioritization Integrations  ZAWYA

The rest is here:

SentinelOne Expands Singularity Marketplace with New SOAR, Insider Threat, Training, and Prioritization Integrations - ZAWYA

Posted in Singularity | Comments Off on SentinelOne Expands Singularity Marketplace with New SOAR, Insider Threat, Training, and Prioritization Integrations – ZAWYA

Reaching the Singularity May Be Humanity's Greatest and Last …

Posted: March 4, 2023 at 12:56 am

Biological? Post-biological? Something in between? What is humanity's future?

In a new paper published in The International Journal of Astrobiology, Joseph Gale from The Hebrew University of Jerusalem and co-authors make the point that recent advances in artificial intelligence (AI), particularly in pattern recognition and self-learning, will likely result in a paradigm shift in the search for extraterrestrial intelligent life.

While futurist Ray Kurzweil predicted 15 years ago that the singularity (the time when the abilities of a computer overtake the abilities of the human brain) will occur in about 2045, Gale and his co-authors believe this event may be much more imminent, especially with the advent of quantum computing. It's already been four years since the program AlphaGo, fortified with neural networks and learning modes, defeated Lee Sedol, the Go world champion. The strategy game StarCraft II may be the next to have a machine as reigning champion.

If we look at the calculating capacity of computers and compare it to the number of neurons in the human brain, the singularity could be reached as soon as the early 2020s. However, a human brain is wired differently than a computer, and that may be the reason why certain tasks that are simple for us are still quite challenging for today's AI. Also, the size of the brain or the number of neurons doesn't equate to intelligence. For example, whales and elephants have more than double the number of neurons in their brains, but are not more intelligent than humans.

The authors don't know when the singularity will come, but come it will. When this occurs, "the end of the human race might very well be upon us," they say, citing a 2014 prediction by the late Stephen Hawking. According to Kurzweil, humans may then be fully replaced by AI, or by some hybrid of humans and machines.

What will this mean for astrobiology? Not much, if we're searching only for microbial extraterrestrial life. But it might have a drastic impact on the search for extraterrestrial intelligent life (SETI). If other civilizations are similar to ours but older, we would expect that they have already moved beyond the singularity. So they wouldn't necessarily be located on a planet in the so-called habitable zone. As the authors point out, such civilizations might prefer locations with little electronic noise in a dry and cold environment, perhaps in space, where they could use superconductivity for computing and quantum entanglement as a means of communication.

We are just beginning to understand quantum entanglement, and it is not yet clear whether it can be used to transfer information. If it can, however, that might explain the apparent lack of evidence for extraterrestrial intelligent civilizations. Why would they use primitive radio waves to send messages?

I think it also is still unclear whether there is something special enough about the human brain's ability to process information that casts doubt on whether AI can surpass our abilities in all relevant areas, especially in achieving consciousness. Might there be something unique to biological brains after millions and millions of years of evolution that computers cannot achieve? If not, the authors are correct that reaching the singularity could be humanity's greatest and last advance.


Go here to read the rest:

Reaching the Singularity May Be Humanity's Greatest and Last ...

Posted in Singularity | Comments Off on Reaching the Singularity May Be Humanity's Greatest and Last …

Singularity: Explain It to Me Like I’m 5-Years-Old – Futurism

Posted: at 12:56 am

Supercomputers to Superintelligence

Here's an experiment that fits all ages: approach your mother and father (if they're asleep, use caution). Ask them gently about the time before you were born, and whether they dared think back then that one day everybody would post and share their images on a social network called Facebook. Or that they would receive answers to every question from a mysterious entity called Google. Or enjoy the services of a digital adviser called Waze that guides you everywhere on the road. If they say they figured all of the above would happen, kindly refer those people to me. We're always in need of good futurists.

The truth is that very few thought, in those olden days of yore, that technologies like supercomputers, wireless networks, or artificial intelligence would make their way to the general public. Even those who figured these technologies would become cheaper and more widespread failed to imagine the uses they would be put to and how they would change society. And here we are today, when you're posting your naked pictures on Facebook. Thanks again, technology.

History is full of cases in which a new and groundbreaking technology, or a collection of such technologies, completely changes people's lives. The change is often so dramatic that people who've lived before the technological leap have a very hard time understanding how the subsequent generations think. To the people before the change, the new generation may as well be aliens in their way of thinking and seeing the world.

These kinds of dramatic shifts in thinking are called a Singularity, a term originally derived from mathematics that describes a point whose exact properties we are incapable of deciphering. It's that place where the equations basically go nuts and make no sense any longer.

The singularity has risen to fame in the last two decades largely because of two thinkers. The first is the scientist and science fiction writer Vernor Vinge, who wrote in 1993 that

Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended.

The other prominent prophet of the Singularity is Ray Kurzweil. In his book The Singularity is Near, Kurzweil basically agrees with Vinge but believes the latter was too optimistic in his view of technological progress. Kurzweil believes that by the year 2045 we will experience the greatest technological singularity in the history of mankind: the kind that could, in just a few years, overturn the institutions and pillars of society and completely change the way we view ourselves as human beings. Just like Vinge, Kurzweil believes that we'll get to the Singularity by creating a super-human artificial intelligence (AI). An AI of that level could conceive of ideas that no human being has thought about in the past, and will invent technological tools that will be more sophisticated and advanced than anything we have today.

Since one of the roles of this AI would be to improve itself and perform better, it seems pretty obvious that once we have a super-intelligent AI, it will be able to create a better version of itself. And guess what the new generation of AI would then do? That's right: improve itself even further. This kind of race would lead to an intelligence explosion and leave poor old us, simple biological machines that we are, far behind.

If this notion scares you, you're in good company. A few of the most widely regarded scientists, thinkers, and inventors, like Stephen Hawking and Elon Musk, have already expressed their concerns that super-intelligent AI could escape our control and move against us. Others focus on the great opportunities that such a singularity holds for us. They believe that a super-intelligent AI, if kept on a tight leash, could analyze and expose many of the wonders of the world for us. Einstein, after all, was a remarkable genius who revolutionized our understanding of physics. Well, how would the world change if we enjoyed tens, hundreds, or millions of Einsteins that could analyze every problem and find a solution for it?

Similarly, how would things look if each of us could enjoy our very own Doctor House that constantly analyzed our medical state and provided ongoing recommendations? And what new ideas and revelations would those super-intelligences come up with when they go over humanity's history and holy books?

Already we see how AI is starting to change the ways in which we think about ourselves. The computer Deep Blue managed to beat Garry Kasparov in chess in 1997. Today, after nearly twenty years of further development, human chess masters can no longer beat even an AI running on a laptop computer on their own. But after his defeat, Kasparov created a new kind of chess contest: one in which human and computerized players collaborate and together reach greater successes and accomplishments than each would have on their own. In this sort of collaboration, the computer provides rapid computations of possible moves and suggests several to the human player. Its human compatriot needs to pick the best option, understand their opponents, and throw them off balance.

Together, the two create a centaur: a mythical creature that combines the best traits of two different species. We see, then, that AI has already forced chess players to reconsider their humanity and their game.

In the next few decades we can expect a similar singularity to occur in many other games, professions, and fields that were previously reserved for human beings only. Some humans will struggle against the AI. Others will ignore it. Both these approaches will prove disastrous, since when AI becomes more capable than human beings, both the strugglers and the ignorant will be left behind. Others will realize that the only way to success lies in collaboration with the computers. They will help computers learn and will direct their growth and learning. Those people will be the centaurs of the future. And this realization, that man can no longer rely only on himself and his brain but instead must collaborate and unite with sophisticated computers to beat tomorrow's challenges, well, isn't that a singularity all by itself?

See the article here:

Singularity: Explain It to Me Like I'm 5-Years-Old - Futurism

Posted in Singularity | Comments Off on Singularity: Explain It to Me Like I’m 5-Years-Old – Futurism

SINGULARITY FUTURE TECHNOLOGY LTD. : Notice of Delisting or Failure to Satisfy a Continued Listing Rule or Standard; Transfer of Listing, Change in…

Posted: at 12:56 am

SINGULARITY FUTURE TECHNOLOGY LTD. : Notice of Delisting or Failure to Satisfy a Continued Listing Rule or Standard; Transfer of Listing, Change in Directors or Principal Officers, Other Events, Financial Statements and Exhibits (form 8-K)  Marketscreener.com

Go here to read the rest:

SINGULARITY FUTURE TECHNOLOGY LTD. : Notice of Delisting or Failure to Satisfy a Continued Listing Rule or Standard; Transfer of Listing, Change in...

Posted in Singularity | Comments Off on SINGULARITY FUTURE TECHNOLOGY LTD. : Notice of Delisting or Failure to Satisfy a Continued Listing Rule or Standard; Transfer of Listing, Change in…

Apple co-founder Steve Wozniak on Artificial Intelligence: Not worried about The Singularity, we'll still be in control – MacDailyNews

Posted: February 12, 2023 at 2:35 am

Apple co-founder Steve Wozniak on Artificial Intelligence: Not worried about The Singularity, we'll still be in control  MacDailyNews

Visit link:

Apple co-founder Steve Wozniak on Artificial Intelligence: Not worried about The Singularity, we'll still be in control - MacDailyNews

Posted in Singularity | Comments Off on Apple co-founder Steve Wozniak on Artificial Intelligence: Not worried about The Singularity, we'll still be in control – MacDailyNews

Cauchy principal value – Wikipedia

Posted: January 4, 2023 at 6:13 am

Method for assigning values to certain improper integrals which would otherwise be undefined

In mathematics, the Cauchy principal value, named after Augustin Louis Cauchy, is a method for assigning values to certain improper integrals which would otherwise be undefined.

Depending on the type of singularity in the integrand f, the Cauchy principal value is defined according to the following rules:
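In the standard formulation, those rules read roughly as follows: for a singularity at a finite point $b$ (with $a < b < c$), and for a singularity at infinity,

$$\operatorname{p.\!v.}\int_{a}^{c} f(x)\,\mathrm{d}x \;=\; \lim_{\varepsilon\to 0^{+}}\left[\int_{a}^{b-\varepsilon} f(x)\,\mathrm{d}x \;+\; \int_{b+\varepsilon}^{c} f(x)\,\mathrm{d}x\right],$$

$$\operatorname{p.\!v.}\int_{-\infty}^{\infty} f(x)\,\mathrm{d}x \;=\; \lim_{a\to\infty}\int_{-a}^{a} f(x)\,\mathrm{d}x.$$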

In some cases it is necessary to deal simultaneously with singularities both at a finite number b and at infinity. This is usually done by a limit of the form
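Such a combined limit typically couples the two cutoffs through a single parameter, for example:

$$\lim_{\eta\to 0^{+}}\left[\int_{b-\frac{1}{\eta}}^{\,b-\eta} f(x)\,\mathrm{d}x \;+\; \int_{b+\eta}^{\,b+\frac{1}{\eta}} f(x)\,\mathrm{d}x\right].$$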

Let $C_c^{\infty}(\mathbb{R})$ be the set of bump functions, i.e., the space of smooth functions with compact support on the real line $\mathbb{R}$. Then the map
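In the standard presentation, the map in question sends a bump function $u$ to the principal value of $\int u(x)/x\,\mathrm{d}x$,

$$u \;\mapsto\; \operatorname{p.\!v.}\!\left(\frac{1}{x}\right)(u) \;=\; \lim_{\varepsilon\to 0^{+}} \int_{|x|\geq\varepsilon} \frac{u(x)}{x}\,\mathrm{d}x,$$

and one shows that this defines a distribution on $C_c^{\infty}(\mathbb{R})$.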

To prove the existence of the limit

Therefore, $\int_{0}^{1}\frac{u(x)-u(-x)}{x}\,\mathrm{d}x$ exists, and by applying the mean value theorem to $u(x)-u(-x)$, we get:

And furthermore:

we note that the map

Note that the proof needs $u$ merely to be continuously differentiable in a neighbourhood of 0 and $x\,u$ to be bounded towards infinity. The principal value is therefore defined on even weaker assumptions, such as $u$ integrable with compact support and differentiable at 0.

The principal value is the inverse distribution of the function $x$ and is almost the only distribution with this property:
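In symbols, the standard statement is that multiplying by $x$ yields the constant function $1$, with the only ambiguity being a multiple of the Dirac delta:

$$x \cdot \operatorname{p.\!v.}\!\left(\frac{1}{x}\right) = 1, \qquad x\,T = 1 \;\Longrightarrow\; T = \operatorname{p.\!v.}\!\left(\frac{1}{x}\right) + c\,\delta \;\text{ for some constant } c.$$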

In a broader sense, the principal value can be defined for a wide class of singular integral kernels on the Euclidean space $\mathbb{R}^{n}$. If $K$ has an isolated singularity at the origin, but is an otherwise "nice" function, then the principal-value distribution is defined on compactly supported smooth functions by
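Concretely, the standard principal-value prescription for such a kernel is

$$\operatorname{p.\!v.}[K](f) \;=\; \lim_{\varepsilon\to 0^{+}} \int_{\mathbb{R}^{n}\setminus B_{\varepsilon}(0)} f(x)\,K(x)\,\mathrm{d}x,$$

where $B_{\varepsilon}(0)$ is the ball of radius $\varepsilon$ about the origin; the limit exists only when $K$ has enough cancellation (for example, mean zero over spheres centered at the origin).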

Consider the values of two limits:

$$\lim_{a\to 0^{+}}\left(\int_{-1}^{-a}\frac{\mathrm{d}x}{x}+\int_{a}^{1}\frac{\mathrm{d}x}{x}\right)=0.$$

This is the Cauchy principal value of the otherwise ill-defined expression

$$\int_{-1}^{1}\frac{\mathrm{d}x}{x}.$$

Also:

$$\lim_{a\to 0^{+}}\left(\int_{-1}^{-2a}\frac{\mathrm{d}x}{x}+\int_{a}^{1}\frac{\mathrm{d}x}{x}\right)=\ln 2.$$

Similarly, we have

$$\lim_{a\to\infty}\int_{-a}^{a}\frac{2x\,\mathrm{d}x}{x^{2}+1}=0.$$

This is the principal value of the otherwise ill-defined expression

$$\int_{-\infty}^{\infty}\frac{2x\,\mathrm{d}x}{x^{2}+1}.$$

Different authors use different notations for the Cauchy principal value of a function $f$, among others:

Continue reading here:

Cauchy principal value - Wikipedia

Posted in Singularity | Comments Off on Cauchy principal value – Wikipedia

Singularity Future Technology Ltd. (SGLY) Stockholder Notice: Robbins LLP Reminds Investors of the Class Action Against Singularity Future Technology…

Posted: December 14, 2022 at 9:31 am

  1. Singularity Future Technology Ltd. (SGLY) Stockholder Notice: Robbins LLP Reminds Investors of the Class Action Against Singularity Future Technology Ltd. f/k/a Sino-Global Shipping America Ltd.  GlobeNewswire
  2. The Law Offices of Frank R. Cruz Announces the Filing of a Securities Class Action on Behalf of Singularity Future Technology Ltd. f/k/a Sino-Global Shipping America Ltd. (SGLY) Investors  Business Wire
  3. Blockchain Supplier Singularity Hid CEO's Past Fraud, Suit Says  Bloomberg Law
  4. Singularity Securities Fraud Class Action Lawsuit Pending: Contact Levi & Korsinsky Before February 7, 20  Benzinga
  5. Lawsuit filed for Investors in shares of Singularity Future Technology Ltd. (NASDAQ: SGLY)  openPR

Read the original:

Singularity Future Technology Ltd. (SGLY) Stockholder Notice: Robbins LLP Reminds Investors of the Class Action Against Singularity Future Technology...

Posted in Singularity | Comments Off on Singularity Future Technology Ltd. (SGLY) Stockholder Notice: Robbins LLP Reminds Investors of the Class Action Against Singularity Future Technology…
