The Singularity Is Nearer: When We Merge with AI – Cool Hunting

Oyster

$395.00

Oyster, a new Norwegian company, set out to improve the performance of coolers, and the result is remarkable. They engineered new solutions to improve every aspect of a standard cooler, and the result, Tempo, leapfrogs everything else we've seen. It's a game changer, and here are just some of the ways it accomplishes that. First, they created a patented vacuum insulation called DTLA that improves the three critical components of maintaining temperature: insulation, circulation, and the thermal bridge (the connection between the top and bottom of the cooler). The results speak for themselves, improving those by 2x, 380x, and 1.4x respectively. The beautifully designed Tempo is made of aluminum and is 100% recyclable. You do not need to add ice to keep your contents cold; whatever you place in the Tempo stays colder longer than in any other cooler, whether it goes in chilled or frozen. You can fit 36 beverage cans inside. Should you need to keep the contents cold for longer, you can add Oyster's thin ice packs (sold separately), either chilled or frozen, and all of that thermal innovation will both cool things down and keep them that way. A bundle including ice packs and an aluminum handle is also available for $495.

Added: July 2024

This product is sold by Oyster

Go here to read the rest:

The Singularity Is Nearer: When We Merge with AI - Cool Hunting

This Enormous Computer Chip Beat the World's Top Supercomputer at Molecular Modeling – Singularity Hub

Computer chips are a hot commodity. Nvidia is now one of the most valuable companies in the world, and the Taiwanese manufacturer of Nvidia's chips, TSMC, has been called a geopolitical force. It should come as no surprise, then, that a growing number of hardware startups and established companies are looking to take a jewel or two from the crown.

Of these, Cerebras is one of the weirdest. The company makes computer chips the size of tortillas bristling with just under a million processors, each linked to its own local memory. The processors are small but lightning quick, as they don't shuttle information to and from shared memory located far away. And the connections between processors, which in most supercomputers require linking separate chips across room-sized machines, are quick too.

This means the chips are stellar for specific tasks. Recent preprint studies in two of these (one simulating molecules, the other training and running large language models) show the wafer-scale advantage can be formidable. The chips outperformed Frontier, the world's top supercomputer, in the former. They also showed a stripped-down AI model could use a third of the usual energy without sacrificing performance.

The materials we make things with are crucial drivers of technology. They usher in new possibilities by breaking old limits in strength or heat resistance. Take fusion power. If researchers can make it work, the technology promises to be a new, clean source of energy. But liberating that energy requires materials to withstand extreme conditions.

Scientists use supercomputers to model how the metals lining fusion reactors might deal with the heat. These simulations zoom in on individual atoms and use the laws of physics to guide their motions and interactions at grand scales. Today's supercomputers can model materials containing billions or even trillions of atoms with high precision.

But while the scale and quality of these simulations have progressed a lot over the years, their speed has stalled. Due to the way supercomputers are designed, they can only model so many interactions per second, and making the machines bigger only compounds the problem. This means the total length of molecular simulations has a hard practical limit.

Cerebras partnered with Sandia, Lawrence Livermore, and Los Alamos National Laboratories to see if a wafer-scale chip could speed things up.

The team assigned a single simulated atom to each processor. So that the processors could quickly exchange information about their atoms' positions, motions, and energies, processors modeling atoms that would be physically close in the real world were neighbors on the chip too. Depending on their properties at any given time, atoms could hop between processors as they moved about.
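
To make that layout concrete, here is a minimal, hypothetical sketch (my own illustration, not code from Cerebras or the national labs) of the idea: each processing element owns one atom in local memory, and because physically neighboring atoms sit on neighboring processing elements, each timestep only needs information from one hop away. The toy physics (a one-dimensional harmonic chain) and every name in it are invented for illustration.

import numpy as np

# Toy "wafer": one atom per processing element (PE), arranged in a 1D line.
n_pe = 16
positions = np.linspace(0.0, 15.0, n_pe)   # each PE stores its own atom's position
positions[8] += 0.3                        # nudge one atom so the system evolves
velocities = np.zeros(n_pe)
k, rest, dt = 1.0, 1.0, 0.01               # spring constant, rest spacing, timestep

def step(pos, vel):
    # Each PE reads only its immediate neighbors' positions (one hop on the chip),
    # mirroring the local, neighbor-to-neighbor communication described above.
    padded = np.concatenate(([pos[0] - rest], pos, [pos[-1] + rest]))  # virtual ends
    left, right = padded[:-2], padded[2:]
    force = -k * ((pos - left - rest) + (pos - right + rest))
    vel = vel + force * dt
    pos = pos + vel * dt
    return pos, vel

for _ in range(1000):
    positions, velocities = step(positions, velocities)

print(positions[:4])  # atom positions after 1,000 toy timesteps

In the real system, of course, the physics is far richer and atoms migrate between processors as they move; the sketch only shows why keeping each atom's data next to its neighbors' data makes the per-step communication so cheap.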

The team modeled 800,000 atoms in three materials (copper, tungsten, and tantalum) that might be useful in fusion reactors. The results were pretty stunning, with simulations of tantalum yielding a 179-fold speedup over the Frontier supercomputer. That means the chip could crunch a year's worth of work on a supercomputer into a few days and significantly extend the length of simulation from microseconds to milliseconds. It was also vastly more efficient at the task.
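
A quick back-of-the-envelope check of that framing (my own arithmetic, not a figure from the paper): at a 179-fold speedup, work that would occupy Frontier for 365 days takes roughly 365 / 179 ≈ 2 days, consistent with "a few days."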

"I have been working in atomistic simulation of materials for more than 20 years. During that time, I have participated in massive improvements in both the size and accuracy of the simulations. However, despite all this, we have been unable to increase the actual simulation rate. The wall-clock time required to run simulations has barely budged in the last 15 years," Aidan Thompson of Sandia National Laboratories said in a statement. "With the Cerebras Wafer-Scale Engine, we can all of a sudden drive at hypersonic speeds."

Although the chip increases modeling speed, it can't compete on scale. The number of simulated atoms is limited to the number of processors on the chip. Next steps include assigning multiple atoms to each processor and using new wafer-scale supercomputers that link 64 Cerebras systems together. The team estimates these machines could model as many as 40 million tantalum atoms at speeds similar to those in the study.

While simulating the physical world could be a core competency for wafer-scale chips, they've always been focused on artificial intelligence. The latest AI models have grown exponentially, meaning the energy and cost of training and running them has exploded. Wafer-scale chips may be able to make AI more efficient.

In a separate study, researchers from Neural Magic and Cerebras worked to shrink the size of Meta's 7-billion-parameter Llama language model. To do this, they made what's called a sparse AI model, where many of the algorithm's parameters are set to zero. In theory, this means they can be skipped, making the algorithm smaller, faster, and more efficient. But today's leading AI chips, called graphics processing units (GPUs), read algorithms in chunks, meaning they can't skip every zeroed-out parameter.

Because memory is distributed across a wafer-scale chip, it can read every parameter and skip zeroes wherever they occur. Even so, extremely sparse models don't usually perform as well as dense models. But here, the team found a way to recover lost performance with a little extra training. Their model maintained performance even with 70 percent of the parameters zeroed out. Running on a Cerebras chip, it sipped a meager 30 percent of the energy and ran in a third of the time of the full-sized model.
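
As a minimal, hypothetical illustration of the pruning idea (my own sketch, not the Neural Magic/Cerebras method or Llama itself), the snippet below zeroes out the 70 percent of entries in a stand-in weight matrix with the smallest magnitudes; on hardware that can genuinely skip zeros, only the surviving 30 percent of multiplications would need to run.

import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=(512, 512))      # stand-in for one layer's weight matrix

sparsity = 0.70                            # fraction of weights to zero out
threshold = np.quantile(np.abs(weights), sparsity)
pruned = np.where(np.abs(weights) < threshold, 0.0, weights)

kept_fraction = np.count_nonzero(pruned) / pruned.size
print(f"fraction of weights kept: {kept_fraction:.2f}")   # roughly 0.30

x = rng.normal(size=512)
dense_out = weights @ x                    # every multiply is performed
sparse_out = pruned @ x                    # the zeroed entries could be skipped on suitable hardware

(In practice, as the article notes, the pruned model also needs some extra training to recover accuracy; the point of the sketch is only how sparsity reduces the arithmetic that actually has to happen.)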

While all this is impressive, Cerebras is still niche. Nvidia's more conventional chips remain firmly in control of the market. At least for now, that appears unlikely to change. Companies have invested heavily in expertise and infrastructure built around Nvidia.

But wafer-scale may continue to prove itself in niche, but still crucial, applications in research. And the approach may become more common overall. The ability to make wafer-scale chips is only now being perfected. In a hint at what's to come for the field as a whole, the biggest chipmaker in the world, TSMC, recently said it's building out its wafer-scale capabilities. This could make the chips more common and capable.

For their part, the team behind the molecular modeling work says wafer-scale's influence could be more dramatic. Like GPUs before them, adding wafer-scale chips to the supercomputing mix could yield some formidable machines in the future.

"Future work will focus on extending the strong-scaling efficiency demonstrated here to facility-level deployments, potentially leading to an even greater paradigm shift in the Top500 supercomputer list than that introduced by the GPU revolution," the team wrote in their paper.

Image Credit: Cerebras

See the original post:

This Enormous Computer Chip Beat the World's Top Supercomputer at Molecular Modeling - Singularity Hub

Transhumanist author predicts artificial super-intelligence, immortality, and the Singularity by 2045 – TechSpot

Dystopian Kurzweil: As Big Tech continues frantically pushing AI development and funding, many users have become concerned about the outcome and dangers of the latest AI advancements. However, one man is more than sold on AI's ability to bring humanity to its next evolutionary level.

Raymond Kurzweil is a well-known computer scientist, author, and artificial intelligence enthusiast. Over the years, he has promoted radical concepts such as transhumanism and technological singularity, where humanity and advanced technology merge to create an evolved hybrid species. Kurzweil's latest predictions on AI and the future of tech essentially double down on twenty-year-old predictions.

In a recent interview with the Guardian, Kurzweil introduced his latest book, "The Singularity Is Nearer," a sequel to his bestselling 2005 book, "The Singularity Is Near: When Humans Transcend Biology." Kurzweil predicted that AI would reach human-level intelligence by 2029, with the merging between computers and humans (the singularity) happening in 2045. Now that AI has become the most talked-about topic, he believes his predictions still hold.

Kurzweil believes that in five years, machine learning will possess the same abilities as the most skilled humans in almost every field. A few "top humans" capable of writing Oscar-level screenplays or conceptualizing deep new philosophical insights will still be able to beat AI, but everything will change when artificial general intelligence (AGI) finally surpasses humans at everything.

Bringing large language models (LLM) to the next level simply requires more computing power. Kurzweil noted that the computing paradigm we have today is "basically perfect," and it will just get better and better over time. The author doesn't believe that quantum computing will turn the world upside down. He says there are too many ways to continue improving modern chips, such as 3D and vertically stacked designs.

Kurzweil predicts that machine-learning engineers will eventually solve the issues caused by hallucinations, uncanny AI-generated images, and other AI anomalies with more advanced algorithms trained on more data. The singularity is still happening and will arrive once people start merging their brains with the cloud. Advancements in brain-computer interfaces (BCIs) are already occurring. These BCIs, eventually comprised of nanobots "noninvasively" entering the brain through capillaries, will enable humans to possess a combination of natural and cybernetic intelligence.

Kurzweil's imaginative nature as a book author and enthusiastic transhumanist is plain to see. Science still hasn't discovered an effective way to deliver drugs directly into the brain because human physiology doesn't work the way the futurist thinks. However, he remains confident that nanobots will make humans "a millionfold" more intelligent within the next twenty years.

Kurzweil concedes that AI will radically change society and create a global automated economy. People will lose jobs but will also adapt to new employment roles and opportunities advanced tech brings. A universal basic income will also ease the pain. He expects the first tangible transformative plans will emerge in the 2030s. The inevitable Singularity will enable humans to live forever or extend our living prospects indefinitely. Technology could even resurrect the dead through AI avatars and virtual reality.

Kurzweil says people are misdirecting their worries regarding AI.

"It is not going to be us versus AI: AI is going inside ourselves," he said. "It will allow us to create new things that weren't feasible before. It'll be a pretty fantastic future."

Link:

Transhumanist author predicts artificial super-intelligence, immortality, and the Singularity by 2045 - TechSpot

SentinelOne receives top accolades for Singularity Cloud Security – ChannelLife New Zealand

SentinelOne has announced that its Singularity Cloud Security solution has been recognised as market-leading across CNAPP, CSPM, and CWPP by G2, the world's largest and most trusted software marketplace. The platform garnered over 240 awards in G2's 2024 Summer Grid Reports and is the only CNAPP product that achieved a 4.9 out of 5 rating.

G2 explains the credibility of its Grid, noting, "The Grid represents the democratic voice of real software users, rather than the subjective opinion of one analyst. Our G2 staff does not add any subjective input to the ratings, which are determined algorithmically based on data aggregated from publicly available online sources and social networks. Sellers cannot influence their ratings by spending time or money with us. Only the opinion of real users and data from public sources factor into the ratings."

SentinelOne was recognised as a leader in all three G2 Grids and received accolades for Best ROI, Best Support, Easiest Setup, and Easiest to Use. The comprehensive solution combines an agentless CNAPP for cloud risk prioritisation with agent-based workload protection and malware protection for cloud storage to provide visibility and mitigation capabilities in a single platform. This integration enables security teams to detect and respond to threats with machine-speed intelligence and ensures thorough coverage and deep insight into cloud environments.

Real user reviews substantiate the solution's efficacy. An Engineering Leader at SBI General Insurance shared, "One of the main reasons I use SentinelOne is the ability to provide us with deep visibility into our cloud environment. SentinelOne displays all your cloud environment's components in one console and gives details on how they affect cloud security."

Another G2 reviewer, Prahsant Singh, stated, "With SentinelOne, we can feel confident that our entire cloud infrastructure is being scanned around the clock for any potential threats. The all-in-one CNAPP cloud security allows us to identify issues quickly and provide real-time alerts that integrate seamlessly with our existing alerting tools like JIRA, Slack, PagerDuty, and email."

G2 is a software marketplace used by more than 90 million people annually. SentinelOne is an autonomous AI-powered cybersecurity platform. Built on the first unified Data Lake, SentinelOne creates intelligent, data-driven systems that think for themselves, stay ahead of complexity and risk, and evolve on their own.

Read the rest here:

SentinelOne receives top accolades for Singularity Cloud Security - ChannelLife New Zealand

Gene Drives Shown to Work in Wild Plants. They Could Wipe Out Weeds. – Singularity Hub

Henry Grabar has had enough battling knotweed. All he wanted was to build a small garden in Brooklyn, a bit of peace amid the cacophony of city life. But a plant with beet-red leaves soon took over his nascent garden. The fastest growing plant he'd ever seen, it could sprout up to 10 feet high and grow thick as a cornfield. Even with herbicide, it was nearly impossible to kill.

Invasive plant species and weeds don't just ruin backyard gardens. Weeds decrease crop yields at an average annual cost of $33 billion, and control measures can rack up $6 billion more. Herbicides are a defense, but they have their own baggage. Weeds rapidly build resistance against the chemicals, and the resulting produce can be a hard sell for many consumers.

Weeds often seem to have the upper hand. Can we take it away?

Two recent studies say yes. Using a technology called a synthetic gene drive, the teams spliced genetic snippets into a mustard plant popular in lab studies. Previously validated in fruit flies, mosquitoes, and mice, gene drives break the rules of inheritance, allowing selfish genes to rapidly spread across entire species.

But making gene drives work in plants has been a headache, in part due to the way they repair their DNA. The new studies found a clever workaround, leading to roughly 99 percent propagation of a synthetic genetic payload to subsequent generations, in contrast to nature's 50 percent. Computer models suggest the gene drives could spread throughout an entire population of the plant in roughly 10 to 30 generations.

Overriding natural evolution, gene drives could add genes that make weeds more vulnerable to herbicides or reduce their pollination and numbers. Beneficial genes can also spread across crops, essentially fast-tracking the practice of cross-breeding for desirable traits.

"Imagine a future where yield-robbing agricultural weeds or biodiversity-threatening invasive plants could be kept on a genetic leash," wrote Paul Neve at the University of Copenhagen and Luke Barrett at CSIRO Agriculture and Food in Australia, who were not involved in the study.

Inheritance is a coin toss for most species. Half of an offspring's genetic material comes from each parent.

Gene drives torpedo this inheritance rule. Developed roughly a decade ago, the technology relies on CRISPR, the gene editing tool, to spread a new gene throughout a population, beating the 50/50 odds. In insects and mammals, a gene can propagate at roughly 80 percent, shuttling an inherited trait down generations and irreversibly changing an entire species.

While this may seem somewhat nefarious, gene drives are designed for good. A main use under investigation is to control disease-carrying mosquitoes by genetically modifying males to be sterile. Upon release, they outcompete their natural counterparts, reducing wild mosquito numbers, and in turn, lowering the risk of multiple diseases. In indoor cages, gene drives have fully suppressed a population of the insects within a year. Small-scale field tests are underway.

Gene drives have caught the eyes of plant scientists too, but initial efforts in plants failed.

The technology relies on CRISPR, which cuts DNA to insert, delete, or swap out genetic letters. Sensing damage to their DNA, cells activate internal molecular repairmen to stitch genes back together and adopt gene drives and their genetic cargo.

Plants are different. Their cells also have a DNA repair mechanism, but it's only partially similar to that of insects or mice. Sticking a classic gene drive into plants can cause genetic mutations at the target site and even trigger resistance against the gene drive in a kind of cellular civil war.

As a workaround, both new studies used a system dubbed "toxin-antidote." Compared to previous gene drives, it doesn't rely on canonical DNA repair.

The teams used a self-pollinating mustard plant for their studies. A darling of plant science research, its genome is well known, and because the plant self-pollinates, it's easier to contain the experiment. To build the gene drive, they developed a CRISPR-based method to destroy a gene that's critical for survival, called the torpedo. Any pollen without the gene can't live on. A second construct, the antidote, carried a mimic of the same gene, but with modifications so that it's resistant to destruction by CRISPR.

They examined two different genetic payloads. One study tinkered with a gene that's essential to both male and female reproductive cells in plants. The other targeted a gene that disrupts pollen production.

Here's the clever part: As the plant pollinates, offspring can inherit either the toxin, the antidote, or both. Only those with the antidote survive; plants that inherit the toxin alone rapidly die out. As a result, the system worked as a gene drive, with plants carrying the CRISPR-resistant gene taking over the population. The gene drives were highly efficient, passing down through generations roughly 99 percent of the time. And scientists didn't see any signs of evolutionary adaptation, known as resistance, against the new genetic makeup.

Computer modeling showed the gene drive could overtake a single plant species in 10 to 30 generations. That's impressive, according to Neve and Barrett. Artificial genetic changes don't often stick in wild plants; the plants tend to die off. The new gene drives suggest they could potentially last longer in the field, battling invasive species or cultivating hardier and pest-resistant crops that pass down beneficial traits over generations.
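
A toy model (my own sketch with made-up parameters, not the model used in the studies) helps show why roughly 99 percent transmission spreads so much faster than Mendelian inheritance. It ignores fitness costs, self-pollination, and the toxin-antidote details, and simply tracks the fraction of carrier plants when offspring can inherit the drive from either parent.

def spread(transmission, start_freq=0.01, generations=30):
    """Fraction of plants carrying the drive in each generation (toy model)."""
    freq = start_freq
    history = [freq]
    for _ in range(generations):
        # An offspring escapes the drive only if it inherits it from neither parent.
        freq = 1.0 - (1.0 - freq * transmission) ** 2
        history.append(freq)
    return history

mendelian = spread(transmission=0.50)   # ordinary 50/50 inheritance
drive = spread(transmission=0.99)       # gene drive-like transmission

print(f"after 30 generations, Mendelian carriers: {mendelian[-1]:.2f}")
print(f"after 30 generations, gene drive carriers: {drive[-1]:.2f}")

With these toy numbers the drive saturates the population in roughly a dozen generations while the Mendelian allele barely moves, which is the qualitative behavior the modeling above describes.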

Despite their promise, gene drives remain controversial because of their potential to alter entire species. Scientists are still debating the ecological impacts. There's also the concern that gene drives may hop over to unintended targets. For now, studies have designed genetic brakes to keep gene drives in check. Most studies are done in carefully controlled lab settings, and for malaria, potential unexpected consequences are being rigorously discussed before releasing gene drive-carrying mosquitos into the wild.

Even if the science works, the road to regulatory and societal approval may face roadblocks. Selling farmers on the technology may be difficult. And CRISPRed plants as a food source could also be tainted by the negative perception of genetically modified organisms (GMOs).

For now, the teams are looking toward a more acceptable everyday use: killing weeds. There are still a few kinks to work out. Gene drives only work when they can spread, so an ideal use is in plants that pollinate others, rather than those that self-pollinate, such as those in the studies. Still, the results are a proof of concept that the powerful technology can work in plants, though it may be a while yet before it helps Henry with his knotweed problem.

Image Credit: Anthony Wade / Unsplash

Follow this link:

Gene Drives Shown to Work in Wild Plants. They Could Wipe Out Weeds. - Singularity Hub

This Week’s Awesome Tech Stories From Around the Web (Through April 6) – Singularity Hub

To Build a Better AI Supercomputer, Let There Be Light
Will Knight | Wired
"Lightmatter wants to directly connect hundreds of thousands or even millions of GPUs, those silicon chips that are crucial to AI training, using optical links. Reducing the conversion bottleneck should allow data to move between chips at much higher speeds than is possible today, potentially enabling distributed AI supercomputers of extraordinary scale."

Apple Has Been Secretly Building Home Robots That Could End up as a New Product Line, Report Says
Aaron Mok | Business Insider
"Apple is in the early stages of looking into making home robots, a move that appears to be an effort to create its next big thing after it killed its self-driving car project earlier this year, sources familiar with the matter told Bloomberg. Engineers are looking into developing a robot that could follow users around their houses, Bloomberg reported. They're also exploring a tabletop at-home device that uses robotics to rotate the display, a more advanced project than the mobile robot."

A Tantalizing Hint That Astronomers Got Dark Energy All Wrong
Dennis Overbye | The New York Times
"On Thursday, astronomers who are conducting what they describe as the biggest and most precise survey yet of the history of the universe announced that they might have discovered a major flaw in their understanding of dark energy, the mysterious force that is speeding up the expansion of the cosmos. Dark energy was assumed to be a constant force in the universe, both currently and throughout cosmic history. But the new data suggest that it may be more changeable, growing stronger or weaker over time, reversing or even fading away."

How ASML Took Over the Chipmaking Chessboard
Mat Honan and James O'Donnell | MIT Technology Review
"When asked what he thought might eventually cause Moore's Law to finally stall out, van den Brink rejected the premise entirely. 'There's no reason to believe this will stop. You won't get the answer from me where it will end,' he said. 'It will end when we're running out of ideas where the value we create with all this will not balance with the cost it will take. Then it will end. And not by the lack of ideas.'"

The Very First Jet Suit Grand Prix Takes Off in Dubai
Mike Hanlon | New Atlas
"A new sport kicked off this month when the first ever jet-suit race was held in Dubai. Each racer wore an array of seven 130-hp jet engines (two on each arm and three in the backpack for a total 1,050 hp) that are controlled by hand-throttles. After that, the pilots use the three thrust vectors to gain lift, move forward, and try to stay above ground level while negotiating the course faster than anyone else."

Toyota's Bubble-ized Humanoid Grasps With Its Whole Body
Evan Ackerman | IEEE Spectrum
"Many of those motions look very human-like, because this is how humans manipulate things. Not to throw too much shade at all those humanoid warehouse robots, but as is pointed out in the video above, using just our hands outstretched in front of us to lift things is not how humans do it, because using other parts of our bodies to provide extra support makes lifting easier."

A Brief History of the Future Offers a Hopeful Antidote to Cynical Tech Takes
Devin Coldewey | TechCrunch
"The future, he said, isn't just what a Silicon Valley publicist tells you, or what Big Dystopia warns you of, or even what a TechCrunch writer predicts. In the six-episode series, he talks with dozens of individuals, companies and communities about how they're working to improve and secure a future they may never see. From mushroom leather to ocean cleanup to death doulas, Wallach finds people who see the same scary future we do but are choosing to do something about it, even if that thing seems hopelessly small or naive."

This AI Startup Wants You to Talk to Houses, Cars, and Factories
Steven Levy | Wired
"We've all been astonished at how chatbots seem to understand the world. But what if they were truly connected to the real world? What if the dataset behind the chat interface was physical reality itself, captured in real time by interpreting the input of billions of sensors sprinkled around the globe? That's the idea behind Archetype AI, an ambitious startup launching today. As cofounder and CEO Ivan Poupyrev puts it, 'Think of ChatGPT, but for physical reality.'"

How One Tech Skeptic Decided AI Might Benefit the Middle Class
Steve Lohr | The New York Times
"David Autor seems an unlikely AI optimist. The labor economist at the Massachusetts Institute of Technology is best known for his in-depth studies showing how much technology and trade have eroded the incomes of millions of American workers over the years. But Mr. Autor is now making the case that the new wave of technology, generative artificial intelligence, which can produce hyper-realistic images and video and convincingly imitate humans' voices and writing, could reverse that trend."

Image Credit: Harole Ethan / Unsplash

Read the original:

This Week's Awesome Tech Stories From Around the Web (Through April 6) - Singularity Hub

Palia studio Singularity 6 is the latest studio to suffer layoffs – PC Gamer

Singularity 6, the studio behind the cosy MMO Palia, is the latest developer to suffer layoffs. Just under 50 developers, around one third of the company, have been let go according to Polygon reporter Nicole Carpenter.

Environmental artist Daphne Fiato tweeted "Whelp, I've been laid off," following up with "49 people Thanos snapped". Other Singularity 6 folk joined to reveal they'd also been laid off, including Brian Ernst who tweeted they'd been with the developer for five years. One developer revealed via LinkedIn that they'd been given the news while on vacation, according to MMORPG.com.

Singularity 6 is yet to publicly address the layoffs, with its last Twitter post happening on April 3, one day before they occurred. It's the same situation for the official Palia account.

Palia only just arrived on Steam on March 25, following a stint as an Epic exclusive. The free-to-play MMO has generated some praise for its cosy vibes and its stress-free cycle of farming and building, but it's also been criticised for its slow progression, reliance on timers and limited multiplayer elements. Its development status has also led to some confusion: it's not in early access, and the store page implies a feature-complete game, when in reality it is still in beta. On Steam, it's currently sitting at just over 3,700 user reviews with a "Mixed" rating.

The studio joins a painfully long line of developers to have nixed a portion of their staff this year. Despite only being four months into the year, the number of layoffs is close to reaching last year's count. It was estimated that around 10,500 developers lost their jobs last year, according to the Game Industry Layoffs tracker. The number is already up to around 8,000 estimated job losses right now, with more undoubtedly on their way given the volatility of the industry.

We compiled our own layoff chart earlier this year, showing the trajectory of 16,000 layoffs from January 2023 to January 2024. Since the chart was published, companies like Relic, Certain Affinity, Sony and Blackbird Interactive have joined the list.

Continue reading here:

Palia studio Singularity 6 is the latest studio to suffer layoffs - PC Gamer

Midweek Modular: SkaldOne, Arbhar 2, Sovage and The Singularity – gearnews.com

Midweek Modular (Image source: Gearnews)

This week Skald Modular looked hot at SynthFest, Arbhar gets an upgrade, Sovage has new modules, and Error Instruments pulls us fighting and screaming into The Singularity.

It was SynthFest at the weekend, which was a thoroughly good time. Check out George's impressions of what was his first synth show. In terms of modular, I ran into most of the new stuff at Bristronica the week before. SynthFest is definitely more synth-focused in a traditional sense, and there was plenty to enjoy. However, check out Skald Modular below.

There have been a couple of interesting releases this week that we have already covered. First of all, Erica Synths has released two effects modules based on a new DSP platform. We originally saw these at Superbooth, and now the Stereo Reverb and Delay modules are available for sale at 280.

Qu-Bit has released the Mojave granular processor. It inhabits everything sandy, dusty, grainy and deserty and generates extraordinarily interesting and rhythmic explorations of micro-samples.

And, in software news, Cherry Audio has released the epic PS-3300 based on a Korg modular synthesizer. It's worth checking out, I think.

What other peaches could we pluck from the fruit tree of modular this week?

Hiding in plain sight in a booth was Skald Modular and a simple, solid, modular synth voice. SkaldOne is a 16HP all-through-hole analogue monophonic synthesizer voice. It features a single VCO, a 24 dB lowpass OTA filter, a transistor-based VCA and a four-stage envelope with decay and release on the same knob as they do at Moog. The envelope is also wired to the pulse width modulation.

It all sounds very nice, but there's more going on here. SkaldOne is designed to hook up with a bunch of friends to become a polyphonic system. Skald Modular are building a MIDI interface which will support velocity, pitch bend and aftertouch, as well as an LFO that can be bussed to multiple SkaldOne voices, presumably via a rear connection. It's a bit like the Dreadbox Telepathy system but with much more space and simplicity.

The first batch of modules is being made now, and Skald hopes the polyphonic system will be ready by Christmas. Each voice will cost around 500. It's a nice idea. The website is currently under construction, so this video from Sonic State is all we have to go on.

The extraordinary Arbhar granular processor from Instruo has had a major overhaul with a brand-new firmware update. It's been rewritten from the ground up and includes so much detail that Instruo has produced an overview video that's over 3 hours long. The biggest key improvements are that the number of simultaneous polyphonic grains has doubled to 88 between the two engines and that the output can now be in stereo.

It's a beautiful and intoxicating module that looks like nothing else in your rack. To summarise the features, I can tell you that it has two granular engines and a total of six 10-second audio buffers. It has pitch randomisation and grain detection probability. It can scan, it can follow, or it can become a wavetable oscillator. There is a built-in condenser microphone, a preamp and a limiter for instant and automatic audio capturing, or you can dump your library onto the 4GB USB flash drive. You can save, load and clone between layers and save entire configurations with up to 42 scenes. This is an epic machine.

Arbhar V2.0 is available as a free upgrade to existing owners and is already shipping with all new modules.

This time last year, Sovage launched its first range of modules. This week, we have another four to add to the collection. Three of them make some kind of sense, and one is a bit nuts.

Le Brasier is a resonant multimode filter based on germanium and OTA circuits. There's an awful lot of fuzz going on in there. Bagarre is a stereo mix bus distortion with skills as a VCA, mixer, limiter and soft distortion. And Boucan is an analogue noise generator with waveshaping, distortion and filtering.

Sovage modules

The crazy one is Le Binome. It's labelled as a "Spacial Creative Percussive Machine," and space is the one thing that it doesn't really project. In here somewhere is an entire synth voice of unintentional territory. It can use the internal oscillator or external sources to generate percussion through filter and envelope manipulation. It's then pushed into two channels that interact dynamically through Choke and Fade parameters. The stereo field can rotate and modulate in all sorts of ways. There are some interesting knobs on the front panel, like Bass and Air, Sabotage and Decay Shape.

Potentially fascinating, I think, but we could do with some video evidence. A video has just appeared for Le Brasier, so hopefully more will be along.

This is something a bit strange, and that's saying something when it comes to Error Instruments. It has a sub-title of Tropical Noise; it has LPGs, clock dividers and mixing. You can plug in different capacitors or LEDs, and you can run it with or without power for slightly different outcomes. What is it all about?

It's somehow related to the Landscape Noon, which is a delightfully weird passive drum machine. This, perhaps, tells us that we are in the territory of percussive computations. If you turn the power down via a knob on the front, it will behave very much like Noon. Behind the panel is a bunch of oscillators that do weird things as you roll off the power. All you need is a clock and a bit of abuse, and it will start generating pulses of noise, glitches and nonsense.

The Singularity is one crazy mess of noises, patch cables and excessive intentions. Bonkers.

An oldie but a goodie, Stepper Acid is available again after a long time falling under the shadow of the chip shortages. I spoke to Transistor Sound Labs at Synthfest 2022 about the problems they were having, and now, a year later, there is finally some stock.

Stepper Acid is a remarkable 16-step sequencer with all sorts of performance controls, slide, accents, patterns, song modes, and lots of fun to be had. TSL also said the long-awaited Stepper Drum, which had to be completely redesigned, is not too far away now.

Read the original post:

Midweek Modular: SkaldOne, Arbhar 2, Sovage and The Singularity - gearnews.com

Are We Approaching the Singularity? – Walter Bradley Center for Natural and Artificial Intelligence

Are humans progressing morally as well as materially? What does it mean to be human in the cosmos? On a new episode of ID the Future, we bring you the second half of a stimulating conversation between Dr. David Berlinski and host Eric Metaxas on the subject of Berlinski's book Human Nature.

In Human Nature, Berlinski argues that the utopian view that humans are progressing toward evolutionary and technological perfection is wishful thinking. Men are not about to become like gods. "I'm a strong believer in original sin," quips Berlinski in his discussion with Metaxas. In other words, he believes not only that humans are fundamentally distinct from the rest of the biological world, but also that humans are prone to ignorance and depravity as well as wisdom and nobility. During this second half of their discussion, Berlinski and Metaxas compare and contrast the ideas of thinkers like psychologist Steven Pinker, author Christopher Hitchens, and physicist Steven Weinberg. The pair also spar gracefully over the implications of human uniqueness. Berlinski, though candid and self-critical, is unwilling to be pigeonholed. Metaxas, drawing his own conclusions about the role of mind in the universe, challenges Berlinski into moments of clarity with his usual charm. The result is an honest, probing, and wide-ranging conversation about the nature of science and the human condition. Download the podcast or listen to it here.

This is Part 2 of a two-part interview. If you missed it, listen to Part 1.

Cross-posted at Evolution News.

See more here:

Are We Approaching the Singularity? - Walter Bradley Center for Natural and Artificial Intelligence

What is a singularity? | Live Science

To understand what a singularity is, imagine the force of gravity compressing you down into an infinitely tiny point, so that you occupy literally no volume. That sounds impossible and it is. These "singularities" are found in the centers of black holes and at the beginning of the Big Bang. These singularities don't represent something physical. Rather, when they appear in mathematics, they are telling us that our theories of physics are breaking down, and we need to replace them with a better understanding.

Singularities can happen anywhere, and they are surprisingly common in the mathematics that physicists use to understand the universe. Put simply, singularities are places where the mathematics "misbehave," typically by generating infinitely large values. There are examples of mathematical singularities throughout physics: Typically, any time an equation uses 1/X, as X goes to zero, the value of the equation goes to infinity.
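
A standard textbook illustration (not drawn from this article): in Newton's law of gravitation, F = G m1 m2 / r^2, the predicted force grows without bound as the separation r approaches zero, for example 1/(0.1)^2 = 100, 1/(0.01)^2 = 10,000, and so on. The divergence signals that the formula is being pushed outside its domain of validity, not that infinite forces actually occur.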

Most of these singularities, however, can usually be resolved by pointing out that the equations are missing some factor, or noting the physical impossibility of ever reaching the singularity point. In other words, they are probably not "real."

But there are singularities in physics that do not have simple resolutions. The most famous are gravitational singularities, the infinities that appear in Einstein's general relativity (GR), which is currently our best theory of how gravity works.

In general relativity, there are two kinds of singularities: coordinate singularities and true singularities. Coordinate singularities happen when an infinity appears in one coordinate system (a particular choice for recording separations in time and space) but disappears in another.

For example, the physicist Karl Schwarzschild applied general relativity to the simple system of a spherical mass, such as a star. He found that the solution contained two singularities, one in the very center and one at a certain distance from the center, known today as the Schwarzschild radius. For many years, physicists thought that both singularities signaled breakdowns in the theory, but it didn't matter as long as the radius of the spherical mass was larger than the Schwarzschild radius. All physicists needed was for GR to predict the gravitational influence outside the mass, according to San Jose State University.
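
For concreteness, the Schwarzschild radius has a simple closed form, r_s = 2GM/c^2. The short sketch below (my own illustration, not from the article) evaluates it for the Sun and for a 6-solar-mass object like those discussed further down.

def schwarzschild_radius(mass_kg):
    """Radius in meters below which a mass of mass_kg would become a black hole."""
    G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
    c = 2.998e8        # speed of light, m/s
    return 2.0 * G * mass_kg / c**2

M_SUN = 1.989e30       # one solar mass, kg

print(schwarzschild_radius(M_SUN))      # ~2.95e3 m, about 3 km for the Sun
print(schwarzschild_radius(6 * M_SUN))  # ~1.77e4 m for a 6-solar-mass object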

But what would happen if an object were squeezed below its own Schwarzschild radius? Then that singularity would be outside the mass, and it would mean that GR is breaking down in a region that it shouldn't.

It was soon discovered that the singularity at the Schwarzschild radius was a coordinate singularity. A change in coordinate systems removes the singularity, saving GR and allowing it to still make valid predictions, astrophysicist Ethan Siegel writes in Forbes.

But the singularity at the centers of spherical masses remained. If you squeeze an object below its Schwarzschild radius, then its own gravity becomes so intense that it just keeps on squeezing all by itself, all the way down to an infinitely tiny point, according to National Geographic.

For decades physicists debated whether a collapse to an infinitely tiny point was possible, or whether some other force was able to prevent total collapse. While white dwarfs and neutron stars can hold themselves up indefinitely, any object larger than about six times the mass of the sun will have too much gravity, overwhelming all the other forces and collapsing into an infinitely tiny point: a true singularity, according to NASA.

These are what we call the black holes: a point of infinite density, surrounded by an event horizon located at the Schwarzschild radius. The event horizon "protects" the singularity, preventing outside observers from seeing it unless they traverse the event horizon, according to Quanta Magazine.

Physicists long thought that in GR, all singularities like this are surrounded by event horizons, and this concept was known as the Cosmic Censorship Hypothesis, so named because it was surmised that some process in the universe prevented (or "censored") singularities from being viewable. However, computer simulations and theoretical work have raised the possibility of exposed (or "naked") singularities. A naked singularity would be just that: a singularity without an event horizon, fully observable from the outside universe. Whether such exposed singularities exist continues to be a subject of considerable debate.

Because they are mathematical singularities, nobody knows what's really at the center of a black hole. To understand it, we need a theory of gravity beyond GR. Specifically, we need a quantum theory of gravity, one that can describe the behavior of strong gravity at very tiny scales, according to Physics of the Universe.

Hypotheses that modify or replace general relativity to give us a replacement of the black hole singularity include Planck stars (a highly-compressed exotic form of matter), gravastars (a thin shell of matter supported by exotic gravity), and dark energy stars (an exotic state of vacuum energy that behaves like a black hole). To date, all these ideas are hypothetical, and a true answer must await a quantum theory of gravity.

The Big Bang theory, which assumes general relativity to be true, is the modern cosmological model of the history of the universe. It also contains a singularity. In the distant past, about 13.77 billion years ago, according to the Big Bang theory, the entire universe was compressed into an infinitely tiny point.

Physicists know that this conclusion is incorrect. Though the Big Bang theory is enormously successful at describing the history of the cosmos since that moment, just as with black holes, the presence of the singularity is telling scientists that the theory (again, GR) is incomplete and needs to be updated.

One possible resolution to the Big Bang singularity is causal set theory. Under causal set theory, space-time is not a smooth continuum, as it is in GR, but rather made up of discrete chunks, named "space-time atoms." Since nothing can be smaller than one of these "atoms," singularities are impossible, Bruno Bento, a physicist studying this topic at the University of Liverpool in England, told Live Science.

Bento and his collaborators are attempting to replace the earliest moments of the Big Bang using causal set theory. After those initial moments, "somewhere along the way, the universe becomes large and 'well-behaved' enough so that a continuum space-time approximation becomes a good description and GR can take over to reproduce what we see," Bento said.

While there are no universally accepted solutions to the Big Bang singularity problem, physicists are hopeful they will find a solution soon and they're enjoying their work. As Bento said, "I've always been fascinated with the universe and the fact that reality has so many things that most people would associate with sci-fi or even fantasy."

Read more:

What is a singularity? | Live Science

What Is A Singularity? – Universe Today

Ever since scientists first discovered the existence of black holes in our universe, we have all wondered: what could possibly exist beyond the veil of that terrible void? In addition, ever since the theory of General Relativity was first proposed, scientists have been forced to wonder: what could have existed before the birth of the Universe, i.e. before the Big Bang?

Interestingly enough, these two questions have come to be resolved (after a fashion) with the theoretical existence of something known as a Gravitational Singularity, a point in space-time where the laws of physics as we know them break down. And while there remain challenges and unresolved issues about this theory, many scientists believe that beneath the veil of an event horizon, and at the beginning of the Universe, this was what existed.

In scientific terms, a gravitational singularity (or space-time singularity) is a location where the quantities that are used to measure the gravitational field become infinite in a way that does not depend on the coordinate system. In other words, it is a point in which all physical laws are indistinguishable from one another, where space and time are no longer interrelated realities, but merge indistinguishably and cease to have any independent meaning.

Singularities were first predicted as a result of Einstein's Theory of General Relativity, which resulted in the theoretical existence of black holes. In essence, the theory predicted that any star reaching beyond a certain point in its mass (aka the Schwarzschild Radius) would exert a gravitational force so intense that it would collapse.

At this point, nothing would be capable of escaping its surface, including light. This is because the escape velocity would exceed the speed of light in a vacuum: 299,792,458 meters per second (1,079,252,848.8 km/h; 670,616,629 mph).

This phenomenon is known as the Chandrasekhar Limit, named after the Indian astrophysicist Subrahmanyan Chandrasekhar, who proposed it in 1930. At present, the accepted value of this limit is believed to be 1.39 Solar Masses (i.e. 1.39 times the mass of our Sun), which works out to a whopping 2.765 x 10^30 kg (or 2,765 trillion trillion metric tons).
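
As a quick check of that conversion (my own arithmetic, not from the article): 1.39 solar masses is 1.39 × 1.989 × 10^30 kg ≈ 2.76 × 10^30 kg, consistent with the figure quoted above.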

Another aspect of modern General Relativity is that the initial state of the Universe, at the time of the Big Bang, was a singularity. Roger Penrose and Stephen Hawking both developed theories that attempted to answer how gravitation could produce singularities, which eventually merged together to be known as the Penrose-Hawking Singularity Theorems.

According to the Penrose Singularity Theorem, which he proposed in 1965, a time-like singularity will occur within a black hole whenever matter reaches certain energy conditions. At this point, the curvature of space-time within the black hole becomes infinite, thus turning it into a trapped surface where time ceases to function.

The Hawking Singularity Theorem added to this by stating that a space-like singularity can occur when matter is forcibly compressed to a point, causing the rules that govern matter to break down. Hawking traced this back in time to the Big Bang, which he claimed was a point of infinite density. However, Hawking later revised this to claim that general relativity breaks down at times prior to the Big Bang, and hence no singularity could be predicted by it.

Some more recent proposals also suggest that the Universe did not begin as a singularity. These include theories like Loop Quantum Gravity, which attempts to unify the laws of quantum physics with gravity. This theory states that, due to quantum gravity effects, there is a minimum distance beyond which gravity no longer continues to increase, or that interpenetrating particle waves mask gravitational effects that would be felt at a distance.

The two most important types of space-time singularities are known as Curvature Singularities and Conical Singularities. Singularities can also be divided according to whether they are covered by an event horizon or not. In the case of the former, you have the Curvature and Conical; whereas in the latter, you have what are known as Naked Singularities.

A Curvature Singularity is best exemplified by a black hole. At the center of a black hole, space-time becomes a one-dimensional point which contains a huge mass. As a result, gravity becomes infinite and space-time curves infinitely, and the laws of physics as we know them cease to function.

Conical singularities occur when there is a point where the limit of every general covariance quantity is finite. In this case, space-time looks like a cone around this point, where the singularity is located at the tip of the cone. An example of such a conical singularity is a cosmic string, a type of hypothetical one-dimensional defect that is believed to have formed during the early Universe.

And, as mentioned, there is the Naked Singularity, a type of singularity which is not hidden behind an event horizon. These were first discovered in 1991 by Shapiro and Teukolsky using computer simulations of a rotating plane of dust that indicated that General Relativity might allow for naked singularities.

In this case, what actually transpires within a black hole (i.e. its singularity) would be visible. Such a singularity would theoretically be what existed prior to the Big Bang. The key word here is theoretical, as it remains a mystery what these objects would look like.

For the moment, singularities and what actually lies beneath the veil of a black hole remains a mystery. As time goes on, it is hoped that astronomers will be able to study black holes in greater detail. It is also hoped that in the coming decades, scientists will find a way to merge the principles of quantum mechanics with gravity, and that this will shed further light on how this mysterious force operates.

We have many interesting articles about gravitational singularities here at Universe Today. Here are 10 Interesting Facts About Black Holes, What Would A Black Hole Look Like?, Was the Big Bang Just a Black Hole?, Goodbye Big Bang, Hello Black Hole?, Who is Stephen Hawking?, and What's on the Other Side of a Black Hole?

If you'd like more info on singularities, check out these articles from NASA and Physlink.

Astronomy Cast has some relevant episodes on the subject. Here's Episode 6: More Evidence for the Big Bang, Episode 18: Black Holes Big and Small, and Episode 21: Black Hole Questions Answered.

Read more from the original source:

What Is A Singularity? - Universe Today

Technological singularity – Wikipedia

Hypothetical point in time when technological growth becomes uncontrollable and irreversible

The technological singularity, or simply the singularity,[1] is a hypothetical future point in time at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable changes to human civilization.[2][3] According to the most popular version of the singularity hypothesis, I.J. Good's intelligence explosion model, an upgradable intelligent agent will eventually enter a "runaway reaction" of self-improvement cycles, each new and more intelligent generation appearing more and more rapidly, causing an "explosion" in intelligence and resulting in a powerful superintelligence that qualitatively far surpasses all human intelligence.[4]

The first person to use the concept of a "singularity" in the technological context was John von Neumann.[5] Stanislaw Ulam reports a discussion with von Neumann "centered on the accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue".[6] Subsequent authors have echoed this viewpoint.[3][7]

The concept and the term "singularity" were popularized by Vernor Vinge first in 1983 in an article that claimed that once humans create intelligences greater than their own, there will be a technological and social transition similar in some sense to "the knotted space-time at the center of a black hole",[8] and later in his 1993 essay The Coming Technological Singularity,[4][7] in which he wrote that it would signal the end of the human era, as the new superintelligence would continue to upgrade itself and would advance technologically at an incomprehensible rate. He wrote that he would be surprised if it occurred before 2005 or after 2030.[4] Another significant contributor to wider circulation of the notion was Ray Kurzweil's 2005 book The Singularity is Near, predicting singularity by 2045.[7]

Some scientists, including Stephen Hawking, have expressed concern that artificial superintelligence (ASI) could result in human extinction.[9][10] The consequences of the singularity and its potential benefit or harm to the human race have been intensely debated.

Prominent technologists and academics dispute the plausibility of a technological singularity and the associated artificial intelligence explosion, including Paul Allen,[11] Jeff Hawkins,[12] John Holland, Jaron Lanier, Steven Pinker,[12] Theodore Modis,[13] and Gordon Moore.[12] One claim made was that the artificial intelligence growth is likely to run into decreasing returns instead of accelerating ones, as was observed in previously developed human technologies.

Although technological progress has been accelerating in most areas, it has been limited by the basic intelligence of the human brain, which has not, according to Paul R. Ehrlich, changed significantly for millennia.[14] However, with the increasing power of computers and other technologies, it might eventually be possible to build a machine that is significantly more intelligent than humans.[15]

If a superhuman intelligence were to be inventedeither through the amplification of human intelligence or through artificial intelligenceit would vastly improve over human problem-solving and inventive skills. Such an AI is referred to as Seed AI[16][17] because if an AI were created with engineering capabilities that matched or surpassed those of its human creators, it would have the potential to autonomously improve its own software and hardware to design an even more capable machine, which could repeat the process in turn. This recursive self-improvement could accelerate, potentially allowing enormous qualitative change before any upper limits imposed by the laws of physics or theoretical computation set in. It is speculated that over many iterations, such an AI would far surpass human cognitive abilities.

I. J. Good speculated in 1965 that artificial general intelligence might bring about an intelligence explosion:[18][19]

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an intelligence explosion, and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.

One version of intelligence explosion is one where computing power approaches infinity in a finite amount of time. In this version, once AIs are doing the research to improve themselves, speed doubles e.g. after 2 years, then 1 year, then 6 months, then 3 months, then 1.5 months, etc., where the infinite sum of the doubling periods is 4 years. Unless prevented by physical limits of computation and time quantization, this process would literally achieve infinite computing power in 4 years, properly earning the name "singularity" for the final state. This form of intelligence explosion is described in Yudkowsky (1996).[20]
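
The arithmetic behind that four-year figure is just a geometric series (a standard result, not specific to the source): 2 + 1 + 1/2 + 1/4 + ... = 2 × (1 + 1/2 + 1/4 + ...) = 2 × 2 = 4, so the doubling intervals shrink toward zero while their total duration stays finite.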

A superintelligence, hyperintelligence, or superhuman intelligence is a hypothetical agent that possesses intelligence far surpassing that of the brightest and most gifted human minds. "Superintelligence" may also refer to the form or degree of intelligence possessed by such an agent. John von Neumann, Vernor Vinge and Ray Kurzweil define the concept in terms of the technological creation of superintelligence, arguing that it is difficult or impossible for present-day humans to predict what human beings' lives would be like in a post-singularity world.[4][21]

Technology forecasters and researchers disagree regarding when, or whether, human intelligence will likely be surpassed. Some argue that advances in artificial intelligence (AI) will probably result in general reasoning systems that bypass human cognitive limitations. Others believe that humans will evolve or directly modify their biology so as to achieve radically greater intelligence.[22][23] A number of futures studies scenarios combine these possibilities, suggesting that humans are likely to interface with computers, or upload their minds to computers, in a way that enables substantial intelligence amplification. The book The Age of Em by Robin Hanson outlines a future in which uploads of human brains emerge instead of or on the way to the emergence of superintelligence.[24]

Some writers use "the singularity" in a broader way to refer to any radical changes in our society brought about by new technologies such as molecular nanotechnology,[25][26][27] although Vinge and other writers specifically state that without superintelligence, such changes would not qualify as a true singularity.[4]

A speed superintelligence describes an AI that can function like a human mind, only much faster.[28] For example, with a million-fold increase in the speed of information processing relative to that of humans, a subjective year would pass in 30 physical seconds.[29] Such a difference in information processing speed could drive the singularity.[30]
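
The 30-second figure is essentially unit conversion: at a million-fold speedup, one subjective year of the AI's experience elapses in roughly

\[
\frac{1 \text{ year}}{10^{6}} \;\approx\; \frac{3.15 \times 10^{7} \text{ s}}{10^{6}} \;\approx\; 31.5 \text{ s}
\]

of physical time.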

According to Chalmers, "Good (1965) predicts an ultraintelligent machine by 2000,[18] Vinge (1993) predicts greater-than-human intelligence between 2005 and 2030,[4] Yudkowsky (1996) predicts a singularity by 2021,[20] and Kurzweil (2005) predicts human-level artificial intelligence by 2030."[7] Moravec (1988) predicts human-level artificial intelligence in supercomputers by 2010 by extrapolating past trends on a chart,[31] while Moravec (1998/1999) predicts human-level artificial intelligence by 2040, and intelligence far beyond human by 2050.[32] In a 2017 interview, Kurzweil predicted human-level intelligence by 2029, and a billion-fold increase in intelligence and the singularity by 2045.[33][34]

Four polls of AI researchers, conducted in 2012 and 2013 by Nick Bostrom and Vincent C. Müller, suggested a confidence of 50% that artificial general intelligence (AGI) would be developed by 2040-2050.[35][36]

Prominent technologists and academics dispute the plausibility of a technological singularity, including Paul Allen,[11] Jeff Hawkins,[12] John Holland, Jaron Lanier, Steven Pinker,[12] Theodore Modis,[13] and Gordon Moore,[12] whose law is often cited in support of the concept.[37]

Most proposed methods for creating superhuman or transhuman minds fall into one of two categories: intelligence amplification of human brains and artificial intelligence. The many speculated ways to augment human intelligence include bioengineering, genetic engineering, nootropic drugs, AI assistants, direct braincomputer interfaces and mind uploading. These multiple possible paths to an intelligence explosion, all of which will presumably be pursued, make a singularity more likely.[29]

Robin Hanson expressed skepticism of human intelligence augmentation, writing that once the "low-hanging fruit" of easy methods for increasing human intelligence have been exhausted, further improvements will become increasingly difficult.[38] Despite all of the speculated ways for amplifying human intelligence, non-human artificial intelligence (specifically seed AI) is the most popular option among the hypotheses that would advance the singularity.[citation needed]

The possibility of an intelligence explosion depends on three factors.[39] The first accelerating factor is the new intelligence enhancements made possible by each previous improvement. Contrariwise, as the intelligences become more advanced, further advances will become more and more complicated, possibly outweighing the advantage of increased intelligence. Each improvement should generate at least one more improvement, on average, for movement towards singularity to continue. Finally, the laws of physics may eventually prevent further improvement.

There are two logically independent, but mutually reinforcing, causes of intelligence improvements: increases in the speed of computation, and improvements to the algorithms used.[7] The former is predicted by Moore's Law and the forecasted improvements in hardware,[40] and is comparatively similar to previous technological advances. But some AI researchers[who?] believe software is more important than hardware.[41]

A 2017 email survey of authors with publications at the 2015 NeurIPS and ICML machine learning conferences asked about the chance that "the intelligence explosion argument is broadly correct". Of the respondents, 12% said it was "quite likely", 17% said it was "likely", 21% said it was "about even", 24% said it was "unlikely" and 26% said it was "quite unlikely".[42]

Both for human and artificial intelligence, hardware improvements increase the rate of future hardware improvements. An analogy to Moore's Law suggests that if the first doubling of speed took 18 months, the second would take 18 subjective months, or 9 external months, followed by roughly four external months, then two, and so on toward a speed singularity.[43][20] Some upper limit on speed may eventually be reached. Jeff Hawkins has stated that a self-improving computer system would inevitably run into upper limits on computing power: "in the end there are limits to how big and fast computers can run. We would end up in the same place; we'd just get there a bit faster. There would be no singularity."[12]
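
A minimal sketch of that analogy (an illustration only, not a claim about real hardware): if every doubling takes 18 subjective months, the external time per doubling halves each round, and the cumulative external time converges toward 36 months, the "speed singularity" of this toy model.

```python
# Illustrative toy model: external months elapsed per successive speed doubling,
# assuming each doubling always takes 18 *subjective* months of work.
def external_timeline(doublings: int, subjective_months: float = 18.0) -> None:
    speed = 1.0      # speed relative to today's hardware
    elapsed = 0.0    # external (calendar) months elapsed so far
    for n in range(1, doublings + 1):
        elapsed += subjective_months / speed  # faster machines finish the same work sooner
        speed *= 2.0
        print(f"doubling {n:2d}: speed x{speed:>6.0f}, external months ~{elapsed:.2f}")

external_timeline(10)  # cumulative external time approaches, but never exceeds, 36 months
```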

It is difficult to directly compare silicon-based hardware with neurons. But Berglas (2008) notes that computer speech recognition is approaching human capabilities, and that this capability seems to require 0.01% of the volume of the brain. This analogy suggests that modern computer hardware is within a few orders of magnitude of being as powerful as the human brain.

The exponential growth in computing technology suggested by Moore's law is commonly cited as a reason to expect a singularity in the relatively near future, and a number of authors have proposed generalizations of Moore's law. Computer scientist and futurist Hans Moravec proposed in a 1998 book[44] that the exponential growth curve could be extended back through earlier computing technologies prior to the integrated circuit.

Ray Kurzweil postulates a law of accelerating returns in which the speed of technological change (and more generally, all evolutionary processes[45]) increases exponentially, generalizing Moore's law in the same manner as Moravec's proposal, and also including material technology (especially as applied to nanotechnology), medical technology and others.[46] Between 1986 and 2007, machines' application-specific capacity to compute information per capita roughly doubled every 14 months; the per capita capacity of the world's general-purpose computers has doubled every 18 months; the global telecommunication capacity per capita doubled every 34 months; and the world's storage capacity per capita doubled every 40 months.[47] On the other hand, it has been argued that the global acceleration pattern having the 21st century singularity as its parameter should be characterized as hyperbolic rather than exponential.[48]
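
For readers who prefer annual growth rates to doubling times, the figures just quoted convert as follows; this is a simple restatement of the cited numbers via standard compound growth, not new data.

```python
# Convert a doubling time (in months) into the equivalent annual growth rate.
def annual_growth_from_doubling(months: float) -> float:
    return 2 ** (12.0 / months) - 1.0

for label, months in [
    ("application-specific compute per capita", 14),
    ("general-purpose compute per capita", 18),
    ("telecom capacity per capita", 34),
    ("storage capacity per capita", 40),
]:
    rate = annual_growth_from_doubling(months)
    print(f"{label}: doubling every {months} months ~ {rate:.0%} per year")
```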

Kurzweil reserves the term "singularity" for a rapid increase in artificial intelligence (as opposed to other technologies), writing for example that "The Singularity will allow us to transcend these limitations of our biological bodies and brains ... There will be no distinction, post-Singularity, between human and machine".[49] He also defines his predicted date of the singularity (2045) in terms of when he expects computer-based intelligences to significantly exceed the sum total of human brainpower, writing that advances in computing before that date "will not represent the Singularity" because they do "not yet correspond to a profound expansion of our intelligence."[50]

Some singularity proponents argue its inevitability through extrapolation of past trends, especially those pertaining to shortening gaps between improvements to technology. In one of the first uses of the term "singularity" in the context of technological progress, Stanislaw Ulam tells of a conversation with John von Neumann about accelerating change:

One conversation centered on the ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.[6]

Kurzweil claims that technological progress follows a pattern of exponential growth, following what he calls the "law of accelerating returns". Whenever technology approaches a barrier, Kurzweil writes, new technologies will surmount it. He predicts paradigm shifts will become increasingly common, leading to "technological change so rapid and profound it represents a rupture in the fabric of human history".[51] Kurzweil believes that the singularity will occur by approximately 2045.[46] His predictions differ from Vinge's in that he predicts a gradual ascent to the singularity, rather than Vinge's rapidly self-improving superhuman intelligence.

Oft-cited dangers include those commonly associated with molecular nanotechnology and genetic engineering. These threats are major issues for both singularity advocates and critics, and were the subject of Bill Joy's April 2000 Wired magazine article "Why The Future Doesn't Need Us".[7][52]

Some intelligence technologies, like "seed AI",[16][17] may also have the potential to not just make themselves faster, but also more efficient, by modifying their source code. These improvements would make further improvements possible, which would make further improvements possible, and so on.

The mechanism for a recursively self-improving set of algorithms differs from an increase in raw computation speed in two ways. First, it does not require external influence: machines designing faster hardware would still require humans to create the improved hardware, or to program factories appropriately.[citation needed] An AI rewriting its own source code could do so while contained in an AI box.

Second, as with Vernor Vinge's conception of the singularity, it is much harder to predict the outcome. While speed increases seem to be only a quantitative difference from human intelligence, actual algorithm improvements would be qualitatively different. Eliezer Yudkowsky compares it to the changes that human intelligence brought: humans changed the world thousands of times more rapidly than evolution had done, and in totally different ways. Similarly, the evolution of life was a massive departure and acceleration from the previous geological rates of change, and improved intelligence could cause change to be as different again.[53]

There are substantial dangers associated with an intelligence explosion singularity originating from a recursively self-improving set of algorithms. First, the goal structure of the AI might self-modify, potentially causing the AI to optimise for something other than what was originally intended.[54][55]

Secondly, AIs could compete for the same scarce resources humankind uses to survive.[56][57] While not actively malicious, AIs would promote the goals of their programming, not necessarily broader human goals, and thus might crowd out humans completely.[58][59][60]

Carl Shulman and Anders Sandberg suggest that algorithm improvements may be the limiting factor for a singularity; while hardware efficiency tends to improve at a steady pace, software innovations are more unpredictable and may be bottlenecked by serial, cumulative research. They suggest that in the case of a software-limited singularity, intelligence explosion would actually become more likely than with a hardware-limited singularity, because in the software-limited case, once human-level AI is developed, it could run serially on very fast hardware, and the abundance of cheap hardware would make AI research less constrained.[61] An abundance of accumulated hardware that can be unleashed once the software figures out how to use it has been called "computing overhang."[62]

Some critics, like philosopher Hubert Dreyfus[63] and philosopher John Searle,[64] assert that computers or machines cannot in principle achieve true human intelligence. Others, like physicist Stephen Hawking,[65] object that whether machines can achieve a true intelligence or merely something similar to intelligence is irrelevant if the net result is the same.

Psychologist Steven Pinker stated in 2008: "There is not the slightest reason to believe in a coming singularity. The fact that you can visualize a future in your imagination is not evidence that it is likely or even possible. Look at domed cities, jet-pack commuting, underwater cities, mile-high buildings, and nuclear-powered automobilesall staples of futuristic fantasies when I was a child that have never arrived. Sheer processing power is not a pixie dust that magically solves all your problems. ..."[12]

Martin Ford[66] postulates a "technology paradox" in that before the singularity could occur most routine jobs in the economy would be automated, since this would require a level of technology inferior to that of the singularity. This would cause massive unemployment and plummeting consumer demand, which in turn would destroy the incentive to invest in the technologies that would be required to bring about the Singularity. Job displacement is increasingly no longer limited to those types of work traditionally considered to be "routine."[67]

Theodore Modis[68] and Jonathan Huebner[69] argue that the rate of technological innovation has not only ceased to rise, but is actually now declining. Evidence for this decline is that the rise in computer clock rates is slowing, even while Moore's prediction of exponentially increasing circuit density continues to hold. This is due to excessive heat build-up from the chip, which cannot be dissipated quickly enough to prevent the chip from melting when operating at higher speeds. Advances in speed may be possible in the future by virtue of more power-efficient CPU designs and multi-cell processors.[70]

Theodore Modis holds that the singularity cannot happen.[71][13][72] He claims the "technological singularity" and especially Kurzweil lack scientific rigor; Kurzweil is alleged to mistake the logistic function (S-function) for an exponential function, and to see a "knee" in an exponential function where there can in fact be no such thing.[73] In a 2021 article, Modis pointed out that no milestones (breaks in historical perspective comparable in importance to the Internet, DNA, the transistor, or nuclear energy) had been observed in the previous twenty years, while five would have been expected according to the exponential trend advocated by proponents of the technological singularity.[74]

AI researcher Jürgen Schmidhuber stated that the frequency of subjectively "notable events" appears to be approaching a 21st-century singularity, but cautioned readers to take such plots of subjective events with a grain of salt: perhaps differences in memory of recent and distant events could create an illusion of accelerating change where none exists.[75]

Microsoft co-founder Paul Allen argued the opposite of accelerating returns, the complexity brake;[11] the more progress science makes towards understanding intelligence, the more difficult it becomes to make additional progress. A study of the number of patents shows that human creativity does not show accelerating returns, but in fact, as suggested by Joseph Tainter in his The Collapse of Complex Societies,[76] a law of diminishing returns. The number of patents per thousand peaked in the period from 1850 to 1900, and has been declining since.[69] The growth of complexity eventually becomes self-limiting, and leads to a widespread "general systems collapse".

Hofstadter (2006) raises concern that Ray Kurzweil is not sufficiently scientifically rigorous, that an exponential tendency of technology is not a scientific law like one of physics, and that exponential curves have no "knees".[77] Nonetheless, he does not rule out the singularity in principle in the distant future.[12]

Jaron Lanier denies that the singularity is inevitable: "I do not think the technology is creating itself. It's not an autonomous process."[78] Furthermore: "The reason to believe in human agency over technological determinism is that you can then have an economy where people earn their own way and invent their own lives. If you structure a society on not emphasizing individual human agency, it's the same thing operationally as denying people clout, dignity, and self-determination ... to embrace [the idea of the Singularity] would be a celebration of bad data and bad politics."[78]

Economist Robert J. Gordon points out that measured economic growth has slowed around 1970 and slowed even further since the financial crisis of 20072008, and argues that the economic data show no trace of a coming Singularity as imagined by mathematician I. J. Good.[79]

Philosopher and cognitive scientist Daniel Dennett said in 2017: "The whole singularity stuff, that's preposterous. It distracts us from much more pressing problems", adding "AI tools that we become hyper-dependent on, that is going to happen. And one of the dangers is that we will give them more authority than they warrant."[80]

In addition to general criticisms of the singularity concept, several critics have raised issues with Kurzweil's iconic chart. One line of criticism is that a log-log chart of this nature is inherently biased toward a straight-line result. Others identify selection bias in the points that Kurzweil chooses to use. For example, biologist PZ Myers points out that many of the early evolutionary "events" were picked arbitrarily.[81] Kurzweil has rebutted this by charting evolutionary events from 15 neutral sources, and showing that they fit a straight line on a log-log chart. Kelly (2006) argues that because the Kurzweil chart is constructed with the x-axis showing time before the present, it always points to the singularity being "now" for any date on which one would construct such a chart, and shows this visually on Kurzweil's chart.[82]

Some critics suggest religious motivations or implications of singularity, especially Kurzweil's version of it. The buildup towards the Singularity is compared with Judeo-Christian end-of-time scenarios. Beam calls it "a Buck Rogers vision of the hypothetical Christian Rapture".[83] John Gray says "the Singularity echoes apocalyptic myths in which history is about to be interrupted by a world-transforming event".[84]

Dramatic changes in the rate of economic growth have occurred in the past because of technological advancement. Based on population growth, the economy doubled every 250,000 years from the Paleolithic era until the Neolithic Revolution. The new agricultural economy doubled every 900 years, a remarkable increase. In the current era, beginning with the Industrial Revolution, the world's economic output doubles every fifteen years, sixty times faster than during the agricultural era. If the rise of superhuman intelligence causes a similar revolution, argues Robin Hanson, one would expect the economy to double at least quarterly and possibly on a weekly basis.[85]
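
To make Hanson's comparison concrete, here is a rough restatement of those doubling times as annual growth rates, assuming clean exponential growth within each era (the labels paraphrase the paragraph above; the numbers are simple conversions).

```python
# Doubling time (in years) -> implied annual growth rate under steady exponential growth.
def annual_rate(doubling_time_years: float) -> float:
    return 2 ** (1.0 / doubling_time_years) - 1.0

eras = [
    ("Paleolithic economy (doubles every 250,000 years)", 250_000),
    ("Agricultural economy (doubles every 900 years)", 900),
    ("Industrial economy (doubles every 15 years)", 15),
    ("Hanson's post-singularity guess (doubles quarterly)", 0.25),
    ("Hanson's post-singularity guess (doubles weekly)", 1 / 52),
]
for label, t in eras:
    print(f"{label}: ~{annual_rate(t) * 100:.4g}% per year")
```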

The term "technological singularity" reflects the idea that such change may happen suddenly, and that it is difficult to predict how the resulting new world would operate.[86][87] It is unclear whether an intelligence explosion resulting in a singularity would be beneficial or harmful, or even an existential threat.[88][89] Because AI is a major factor in singularity risk, a number of organizations pursue a technical theory of aligning AI goal-systems with human values, including the Future of Humanity Institute, the Machine Intelligence Research Institute,[86] the Center for Human-Compatible Artificial Intelligence, and the Future of Life Institute.

Physicist Stephen Hawking said in 2014 that "Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks."[90] Hawking believed that in the coming decades, AI could offer "incalculable benefits and risks" such as "technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand."[90] Hawking suggested that artificial intelligence should be taken more seriously and that more should be done to prepare for the singularity:[90]

So, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right? Wrong. If a superior alien civilisation sent us a message saying, "We'll arrive in a few decades," would we just reply, "OK, call us when you get here - we'll leave the lights on"? Probably not - but this is more or less what is happening with AI.

Berglas (2008) claims that there is no direct evolutionary motivation for an AI to be friendly to humans. Evolution has no inherent tendency to produce outcomes valued by humans, and there is little reason to expect an arbitrary optimisation process to promote an outcome desired by humankind, rather than inadvertently leading to an AI behaving in a way not intended by its creators.[91][92][93] Anders Sandberg has also elaborated on this scenario, addressing various common counter-arguments.[94] AI researcher Hugo de Garis suggests that artificial intelligences may simply eliminate the human race for access to scarce resources,[56][95] and humans would be powerless to stop them.[96] Alternatively, AIs developed under evolutionary pressure to promote their own survival could outcompete humanity.[60]

Bostrom (2002) discusses human extinction scenarios, and lists superintelligence as a possible cause:

When we create the first superintelligent entity, we might make a mistake and give it goals that lead it to annihilate humankind, assuming its enormous intellectual advantage gives it the power to do so. For example, we could mistakenly elevate a subgoal to the status of a supergoal. We tell it to solve a mathematical problem, and it complies by turning all the matter in the solar system into a giant calculating device, in the process killing the person who asked the question.

According to Eliezer Yudkowsky, a significant problem in AI safety is that unfriendly artificial intelligence is likely to be much easier to create than friendly AI. While both require large advances in recursive optimisation process design, friendly AI also requires the ability to make goal structures invariant under self-improvement (or the AI could transform itself into something unfriendly) and a goal structure that aligns with human values and does not automatically destroy the human race. An unfriendly AI, on the other hand, can optimize for an arbitrary goal structure, which does not need to be invariant under self-modification.[97] Bill Hibbard (2014) proposes an AI design that avoids several dangers including self-delusion,[98] unintended instrumental actions,[54][99] and corruption of the reward generator.[99] He also discusses social impacts of AI[100] and testing AI.[101] His 2001 book Super-Intelligent Machines advocates the need for public education about AI and public control over AI. It also proposed a simple design that was vulnerable to corruption of the reward generator.

While the technological singularity is usually seen as a sudden event, some scholars argue the current speed of change already fits this description.[citation needed]

In addition, some argue that we are already in the midst of a major evolutionary transition that merges technology, biology, and society. Digital technology has infiltrated the fabric of human society to a degree of indisputable and often life-sustaining dependence.

A 2016 article in Trends in Ecology & Evolution argues that "humans already embrace fusions of biology and technology. We spend most of our waking time communicating through digitally mediated channels... we trust artificial intelligence with our lives through antilock braking in cars and autopilots in planes... With one in three marriages in America beginning online, digital algorithms are also taking a role in human pair bonding and reproduction".

The article further argues that, from the perspective of evolution, several previous Major Transitions in Evolution have transformed life through innovations in information storage and replication (RNA, DNA, multicellularity, and culture and language). In the current stage of life's evolution, the carbon-based biosphere has generated a cognitive system (humans) capable of creating technology that will result in a comparable evolutionary transition.

The digital information created by humans has reached a similar magnitude to biological information in the biosphere. Since the 1980s, the quantity of digital information stored has doubled about every 2.5 years, reaching about 5 zettabytes in 2014 (5×10^21 bytes).[103]

In biological terms, there are 7.2 billion humans on the planet, each having a genome of 6.2 billion nucleotides. Since one byte can encode four nucleotide pairs, the individual genomes of every human on the planet could be encoded by approximately 1×10^19 bytes. The digital realm stored 500 times more information than this in 2014. The total amount of DNA contained in all of the cells on Earth is estimated to be about 5.3×10^37 base pairs, equivalent to 1.325×10^37 bytes of information.

If growth in digital storage continues at its current rate of 30-38% compound annual growth,[47] it will rival the total information content contained in all of the DNA in all of the cells on Earth in about 110 years. This would represent a doubling of the amount of information stored in the biosphere across a total time period of just 150 years.[102]
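
A back-of-the-envelope check of that projection, using only the figures quoted above; the roughly 110-year horizon corresponds to the upper end of the 30-38% growth range.

```python
import math

digital_2014 = 5e21        # bytes of digital information stored in 2014 (figure quoted above)
biosphere_dna = 1.325e37   # byte-equivalent of all DNA in all cells on Earth (figure quoted above)

for cagr in (0.30, 0.38):  # quoted compound annual growth range for digital storage
    years = math.log(biosphere_dna / digital_2014) / math.log(1 + cagr)
    print(f"at {cagr:.0%} growth per year, digital storage matches biosphere DNA in ~{years:.0f} years")
```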

In February 2009, under the auspices of the Association for the Advancement of Artificial Intelligence (AAAI), Eric Horvitz chaired a meeting of leading computer scientists, artificial intelligence researchers and roboticists at Asilomar in Pacific Grove, California. The goal was to discuss the potential impact of the hypothetical possibility that robots could become self-sufficient and able to make their own decisions. They discussed the extent to which computers and robots might be able to acquire autonomy, and to what degree they could use such abilities to pose threats or hazards.[104]

Some machines are programmed with various forms of semi-autonomy, including the ability to locate their own power sources and choose targets to attack with weapons. Also, some computer viruses can evade elimination and, according to scientists in attendance, could therefore be said to have reached a "cockroach" stage of machine intelligence. The conference attendees noted that self-awareness as depicted in science-fiction is probably unlikely, but that other potential hazards and pitfalls exist.[104]

Frank S. Robinson predicts that once humans achieve a machine with the intelligence of a human, scientific and technological problems will be tackled and solved with brainpower far superior to that of humans. He notes that artificial systems are able to share data more directly than humans, and predicts that this would result in a global network of super-intelligence that would dwarf human capability.[105] Robinson also discusses how vastly different the future would potentially look after such an intelligence explosion. One example is solar energy: the Earth receives vastly more solar energy than humanity captures, so capturing more of it would hold vast promise for civilizational growth.

In a hard takeoff scenario, an AGI rapidly self-improves, "taking control" of the world (perhaps in a matter of hours), too quickly for significant human-initiated error correction or for a gradual tuning of the AGI's goals. In a soft takeoff scenario, AGI still becomes far more powerful than humanity, but at a human-like pace (perhaps on the order of decades), on a timescale where ongoing human interaction and correction can effectively steer the AGI's development.[107][108]

Ramez Naam argues against a hard takeoff. He has pointed out that we already see recursive self-improvement by superintelligences, such as corporations. Intel, for example, has "the collective brainpower of tens of thousands of humans and probably millions of CPU cores to... design better CPUs!" However, this has not led to a hard takeoff; rather, it has led to a soft takeoff in the form of Moore's law.[109] Naam further points out that the computational complexity of higher intelligence may be much greater than linear, such that "creating a mind of intelligence 2 is probably more than twice as hard as creating a mind of intelligence 1."[110]

J. Storrs Hall believes that "many of the more commonly seen scenarios for overnight hard takeoff are circular - they seem to assume hyperhuman capabilities at the starting point of the self-improvement process" in order for an AI to be able to make the dramatic, domain-general improvements required for takeoff. Hall suggests that rather than recursively self-improving its hardware, software, and infrastructure all on its own, a fledgling AI would be better off specializing in one area where it was most effective and then buying the remaining components on the marketplace, because the quality of products on the marketplace continually improves, and the AI would have a hard time keeping up with the cutting-edge technology used by the rest of the world.[111]

Ben Goertzel agrees with Hall's suggestion that a new human-level AI would do well to use its intelligence to accumulate wealth. The AI's talents might inspire companies and governments to disperse its software throughout society. Goertzel is skeptical of a hard five-minute takeoff but speculates that a takeoff from human to superhuman level on the order of five years is reasonable. Goertzel refers to this scenario as a "semihard takeoff".[112]

Max More disagrees, arguing that if there were only a few superfast human-level AIs, that they would not radically change the world, as they would still depend on other people to get things done and would still have human cognitive constraints. Even if all superfast AIs worked on intelligence augmentation, it is unclear why they would do better in a discontinuous way than existing human cognitive scientists at producing super-human intelligence, although the rate of progress would increase. More further argues that a superintelligence would not transform the world overnight: a superintelligence would need to engage with existing, slow human systems to accomplish physical impacts on the world. "The need for collaboration, for organization, and for putting ideas into physical changes will ensure that all the old rules are not thrown out overnight or even within years."[113]

Drexler (1986), one of the founders of nanotechnology, postulates cell repair devices, including ones operating within cells and utilizing as yet hypothetical biological machines.[114] According to Richard Feynman, it was his former graduate student and collaborator Albert Hibbs who originally suggested to him (circa 1959) the idea of a medical use for Feynman's theoretical micromachines. Hibbs suggested that certain repair machines might one day be reduced in size to the point that it would, in theory, be possible to (as Feynman put it) "swallow the doctor". The idea was incorporated into Feynman's 1959 essay There's Plenty of Room at the Bottom.[115]

Moravec (1988)[31] predicts the possibility of "uploading" a human mind into a human-like robot, achieving quasi-immortality through extreme longevity via transfer of the human mind between successive new robots as the old ones wear out; beyond that, he predicts a later exponential acceleration of the subjective experience of time, leading to a subjective sense of immortality.

Kurzweil (2005) suggests that medical advances would allow people to protect their bodies from the effects of aging, making life expectancy limitless. Kurzweil argues that technological advances in medicine would allow us to continuously repair and replace defective components in our bodies, prolonging life to an undetermined age.[116] Kurzweil further buttresses his argument by discussing current bio-engineering advances. Kurzweil suggests somatic gene therapy: after synthetic viruses with specific genetic information have been created, the next step would be to apply this technology to gene therapy, replacing human DNA with synthesized genes.[117]

Beyond merely extending the operational life of the physical body, Jaron Lanier argues for a form of immortality called "Digital Ascension" that involves "people dying in the flesh and being uploaded into a computer and remaining conscious."[118]

A paper by Mahendra Prasad, published in AI Magazine, asserts that the 18th-century mathematician Marquis de Condorcet was the first person to hypothesize and mathematically model an intelligence explosion and its effects on humanity.[119]

An early description of the idea was made in John W. Campbell's 1932 short story "The Last Evolution".

In his 1958 obituary for John von Neumann, Ulam recalled a conversation with von Neumann about the "ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue."[6]

In 1965, Good wrote his essay postulating an "intelligence explosion" of recursive self-improvement of a machine intelligence.[18][19]

In 1977, Hans Moravec wrote an article with unclear publishing status where he envisioned a development of self-improving thinking machines, a creation of "super-consciousness, the synthesis of terrestrial life, and perhaps jovian and martian life as well, constantly improving and extending itself, spreading outwards from the solar system, converting non-life into mind."[120][121] The article describes the human mind uploading later covered in Moravec (1988). The machines are going to reach human level and then improve themselves beyond that ("Most significantly of all, they [the machines] can be put to work as programmers and engineers, with the task of optimizing the software and hardware which make them what they are. The successive generations of machines produced this way will be increasingly smarter and more cost effective.") Humans will no longer be needed, and will be overtaken by the machines: "In the long run the sheer physical inability of humans to keep up with these rapidly evolving progeny of our minds will ensure that the ratio of people to machines approaches zero, and that a direct descendant of our culture, but not our genes, inherits the universe." While the word "singularity" is not used, the notion of human-level thinking machines thereafter improving themselves beyond human level is there. There is no intelligence explosion in the sense of a very rapid intelligence increase once human equivalence is reached. An updated version of the article was published in 1979 in Analog Science Fiction and Fact.[122][121]

In 1981, Stanisław Lem published his science fiction novel Golem XIV. It describes a military AI computer (Golem XIV) who obtains consciousness and starts to increase his own intelligence, moving towards personal technological singularity. Golem XIV was originally created to aid its builders in fighting wars, but as its intelligence advances to a much higher level than that of humans, it stops being interested in the military requirements because it finds them lacking internal logical consistency.

In 1983, Vernor Vinge addressed Good's intelligence explosion in print in the January 1983 issue of Omni magazine. In this op-ed piece, Vinge seems to have been the first to use the term "singularity" (although not "technological singularity") in a way that was specifically tied to the creation of intelligent machines:[8][121]

We will soon create intelligences greater than our own. When this happens, human history will have reached a kind of singularity, an intellectual transition as impenetrable as the knotted space-time at the center of a black hole, and the world will pass far beyond our understanding. This singularity, I believe, already haunts a number of science-fiction writers. It makes realistic extrapolation to an interstellar future impossible. To write a story set more than a century hence, one needs a nuclear war in between ... so that the world remains intelligible.

In 1985, in "The Time Scale of Artificial Intelligence", artificial intelligence researcher Ray Solomonoff articulated mathematically the related notion of what he called an "infinity point": if a research community of human-level self-improving AIs takes four years to double its own speed, then two years, then one year, and so on, its capabilities increase infinitely in finite time.[7][123]

In 1986, Vernor Vinge published Marooned in Realtime, a science-fiction novel where a few remaining humans traveling forward in the future have survived an unknown extinction event that might well be a singularity. In a short afterword, the author states that an actual technological singularity would not be the end of the human species: "of course it seems very unlikely that the Singularity would be a clean vanishing of the human race. (On the other hand, such a vanishing is the timelike analog of the silence we find all across the sky.)".[124][125]

In 1988, Vinge used the phrase "technological singularity" (including "technological") in the short story collection Threats and Other Promises, writing in the introduction to his story "The Whirligig of Time" (p. 72): "Barring a worldwide catastrophe, I believe that technology will achieve our wildest dreams, and soon. When we raise our own intelligence and that of our creations, we are no longer in a world of human-sized characters. At that point we have fallen into a technological 'black hole,' a technological singularity."[126]

In 1988, Hans Moravec published Mind Children,[31] in which he predicted human-level intelligence in supercomputers by 2010, self-improving intelligent machines far surpassing human intelligence later, human mind uploading into human-like robots later, intelligent machines leaving humans behind, and space colonization. He did not mention "singularity", though, and he did not speak of a rapid explosion of intelligence immediately after the human level is achieved. Nonetheless, the overall singularity tenor is there in predicting both human-level artificial intelligence and further artificial intelligence far surpassing humans later.

Vinge's 1993 article "The Coming Technological Singularity: How to Survive in the Post-Human Era",[4] spread widely on the internet and helped to popularize the idea.[127] This article contains the statement, "Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended." Vinge argues that science-fiction authors cannot write realistic post-singularity characters who surpass the human intellect, as the thoughts of such an intellect would be beyond the ability of humans to express.[4]

Minsky's 1994 article says robots will "inherit the Earth", possibly with the use of nanotechnology, and proposes to think of robots as human "mind children", drawing the analogy from Moravec. The rhetorical effect of that analogy is that if humans are fine to pass the world to their biological children, they should be equally fine to pass it to robots, their "mind" children. As per Minsky, 'we could design our "mind-children" to think a million times faster than we do. To such a being, half a minute might seem as long as one of our years, and each hour as long as an entire human lifetime.' The feature of the singularity present in Minsky is the development of superhuman artificial intelligence ("million times faster"), but there is no talk of sudden intelligence explosion, self-improving thinking machines or unpredictability beyond any specific event and the word "singularity" is not used.[128]

Tipler's 1994 book The Physics of Immortality predicts a future in which superintelligent machines will build enormously powerful computers, people will be "emulated" in computers, life will reach every galaxy, and people will achieve immortality when they reach the Omega Point.[129] There is no talk of a Vingean "singularity" or sudden intelligence explosion, but intelligence much greater than human is there, as well as immortality.

In 1996, Yudkowsky predicted a singularity by 2021.[20] His version of the singularity involves an intelligence explosion: once AIs are doing the research to improve themselves, speed doubles after 2 years, then 1 year, then 6 months, then 3 months, then 1.5 months, and after more iterations the "singularity" is reached.[20] This construction implies that the speed reaches infinity in finite time.

In 2000, Bill Joy, a prominent technologist and a co-founder of Sun Microsystems, voiced concern over the potential dangers of robotics, genetic engineering, and nanotechnology.[52]

In 2005, Kurzweil published The Singularity Is Near. Kurzweil's publicity campaign included an appearance on The Daily Show with Jon Stewart.[130]

From 2006 to 2012, the annual Singularity Summit conference was organized by the Machine Intelligence Research Institute, founded by Eliezer Yudkowsky.

In 2007, Eliezer Yudkowsky suggested that many of the varied definitions that have been assigned to "singularity" are mutually incompatible rather than mutually supporting.[26][131] For example, Kurzweil extrapolates current technological trajectories past the arrival of self-improving AI or superhuman intelligence, which Yudkowsky argues represents a tension with both I. J. Good's proposed discontinuous upswing in intelligence and Vinge's thesis on unpredictability.[26]

In 2009, Kurzweil and X-Prize founder Peter Diamandis announced the establishment of Singularity University, a nonaccredited private institute whose stated mission is "to educate, inspire and empower leaders to apply exponential technologies to address humanity's grand challenges."[132] Funded by Google, Autodesk, ePlanet Ventures, and a group of technology industry leaders, Singularity University is based at NASA's Ames Research Center in Mountain View, California. The not-for-profit organization runs an annual ten-week graduate program during summer that covers ten different technology and allied tracks, and a series of executive programs throughout the year.

In 2007, the Joint Economic Committee of the United States Congress released a report about the future of nanotechnology. It predicts significant technological and political changes in the mid-term future, including possible technological singularity.[133][134][135]

Continue reading here:

Technological singularity - Wikipedia

Singularity | technology | Britannica

singularity, theoretical condition that could arrive in the near future when a synthesis of several powerful new technologies will radically change the realities in which we find ourselves in an unpredictable manner. Most notably, the singularity would involve computer programs becoming so advanced that artificial intelligence transcends human intelligence, potentially erasing the boundary between humanity and computers. Often, nanotechnology is included as one of the key technologies that will make the singularity happen.

In 1993 the magazine Whole Earth Review published an article titled "Technological Singularity" by Vernor Vinge, a computer scientist and science fiction author. Vinge imagined that future information networks and human-machine interfaces would lead to novel conditions with new qualities: "a new reality rules." But there was a trick to knowing the singularity. Even if one could know that it was imminent, one could not know what it would be like with any specificity. This condition will be, by definition, so thoroughly transcendent that we cannot imagine what it will be like. There was an opaque wall across the future, and the new era is simply too different to fit into the classical frame of good and evil. It could be amazing or apocalyptic, but we cannot know the details.

Since that time, the idea of the singularity has been expanded to accommodate numerous visions of apocalyptic changes and technological salvation, not limited to Vinge's parameters of information systems. One version championed by the inventor and visionary Ray Kurzweil emphasizes biology, cryonics, and medicine (including nanomedicine): in the future we will have the medical tools to banish disease and disease-related death. Another is represented in the writings of the sociologist William Sims Bainbridge, who describes a promise of cyberimmortality, when we will be able to experience a spiritual eternity that persists long after our bodies have decayed, by uploading digital records of our thoughts and feelings into perpetual storage systems. This variation circles back to Vinge's original vision of a singularity driven by information systems. Cyberimmortality will work perfectly if servers never crash, power systems never fail, and some people in later generations have plenty of time to examine the digital records of our own thoughts and feelings.

One can also find a less radical expression of the singularity in Converging Technologies for Improving Human Performance. This 2003 collection tacitly accepts the inevitability of so-called NBIC convergence, that is, the near-future synthesis of nanotech, biotech, infotech, and cognitive science. Because this volume was sponsored by the U.S. National Science Foundation and edited by two of its officers, Mihail Roco and Bainbridge, some saw it as a semiofficial government endorsement of expectations of the singularity.

Unprecedented new technologies will continue to arise, and perhaps they will synthesize with each other, but it is not inevitable that the changes they create will be apocalyptic. The idea of the singularity is a powerful inspiration for people who want technology to deliver a new spiritual and material reality within our lifetimes. This vision is sufficiently flexible that each person who expects the singularity can customize it to his or her own preferences.

Read more from the original source:

Singularity | technology | Britannica

Singularity (2017) – IMDb

I gave it a 2, but it's more of a 4/10 movie... maybe. I lowered the rating because the producers/owners of the movie are using one of many sites that give you free positive ratings, which is illegal. They should've spent that budget on making the movie better! Just google "buy IMDb ratings" and many sites will pop up offering this service. The real rating of this movie is more like 4/10 at the MOST!

OK acting by some... bad from some others, and of course one good actor.

The issue I have with the movie is that it is pointless, slow and predictable. I won't discuss the bad graphics and SFX, as the budget certainly wasn't big.

The story could've had so much more to it. It's just sad they decided to dumb down the script and make it so plain. It is a very slow-paced movie, and not in a good way: I got bored a third of the way in and am not sure how I got through the rest. I kept expecting it to pick up, and it never did. The idea behind the movie is nice, but it was never developed at all.

Do NOT believe the high rating (currently 8), as those are fake ratings; there are many sites that sell them. Anyone giving this more than 5 or 6 did not watch the movie at all. I'd say any reviews of 2-5 can be considered real.

The rest is here:

Singularity (2017) - IMDb

Singularity – Microsoft Research

Singularity was a multi-year research project focused on the construction of dependable systems through innovation in the areas of systems, languages, and tools. We built a research operating system prototype (called Singularity), extended programming languages, and developed new techniques and tools for specifying and verifying program behavior.

Advances in languages, compilers, and tools open the possibility of significantly improving software. For example, Singularity uses type-safe languages and an abstract instruction set to enable what we call Software Isolated Processes (SIPs). SIPs provide the strong isolation guarantees of OS processes (isolated object space, separate GCs, separate runtimes) without the overhead of hardware-enforced protection domains. In the current Singularity prototype SIPs are extremely cheap; they run in ring 0 in the kernel's address space.

Singularity uses these advances to build more reliable systems and applications. For example, because SIPs are so cheap to create and enforce, Singularity runs each program, device driver, or system extension in its own SIP. SIPs are not allowed to share memory or modify their own code. As a result, we can make strong reliability guarantees about the code running in a SIP. We can verify much broader properties about a SIP at compile or install time than can be done for code running in traditional OS processes. Broader application of static verification is critical to predicting system behavior and providing users with strong guarantees about reliability.
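
The Singularity prototype itself was written in Sing#, and SIP isolation is enforced by the language and its verifier rather than by anything shown here. Still, as a loose conceptual analogy (not Microsoft's design or API), the programming model resembles processes that share no memory and interact only over explicit message channels:

```python
# Conceptual analogy only: two workers with no shared memory that communicate
# solely through an explicit channel, loosely echoing the "no shared state,
# message passing only" discipline that SIPs enforce at the language level.
from multiprocessing import Process, Pipe

def client(channel) -> None:
    # This process has its own address space; the only way to affect the other
    # process is to send it a message.
    channel.send({"request": "read_block", "block": 42})
    print("client got reply:", channel.recv())
    channel.close()

def service(channel) -> None:
    request = channel.recv()
    channel.send({"status": "ok", "echo": request})
    channel.close()

if __name__ == "__main__":
    client_end, service_end = Pipe()
    workers = [Process(target=client, args=(client_end,)),
               Process(target=service, args=(service_end,))]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
```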

Go here to see the original:

Singularity - Microsoft Research

Singularity Group to Host SingularityU India Summit on November 14 and 15 in Bangalore – Devdiscourse

The conference will bring together hundreds of leaders from India, Southeast Asia, Australia, and New Zealand to discuss exponential technologies as a tool to shape the future and their applications for individuals, organizations, and society.

Bangalore, Karnataka, India (NewsVoir): Singularity Group, a global impact organization that helps leaders leverage exponential technology to shape businesses and societies in the years ahead, today announced the SingularityU India Summit, Re:Imagine the Future. Leaders from the world of technology, business, science and entrepreneurship will attend the event on November 14 and 15, 2022 at the Conrad Hilton in Bangalore. Machani Robotics will serve as the Diamond Sponsor, with additional sponsorship opportunities still available for corporations, government organizations and venture capital groups interested in exponential technologies as a tool to shape the future.

The two-day event, powered by HeroVired and in association with INK Talks and Machani Robotics, will bring together innovative leaders and institutions and create a plan to pole vault into the future. Over the course of two days, more than 20 experts will cover diverse topics including the future of work, finance, education, and AI, empowering a network of globally connected changemakers and leaders across India. Attendees can participate in master classes, workshops, and networking sessions discussing the future of work, electric mobility, education, cleantech and more.

Speakers include: Rob Nail, serial entrepreneur, Associate Founder, faculty member and former CEO of Singularity University; Shuo Chen, Singularity Expert: Entrepreneurship and Blockchain; Taddy Bletcher, Singularity Expert: Education; Lakshmi Pratury, Co-founder & CEO of INK Talks; Prerna Jhunjhunwala, Founder of Creative Galileo; and others. The Summit will host a second stage for Indian entrepreneurs to showcase their companies and the impact they are making on the Indian startup community and the larger ecosystem of the country.

"In this post-pandemic world, Singularity will be your guide as we work together to accelerate our journey towards a more equitable and sustainable future for us all," said Dermot Mee, COO of Singularity Group. "If our ambition is to thrive, we need to collectively reimagine and design the future by shifting our mindset to think exponentially. We need to be the dreamers who learn to turn our dreams into reality."

For ticketing information and registration, please visit http://www.singularityuindia.com.

About Singularity Group

Singularity Group is a global impact organization that looks into the future to help leaders better understand how exponential technology will shape businesses and societies in the years ahead. Through a deeper understanding of the accelerated pace of change and the role that technology plays in it, these leaders create tremendous positive impact that improves the wellbeing of people and the health of the planet. Over the past decade, Singularity has worked with more than 75,000 leaders drawn from corporations, nonprofits, governments, investors and academia. With 250,000 impact-minded innovators across the Singularity network, over 125 chapters and partners across six continents and a strong digital presence, Singularity Group reaches millions of people each month. The organization has launched over 5,000 social impact initiatives, and its alumni have started more than 200 companies.

For more information, visit su.org.

(This story has not been edited by Devdiscourse staff and is auto-generated from a syndicated feed.)

See the original post:

Singularity Group to Host SingularityU India Summit on November 14 and 15 in Bangalore - Devdiscourse

We need to manage AI better as we are approaching the Creative Singularity – RedShark News

David Shapton on why we can't ignore AI anymore and how, without active management, AI will be a threat to artists and creators not an opportunity.

If you've read my columns for the last ten years, you'll know I'm the opposite of a Luddite. I embrace new technology because I see it as a means to change the world for good. But whatever new tools technology brings us, it's how we use them that will determine their net effect on the level of happiness and well-being in our society.

And - to be perfectly clear - I see AI the same way. We're suddenly starting to see AI doing things for us that are supposed to be impossible: not just difficult, but properly impossible.

Like being able to "unmix" a musical recording. Want the dry, isolated lead vocal without the cacophony of the musical accompaniment? There's a web service for that.

Need to extract a person's portrait from a photograph with a distracting or unattractive background? A new version of the iPhone operating system will instantly do that for you, even with animals, objects, and human faces.

Can't read Welsh, Albanian or Icelandic? There's a translation app for that.

Need a background for your film that you're shooting on a virtual set? Just say the words "Mythical world populated by dragons and slightly scary looking tall people with mountains in the background and a spooky castle in the mid-distance", and... there's your background.

Amazing. What a giant leap forward. AI is doing things that shouldn't even be possible. The problem is that it's doing something that artists usually get paid to do. And that, at least, should provoke a reaction from us.

It will raise fundamental questions about who we are and what we can and should do. And it's not as clear-cut as you might think. We have to ask about skill: not just skill as in expert-level muscle memory, but the talent for assessing a task or project, organising it and delivering a pleasing outcome without being ridiculously expensive (for example).

Let's step back for a minute and look at why this is becoming such an issue.

After several false starts, AI is taking off, and it's happening at a pace that surprises people. The key to understanding this is that word: "surprise". That's because we're accustomed to a world where we can see things coming. So, even though nobody can predict the future, we can identify trends. If you keep up with the news, then the chances are that nothing much will surprise you, especially if it's news about your own field of expertise.

But imagine a world where we can no longer make predictions based on trends. Where not even experts can know what's coming next. That's the stage we're at with AI, and it is a potential problem, as well as being a breathtaking display of technical virtuosity.

It's beginning to feel like we're approaching some sort of Singularity.

For those unfamiliar with the technological Singularity, it's a concept that Ray Kurzweil brought to the surface in his 2005 book The Singularity Is Near. There are several mutually compatible definitions of Singularity. I think the most useful one says that the technological Singularity is when the rate of progress is so steep that it appears to be a vertical line from our perspective. In other words, you get infinite progress in absolutely no time at all.

That's not likely to happen just yet, but we're already starting to experience a rate of progress steeper than we can readily comprehend.

One effect of that is that we start to be surprised by the rate of progress. Even experts are beginning to be surprised by AI. I'm not an expert, but I'm reasonably well informed, and I am extremely, totally surprised by the leaps that AI is making.

If you take Moore's law in its prime, progress was around 40% per year. Effectively, that's like compound interest. Add that into the mix each year, and you arrive at the sort of progress in computers that we've seen over the last four decades or so. Remember that percentage while I tell you that last year, Nvidia - arguably the leading developer of co-processors for AI - said that AI is developing at the rate of 116% per year. That's enough to give us a million-fold increase in ten years. On top of that, AI is capable of improving itself - it's "intelligent", after all. (But let's not be too picky about the definition of "intelligence" here!).
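To make the compound-interest framing concrete, here is a minimal Python sketch of how a steady annual growth rate multiplies out over time. The 40% figure is the Moore's-law-era rate quoted above; the helper function itself is purely illustrative.

# Cumulative improvement from a steady annual growth rate,
# compounded the same way as compound interest.
def cumulative_gain(annual_rate_percent: float, years: int) -> float:
    return (1 + annual_rate_percent / 100) ** years

# Moore's law in its prime: roughly 40% per year.
print(f"40%/yr over 40 years: {cumulative_gain(40, 40):,.0f}x")

At roughly 40% a year, four decades of compounding works out to a factor in the hundreds of thousands, which is the scale of improvement in computers the column is gesturing at.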

I remember talking to some digital video engineers at JVC around the start of this century. I was suggesting an approach to video encoding that would be pretty radical. My engineer friends told me that you'd need a thousand-fold increase in technology to do that. It was a figure plucked out of the air but from an informed viewpoint. They meant, "it won't happen in our lifetimes".

But that thousand-fold increase has happened. Except that it's more than a million-fold if you include AI in the mix - and it would be negligent not to.

So, our handy instrument for detecting a Singularity is "surprised experts".

I was surprised by the quality of images from text-to-image programs like Stable Diffusion and by the AI's sheer virtuosity. But what surprised me even more - and I could have used the word "shocked" here - was that, quite spontaneously, friends of friends and colleagues started to use the AI images in places where they would previously have employed an artist or designer. Web pages, backdrops for virtual production, brochures, and probably loads more uses that I haven't heard about yet. It's happening. AI is taking our jobs. OMG!

But this isn't the end of it. Let's not go down the rabbit hole of arguments about sentient machines and AI "wanting" to take over the world. We're not quite there yet. But we have arrived at a critical point where we need to take a deep and measured look at how we manage AI in the creative sphere.

AI can automate tedious processes; it can speed up repetitive tasks. It can match colours in previously unmatched shots. It can up-res and down-res. It can create fantasy backgrounds and photorealistic foregrounds.

So we have to decide: what will our relationship with AI be like? And it won't be easy. With the AI landscape changing so quickly, there is no informed answer. So there isn't a definitive way forward.

But the future for artists and creators is different this year from how it looked last year. We can't ignore AI, or it might end up ignoring us. Or, more likely, our clients will use AI to bypass us.

But AI will never be us. The new techniques appear to be extraordinarily good at identifying the essence of styles and themes. But will they ever be creative? Or can they only be derivative?

There will be more questions than answers. Meanwhile, let's not be Luddites. If we can manage AI, it can do great work. It might become a new and expressive canvas that takes our imagination further than before. Without our input, AI might only ever be a soul-less facsimile of art: devoid of emotion and wonder.

We may not know how it will turn out, but one thing is certain: we can't ignore it.

Read more:

We need to manage AI better as we are approaching the Creative Singularity - RedShark News

New Bayonetta 3 Trailer Reveals An In-Universe Singularity, And Lots Of Witches – Gameranx

Neither angel nor demon, but a secret third thing.

PlatinumGames and Nintendo have shared a new trailer for Bayonetta 3, and it's quite a doozy.

While the sequence of events in the trailer is deliberately cropped and assembled so that it doesn't quite make sense and viewers are left guessing, we can discuss some of the elements PlatinumGames has dropped as small teasers of what to expect.

For one, Cereza talks quite a bit about a singularity that she and the other Umbran Witches need to stop, or defeat. In scientific language, a gravitational singularity is a point where gravity becomes so intense that spacetime itself breaks down; such a situation, which literally breaks spacetime, cannot be said to have a where or a when. When fiction brings up the idea of a singularity, it usually references or riffs on some variation of this scenario.

This matches what GameInformer has reportedly revealed in its Bayonetta 3 cover story. In that issue, the magazine confirms that this Bayonetta game takes place in a multiverse, something that was heavily hinted at in prior trailers as well.

The trailer also mentions an Arch-Eve falling. This is an entirely new character that hasn't been mentioned before, at least not by this name. Could this be another alter ego of antagonist Baldr? Notably, Baldr isn't seen or mentioned in this trailer either, but that doesn't mean he isn't in the game at all. Another character actually refers to Cereza as Arch-Eve Origin, which certainly deepens the mystery. Other things they name-drop without explaining are an Alphaverse, which is apparently where they can stop the singularity, and Chaos Gears, which you will apparently need to collect in the game.

But now we should talk about the many unnamed characters appearing in this trailer. There's a spider-based Umbran Witch who makes reference to having literal fish to fry. There's a black-skinned Witch who seems to wear an Egyptian-inspired outfit. And there's a fun-looking masked Witch who crosses swords with Cereza. There are more familiar faces too: a seemingly older Jeanne who's dragging a mysterious doctor along with her, and Baal, the Empress of the Fathoms. This is the large toad demon that's been around since Bayonetta 2, and her fabulous self returns, seemingly to match up with a new Bayonetta, or to join her for the first time.

But most interesting is the prominence of the newest Witch in town, Viola. She apparently gets tasked with taking care of Luka for Cereza at some point, which also implies we get to play as her quite a bit somewhere in the game. Viola even sees Cereza die in battle against a mysterious new enemy. Neither an angel nor a devil, but a secret third thing. Also not a human, so this character really is a genuine mystery.

All mysteries will definitely be revealed soon: Bayonetta 3 releases exclusively on the Nintendo Switch on October 28, 2022. You can watch the trailer and read more of our coverage of Bayonetta 3 below.

Bayonetta 3 Gets 7-Minute Gameplay Video Featuring Viola

Bayonetta 3 Gets New Story Details and Gameplay Trailer

Source: YouTube, Reddit

See the original post here:

New Bayonetta 3 Trailer Reveals An In-Universe Singularity, And Lots Of Witches - Gameranx

This Week’s Awesome Tech Stories From Around the Web (Through October 15) – Singularity Hub

9 Astonishing Ways That Living Standards Have Improved Around the World
Tony Morley | Big Think
"Over the last 200 years, the lives of average people in every country have been radically transformed and improved. In our modern day, we are living longer and are more prosperous than ever before, in both high-income and low-income countries. And while progress forward is by no means progress completed nor a guarantee of progress to come, the remarkable improvements in global living standards serve, not as a high-water mark or finish line, but rather as a source of inspiration and hope."

Human Brain Cells Transplanted Into Baby Rats' Brains Grow and Form Connections
Jessica Hamzelou | MIT Technology Review
"These animals could be used to learn more about human neuropsychiatric disorders, say the researchers behind the work. 'It's an important step forward in progress into [understanding and treating] brain diseases,' says Julian Savulescu, a bioethicist at the National University of Singapore, who was not involved in the study. But the development also raises ethical questions, he says, particularly surrounding what it means to humanize animals."

Fake Joe Rogan Interviews Fake Steve Jobs in an AI-powered Podcast
Benj Edwards | Ars Technica
"Whether it's legal to use Jobs' or Rogan's vocal likenesses in this manner, particularly to promote a commercial product, remains to be seen. And despite the PR-stunt nature of the podcast, the concept of entirely fictional celebrity podcasts got our attention. As voice synthesis becomes more widespread and potentially undetectable, we're looking at a future where media artifacts from any era will likely be completely fluid and malleable, shapable to fit any narrative."

Stoke Space Aims to Build Rapidly Reusable Rocket With a Completely Novel Design
Eric Berger | Ars Technica
"SpaceX had already shown the way on first-stage launch and recovery with the Falcon 9 and its vertical takeoff and landing, so Stoke started with the second stage. Last month, the company started to test-fire its upper-stage engines at a facility in Moses Lake, Washington. The images and video show an intriguing-looking ring with 15 discrete thrusters firing for several seconds. The circular structure is 13 feet in diameter, and this novel-looking design is Stoke's answer to one of the biggest challenges of getting a second stage back from orbit."

Microsoft Brings DALL-E 2 to the Masses With Designer and Image Creator
Kyle Wiggers | TechCrunch
"Seeking to bring OpenAI's tech to an even wider audience, Microsoft is launching Designer, a Canva-like web app that can generate designs for presentations, posters, digital postcards, invitations, graphics and more to share on social media and other channels. Designer, whose announcement leaked repeatedly this spring and summer, leverages user-created content and DALL-E 2 to ideate designs, with drop-downs and text boxes for further customization and personalization."

Can Start-Ups Significantly Lower the Cost of Gene Sequencing?
Roy Furchgott | The New York Times
"'If someone drops the price of sequencing 10-fold, I can sequence 10 times as many people,' [Dr. Bruce D. Gelb] said. 'And you build up your statistical oomph to discover stuff.' The days of statistical oomph, meaning an explosion in the amount of data gleaned from lower-priced tests, appear imminent. Ultima Genomics, a biotech startup, made news at the Advances in Genome Biology and Technology conference in June, unveiling a gene-sequencing machine that it claims can sequence a complete genome for $100."

Meta's VR Headset Harvests Personal Data Right Off Your Face
Khari Johnson | Wired
"Cameras inside the device that track eye and face movements can make an avatar's expressions more realistic, but they raise new privacy questions. Raw images and pictures used to power these features are stored on the headset, processed locally on the device, and deleted after processing, Meta says. Eye-tracking and facial-expression privacy notices the company published this week state that although raw images get deleted, insights gleaned from those images may be processed and stored on Meta servers."

The Case for and Against Cryptocurrency
Tyler Cowen | Big Think
"Cryptocurrency is truly a new idea, and it's rare for society to encounter fundamentally new ideas. Cryptocurrency is well positioned to serve a crucial financial and transactional role as a globalized internet grows to include more of our lives. Crypto enthusiasts espouse grand plans that do not sound realistic, while crypto skeptics fail to appreciate the revolutionary nature of the technology."

The Chinese Surveillance State Proves That the Idea of Privacy Is More Malleable Than You'd Expect
Zeyi Yang | MIT Technology Review
"'How the world should respond to the rise of surveillance states might be one of the most important questions facing global politics at the moment,' Chin says, 'because these technologies really do have the potential to completely alter the way governments interact with and control people.'"

Image Credit: Simone Hutsch / Unsplash

Continued here:

This Week's Awesome Tech Stories From Around the Web (Through October 15) - Singularity Hub

800,000 Neurons in a Dish Learned to Play Pong in Just Five Minutes – Singularity Hub

Scientists just taught hundreds of thousands of neurons in a dish to play Pong. Using a series of strategically timed and placed electrical zaps, the neurons not only learned the game in a virtual environment, but played better over timewith longer rallies and fewer missesshowing a level of adaptation previously thought impossible.

Why? Picture literally taking a chunk of brain tissue, digesting it down to individual neurons and other brain cells, dumping them (gently) onto a plate, and now being able to teach them, outside a living host, to respond and adapt to a new task using electrical zaps alone.

Its not just fun and games. The biological neural network joins its artificial cousin, DeepMinds deep learning algorithms, in a growing pantheon of attempts at deconstructing, reconstructing, and one day mastering a sort of general intelligence based on the human brain.

The brainchild of Australian company Cortical Labs, the entire setup, dubbed DishBrain, is the "first real-time synthetic biological intelligence platform," according to the authors of a paper published this month in Neuron. The setup, smaller than a dessert plate, is extremely sleek. It hooks up isolated neurons with chips that can both record the cells' electrical activity and trigger precise zaps to alter those activities. Similar to brain-machine interfaces, the chips are controlled with sophisticated computer programs, without any human input.

The chips act as a bridge for neurons to link to a virtual world. As a translator for neural activity, they can unite biological electrical data with silicon bits, allowing neurons to respond to a digital game world.

DishBrain is set up to expand to further games and tests. Because the neurons can sense and adapt to the environment and output their results to a computer, they could be used as part of drug screening tests. They could also help neuroscientists better decipher how the brain organizes its activity and learns, and inspire new machine learning methods.

But the ultimate goal, explained Dr. Brett Kagan, chief scientific officer at Cortical Labs, is to help harness the inherent intelligence of living neurons for their superior computing power and low energy consumption. In other words, compared to neuromorphic hardware that mimics neural computation, why not just use the real thing?

"Theoretically, generalized SBI [synthetic biological intelligence] may arrive before artificial general intelligence (AGI) due to the inherent efficiency and evolutionary advantage of biological systems," the authors wrote in their paper.

The DishBrain project started with a simple idea: neurons are incredibly intelligent and adaptable computing machines. Recent studies suggest that each neuron is a supercomputer in itself, with branches once thought passive acting as independent mini-computers. Like people within a community, neurons also have an inherent ability to hook up to diverse neural networks, which dynamically shifts with their environment.

This level of parallel, low-energy computation has long been the inspiration for neuromorphic chips and machine learning algorithms to mimic the natural abilities of the brain. While both have made strides, none have been able to recreate the complexity of a biological neural network.

"From worms to flies to humans, neurons are the starting block for generalized intelligence. So the question was, can we interact with neurons in a way to harness that inherent intelligence?" said Kagan.

Enter DishBrain. Despite its name, the plated neurons and other brain cells are far from an actual brain with consciousness. As for intelligence, the authors define it as the ability to gather information, collate the data, and adjust firing activity (that is, how neurons process the data) in a way that helps adapt towards a goal; for example, rapidly learning to place your hand on the handle of a piping hot pan without searing it on the rim.

The setup starts, true to its name, with a dish. The bottom of each one is covered with a computer chip, an HD-MEA (high-density multielectrode array), that can both record electrical signals and deliver stimulation. Cells, either isolated from the cortex of mouse embryos or derived from human cells, are then laid on top. The dish is bathed in a nutritious fluid for the neurons to grow and thrive. As they mature, they grow from jiggly blobs into spindly shapes with vast networks of sinuous, interweaving branches.

Within two weeks, the neurons from mice self-organized into networks inside their tiny homes, bursting with spontaneous activity. Neurons from human origins (skin cells or other brain cells) took a bit longer, establishing networks in roughly a month or two.

Then came the training. Each chip was controlled by commercially available software, linking it to a computer interface. Using the system to stimulate neurons is similar to providing sensory data, like the signals coming from your eyes as you focus on a moving ball. Recording the neurons' activity captures the outcome, that is, how they would react (if inside a body) as you move your hand to hit the ball. DishBrain was designed so that the two parts integrated in real time: similar to humans playing Pong, the neurons could in theory learn from past misses and adapt their behavior to hit the virtual ball.

Here's how Pong goes. A ball bounces rapidly across the screen, and the player can slide a tiny vertical paddle, which looks like a bold line, up and down. Here, the ball is represented by electrical zaps based on its location on the screen. This essentially translates visual information into electrical data for the biological neural network to process.

The authors then defined distinct regions of the chip for sensation and movement. One region, for example, captures incoming data about the virtual ball's movement. One part of the motor region then moves the virtual paddle up, whereas another moves it down. These assignments were arbitrary, the authors explained, meaning the neurons within needed to adjust their firing to excel at a match.

So how do they learn? If the neurons hit the ball (that is, showed the corresponding type of electrical activity), the team zapped them at that location with the same frequency each time. It's a bit like establishing a habit for the neurons. If they missed the ball, they were instead zapped with electrical noise that disrupted the neural network.

The strategy is based on a learning theory called the free energy principle, explained Kagan. Basically, it supposes that neurons hold beliefs about their surroundings, and adjust and repeat their electrical activity so they can better predict the environment, either changing their beliefs or their behavior.
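To make that closed loop concrete, here is a minimal, purely hypothetical Python sketch of the feedback scheme described above: the ball's position is encoded as stimulation on "sensory" channels, the paddle command is read out from "motor" activity, a hit earns a predictable stimulus, and a miss earns unpredictable noise. The channel counts, frequencies, and simplified game logic are illustrative assumptions, not the actual DishBrain software.

import random

SENSORY_CHANNELS = 8   # hypothetical number of electrode sites encoding ball position
HIT_FREQ_HZ = 100      # assumed frequency of the predictable "reward" stimulus

def encode_ball_position(ball_y: float) -> int:
    """Map the ball's vertical position (0..1) onto a sensory electrode index."""
    return min(int(ball_y * SENSORY_CHANNELS), SENSORY_CHANNELS - 1)

def decode_paddle_move(up_spikes: int, down_spikes: int) -> int:
    """Read the motor regions: more spikes in the 'up' region moves the paddle up."""
    return 1 if up_spikes > down_spikes else -1

def feedback(hit: bool) -> str:
    """Feedback in the spirit of the free energy principle:
    a predictable stimulus for a hit, unpredictable noise for a miss."""
    if hit:
        return f"stimulate hit location at {HIT_FREQ_HZ} Hz (predictable)"
    return "stimulate with random-frequency noise (unpredictable)"

# One illustrative cycle of the loop: sense, act, receive feedback.
ball_y = random.random()
electrode = encode_ball_position(ball_y)
move = decode_paddle_move(up_spikes=12, down_spikes=7)  # made-up spike counts
print(electrode, move, feedback(hit=True))

In the real system the stimulation and recording run continuously in real time; the point of the sketch is only the shape of the loop: encode the game state, read out an action, and make the consequences of a hit more predictable than the consequences of a miss.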

The theory panned out. In just five minutes, both human and mice neurons rapidly improved their gameplay, including better rallies, fewer aces (where the paddle failed to intercept the ball without a single hit), and long gameplays with more than three consecutive hits. Surprisingly, mice neurons learned faster, though eventually they were outperformed by human ones.

The stimulations were critical for their learning. Separate experiments with DishBrain without any electrical feedback performed far worse.

The study is a proof of concept that neurons in a dish can be a sophisticated learning machine, and even exhibit signs of sentience and intelligence, said Kagan. That's not to say they're conscious; rather, they have the ability to adapt to a goal when embodied in a virtual environment.

Cortical Labs isn't the first to test the boundaries of the data processing power of isolated neurons. Back in 2008, Dr. Steve Potter at the Georgia Institute of Technology and his team found that with even just a few dozen electrodes, they could stimulate rat neurons to exhibit signs of learning in a dish.

DishBrain has a leg up with thousands of electrodes compacted in each setup, and the company hopes to tap into its biological power to aid drug development. The system, or its future derivations, could potentially act as a micro-brain surrogate for testing neurological drugs, or gaining insights into the neurocomputation powers of different species or brain regions.

But the long-term vision is a living bio-silicon computer hybrid. "Integrating neurons into digital systems may enable performance infeasible with silicon alone," the authors wrote. Kagan imagines developing biological processing units that weave together the best of both worlds for more efficient computation, and, in the process, shed light on the inner workings of our own minds.

"This is the start of a new frontier in understanding intelligence," said Kagan. "It touches on the fundamental aspects of not only what it means to be human, but what it means to be alive and intelligent at all, to process information and be sentient in an ever-changing, dynamic world."

Image Credit: Cortical Labs

See the original post here:

800,000 Neurons in a Dish Learned to Play Pong in Just Five Minutes - Singularity Hub