Singularity and Sync Partner to Launch Global Impact Challenge Addressing Misinformation, Cyberbullying, and Social … – Morningstar

Singularity and Sync Partner to Launch Global Impact Challenge Addressing Misinformation, Cyberbullying, and Social Disconnection with $250K Prize

PR Newswire


MOUNTAIN VIEW, Calif., May 21, 2024 /PRNewswire/ -- Singularity is thrilled to announce its partnership with Sync to launch a Global Impact Challenge (GIC) aimed at tackling the pressing issues of misinformation, cyberbullying, and social disconnection. This initiative underscores Singularity's commitment to fostering innovative solutions for global challenges through collaborative efforts.

Singularity has conducted 119 Global Impact Challenges worldwide since 2010, spanning over 50 countries and recognizing countless winners. These challenges aim to catalyze breakthrough innovations using technology to tackle pressing global issues.

The Singularity x Sync Global Impact Challenge welcomes applicants from across the globe. The winner, to be announced on Nov. 7, 2024, will receive a $250,000 cash prize and a scholarship opportunity to attend the Singularity Executive Program. The cash prize will be disbursed in installments, with the schedule to be determined later. The Executive Program offers a transformative five-day immersive experience, featuring workshops, group activities, and deep dives into cutting-edge technologies. It equips leaders with the mindset and tools to harness exponential change for personal, organizational, and societal advancement.

The jury tasked with selecting the Challenge winner will include representatives from Singularity, Sync, and distinguished leaders from the entrepreneurship, business, or innovation sectors.

Wadha AlNafjan, Sync lead, emphasizes the urgency of addressing key digital well-being challenges and the importance of such collaboration: "There are many digital well-being challenges that require our urgent focus. Our research indicates that misinformation, social disconnections, and cyberbullying are among the most pressing issues today. In response, Sync is joining forces with Singularity in its mission to educate, inspire, and empower leaders to imagine and create breakthroughs powered by exponential technologies. Together, we are launching the Global Impact Challenge to inspire innovators and entrepreneurs to address these critical concerns."

Neil Sogard, vice president of strategy and special projects at Singularity, highlights the importance of leveraging technology to improve lives globally: "At Singularity, we're confident that technology will continue to improve the lives of billions of people around the world. But with that level of impact, it's important we leverage our network of world-class experts and innovators to partner with organizations like Sync to strengthen the relationship between technology usage and sustained well-being for all global citizens."

Singularity is the leader in educating, inspiring, and empowering leaders to imagine and create breakthroughs powered by exponential technologies.

Through immersive learning programs and experiences focused on the convergence and application of exponential technologies, Singularity teaches leaders from around the globe to shift their mindset, drive innovation, and transform their organization exponentially.

Founded in Silicon Valley in 2008, Singularity has inspired over 200,000 leaders in over 100 countries from industry, academia, and government to join us on our mission of creating a future of abundance.


About Sync

Sync is a digital well-being initiative that aims to raise awareness by translating research-based understanding of technology's growing presence in our lives into audience-friendly materials and tools, with a vision of a world where we are all in control of our digital lives.


View original content to download multimedia: https://www.prnewswire.com/news-releases/singularity-and-sync-partner-to-launch-global-impact-challenge-addressing-misinformation-cyberbullying-and-social-disconnection-with-250k-prize-302151828.html

Read more:

Singularity and Sync Partner to Launch Global Impact Challenge Addressing Misinformation, Cyberbullying, and Social ... - Morningstar

Black hole singularities defy physics. New research could finally do away with them. – Livescience.com

Black holes are some of the most enigmatic objects in the universe, capable of deforming the fabric of space around them so violently that not even light can escape their gravitational grip. But it turns out that much of what scientists know about these mysterious objects could be wrong.

According to new research, published in April in the journal Physical Review D, black holes could actually be entirely different celestial entities known as gravastars.

"Gravastars are hypothetical astronomical objects that were introduced [in 2001] as alternatives to black holes," study co-author Joo Lus Rosa, a professor of physics at the University of Gdask in Poland, told Live Science in an email. "They can be interpreted as stars made of vacuum energy or dark energy: the same type of energy that propels the accelerated expansion of the universe."

Karl Schwarzschild, a German physicist and astronomer, first predicted black holes in 1915, based on calculations using Albert Einstein's general theory of relativity.

Over the years, astronomical observations have seemingly confirmed the existence of objects resembling black holes. However, Schwarzschild's description of these space bodies has some shortcomings.

In particular, the center of a black hole is predicted to be a point of infinitely high density, called a singularity, where all of the black hole's mass is concentrated. But fundamental physics teaches us that infinities do not exist, and their appearance in any theory signals that the theory is inaccurate or incomplete.
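
For context, here are the standard textbook relations behind that claim; these are general formulas for a non-rotating mass, not equations taken from the new study. The event horizon of a black hole of mass M sits at the Schwarzschild radius, and compressing all of that mass toward a central point drives the average density inside a shrinking radius r without bound:

    r_s = \frac{2GM}{c^{2}}, \qquad \bar{\rho}(r) = \frac{M}{\tfrac{4}{3}\pi r^{3}} \to \infty \quad \text{as } r \to 0

It is this divergence at r = 0, rather than the horizon itself, that alternative models such as the gravastar are designed to avoid.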

"These problems indicate that something is either wrong or incomplete in the black hole model, and that the development of alternative models is necessary," Rosa said. "The gravastar is one of many alternative models proposed. The main advantage of gravastars is that they do not have singularities."


Related: Newfound 'glitch' in Einstein's relativity could rewrite the rules of the universe, study suggests

Like ordinary black holes, gravastars should arise at the final stage of the evolution of massive stars, when the energy released during thermonuclear combustion of the matter inside them is no longer enough to overcome the force of gravity, and the star collapses into a much denser object. But in contrast to black holes, gravastars are not expected to have any singularities and are thought to be thin spheres of matter whose stability is maintained by the dark energy contained within them.

To find out if gravastars are viable alternatives to singular black holes, Rosa and his colleagues examined the interaction of particles and radiation with these hypothetical objects.

Using Einstein's theory, the authors examined how the huge masses of hot matter that surround supermassive black holes would appear if these black holes were actually gravastars. They also scrutinized the properties of "hot spots": gigantic gas bubbles orbiting black holes at near-light speeds.

Their findings revealed striking similarities between the matter emissions of gravastars and black holes, suggesting that gravastars don't contradict scientists' experimental observations of the universe. Moreover, the team discovered that a gravastar itself should appear almost like a singular black hole, creating a visible shadow.

"This shadow is not caused by the trapping of light in the event horizon, but by a slightly different phenomenon called the 'gravitational redshift,' causing light to lose energy when it moves through a region with a strong gravitational field," Rosa said. "Indeed, when the light emitted from regions close to these alternative objects reach[es] our telescopes, most of its energy would have been lost to the gravitational field, causing the appearance of this shadow."

The striking resemblances between Schwarzschild's black hole model and gravastars highlight the latter's potential as a realistic alternative, free from the theoretical pitfalls of singularities.

However, this theory needs to be backed up with experiments and observations, which the study authors believe may soon be carried out. While gravastars and singular black holes might behave similarly in many respects, subtle differences in emitted light could potentially distinguish them.

"To test our results experimentally, we are counting on the next generation of observational experiments in gravitational physics," Rosa said, referring to the black hole-hunting Event Horizon Telescope and the GRAVITY+ instrument being added to the Very Large Telescope in Chile. "These two experiments aim to observe closely what happens near the center of galaxies, in particular, our own Milky Way."

Excerpt from:

Black hole singularities defy physics. New research could finally do away with them. - Livescience.com

This Week’s Awesome Tech Stories From Around the Web (Through May 18) – Singularity Hub

It's Time to Believe the AI Hype (Steven Levy | Wired): "There's universal agreement in the tech world that AI is the biggest thing since the internet, and maybe bigger. Skeptics might try to claim that this is an industry-wide delusion, fueled by the prospect of massive profits. But the demos aren't lying. We will eventually become acclimated to the AI marvels unveiled this week. The smartphone once seemed exotic; now it's an appendage no less critical to our daily life than an arm or a leg. At a certain point AI's feats, too, may not seem magical any more."


How to Put a Datacenter in a Shoebox (Anna Herr and Quentin Herr | IEEE Spectrum): "At Imec, we have spent the past two years developing superconducting processing units that can be manufactured using standard CMOS tools. A processor based on this work would be one hundred times as energy efficient as the most efficient chips today, and it would lead to a computer that fits a data center's worth of computing resources into a system the size of a shoebox."

IndieBio's SF Incubator Lineup Is Making Some Wild Biotech Promises (Devin Coldewey | TechCrunch): "We took special note of a few, which were making some major, bordering on ludicrous, claims that could pay off in a big way. Biotech has been creeping out in recent years to touch adjacent industries, as companies find how much they rely on outdated processes or even organisms to get things done. So it may not surprise you that there's a microbiome company in the latest batch, but you might be surprised when you hear it's the microbiome of copper ore."

It's the End of Google Search as We Know It (Lauren Goode | Wired): "It's as though Google took the index cards for the screenplay it's been writing for the past 25 years and tossed them into the air to see where the cards might fall. Also: The screenplay was written by AI. These changes to Google Search have been long in the making. Last year the company carved out a section of its Search Labs, which lets users try experimental new features, for something called Search Generative Experience. The big question since has been whether, or when, those features would become a permanent part of Google Search. The answer is, well, now."

Waymo Says Its Robotaxis Are Now Making 50,000 Paid Trips Every Week (Mariella Moon | Engadget): "If you've been seeing more Waymo robotaxis recently in Phoenix, San Francisco, and Los Angeles, that's because more and more people are hailing one for a ride. The Alphabet-owned company has announced on Twitter/X that it's now serving more than 50,000 paid trips every week across three cities. Waymo One operates 24/7 in parts of those cities. If the company is getting 50,000 rides a week, that means it receives an average of 300 bookings every hour or five bookings every minute."
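
As a quick sanity check on that back-of-the-envelope figure (an illustrative calculation, assuming the 50,000 weekly rides are spread evenly over round-the-clock operation), the arithmetic holds up:

    # Back-of-the-envelope check of the figures quoted above, assuming the
    # 50,000 weekly rides are spread evenly over 24/7 operation.
    rides_per_week = 50_000
    hours_per_week = 7 * 24                      # 168
    rides_per_hour = rides_per_week / hours_per_week
    rides_per_minute = rides_per_hour / 60
    print(f"{rides_per_hour:.0f} rides/hour, {rides_per_minute:.1f} rides/minute")
    # prints: 298 rides/hour, 5.0 rides/minute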

Technology Is Probably Changing Us for the Worse, or So We Always Think (Timothy Maher | MIT Technology Review): "'We've always greeted new technologies with a mixture of fascination and fear,' says Margaret O'Mara, a historian at the University of Washington who focuses on the intersection of technology and American politics. 'People think: Wow, this is going to change everything affirmatively, positively,' she says. 'And at the same time: It's scary; this is going to corrupt us or change us in some negative way.' And then something interesting happens: 'We get used to it,' she says. 'The novelty wears off and the new thing becomes a habit.'"

This Is the Next Smartphone Evolution (Matteo Wong | The Atlantic): "Earlier [this week], OpenAI announced its newest product: GPT-4o, a faster, cheaper, more powerful version of its most advanced large language model, and one that the company has deliberately positioned as the next step in natural human-computer interaction. Watching the presentation, I felt that I was witnessing the murder of Siri, along with that entire generation of smartphone voice assistants, at the hands of a company most people had not heard of just two years ago."

In the Race for Space Metals, Companies Hope to Cash In (Sarah Scoles | Undark): "Previous companies have rocketed toward similar goals before but went bust about a half decade ago. In the years since that first cohort left the stage, though, the field has exploded in interest, said Angel Abbud-Madrid, director of the Center for Space Resources at the Colorado School of Mines. The economic picture has improved with the cost of rocket launches decreasing, as has the regulatory environment, with countries creating laws specifically allowing space mining. But only time will tell if this decade's prospectors will cash in where others have drilled into the red or be buried by their business plans."

What I Got Wrong in a Decade of Predicting the Future of Tech (Christopher Mims | The Wall Street Journal): "Anniversaries are typically a time for people to get misty-eyed and recount their successes. But after almost 500 articles in The Wall Street Journal, one thing I've learned from covering the tech industry is that failures are far more instructive. Especially when they're the kind of errors made by many people. Here's what I've learned from a decade of embarrassing myself in public, and having the privilege of getting an earful about it from readers."

Lab-Grown Meat Is on Shelves Now. But There's a Catch (Matt Reynolds | Wired): "Now cultivated meat is available in one store in Singapore. There is a catch, however: The chicken on sale at Huber's Butchery contains just 3 percent animal cells. The rest will be made of plant protein, the same kind of ingredients you'd find in plant-based meats that are already on supermarket shelves worldwide. This might feel like a bit of a bait and switch. Didn't cultivated meat firms promise us real chicken? And now we're getting plant-based products with a sprinkling of animal cells? That criticism wouldn't be entirely fair, though."

Image Credit: Pawel Czerwinski / Unsplash

Originally posted here:

This Week's Awesome Tech Stories From Around the Web (Through May 18) - Singularity Hub

The Crucial Building Blocks of Life on Earth Form More Easily in Outer Space – Singularity Hub

The origin of life on Earth is still enigmatic, but we are slowly unraveling the steps involved and the necessary ingredients. Scientists believe life arose in a primordial soup of organic chemicals and biomolecules on the early Earth, eventually leading to actual organisms.

It's long been suspected that some of these ingredients may have been delivered from space. Now a new study, published in Science Advances, shows that a special group of molecules, known as peptides, can form more easily under the conditions of space than those found on Earth. That means they could have been delivered to the early Earth by meteorites or comets, and that life may be able to form elsewhere, too.

The functions of life are upheld in our cells (and those of all living beings) by large, complex carbon-based (organic) molecules called proteins. How to make the large variety of proteins we need to stay alive is encoded in our DNA, which is itself a large and complex organic molecule.

However, these complex molecules are assembled from a variety of small and simple molecules such as amino acids, the so-called building blocks of life.

To explain the origin of life, we need to understand how and where these building blocks form and under what conditions they spontaneously assemble themselves into more complex structures. Finally, we need to understand the step that enables them to become a confined, self-replicating system: a living organism.

This latest study sheds light on how some of these building blocks might have formed and assembled and how they ended up on Earth.

Proteins are built from about 20 different amino acids. Like letters of the alphabet, these are arranged in different combinations, specified by the genetic code stored in DNA's double helix, to produce the huge variety of proteins our cells need.

Peptides are also assemblages of amino acids in a chain-like structure. Peptides can be made up of as few as two amino acids, but they can also run to hundreds of amino acids.

The assemblage of amino acids into peptides is an important step because peptides provide functions such as catalyzing, or enhancing, reactions that are important to maintaining life. They are also candidate molecules that could have been further assembled into early versions of membranes, confining functional molecules in cell-like structures.

However, despite their potentially important role in the origin of life, it was not so straightforward for peptides to form spontaneously under the environmental conditions on the early Earth. In fact, the scientists behind the current study had previously shown that the cold conditions of space are actually more favorable to the formation of peptides.

In the very low-density clouds of molecules and dust particles in a part of space called the interstellar medium, single atoms of carbon can stick to the surfaces of dust grains together with carbon monoxide and ammonia molecules. They then react to form amino acid-like molecules. When such a cloud becomes denser and dust particles also start to stick together, these molecules can assemble into peptides.

In their new study, the scientists look at the dense environment of dusty disks, from which a new solar system with a star and planets eventually emerges. Such disks form when clouds suddenly collapse under the force of gravity. In this environment, water molecules are much more prevalent, forming ice on the surfaces of any growing agglomerates of particles, which could inhibit the reactions that form peptides.

By emulating in the laboratory the reactions likely to occur in the interstellar medium, the study shows that, although the formation of peptides is slightly diminished, it is not prevented. Instead, as rocks and dust combine to form larger bodies such as asteroids and comets, these bodies heat up and allow liquids to form. This boosts peptide formation in these liquids, and there's a natural selection of further reactions resulting in even more complex organic molecules. These processes would have occurred during the formation of our own solar system.

Many of the building blocks of life such as amino acids, lipids, and sugars can form in the space environment. Many have been detected in meteorites.

Because peptide formation is more efficient in space than on Earth, and because peptides can accumulate in comets, cometary impacts on the early Earth might have delivered loads of them that boosted the steps toward the origin of life on Earth.

So, what does all this mean for our chances of finding alien life? Well, the building blocks for life are available throughout the universe. How specific the conditions need to be to enable them to self-assemble into living organisms is still an open question. Once we know that, we'll have a good idea of how widespread, or not, life might be.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: Aldebaran S / Unsplash

More here:

The Crucial Building Blocks of Life on Earth Form More Easily in Outer Space - Singularity Hub

This Week’s Awesome Tech Stories From Around the Web (Through April 20) – Singularity Hub

15 Graphs That Explain the State of AI in 2024 (Eliza Strickland | IEEE Spectrum): "Each year, the AI Index lands on virtual desks with a louder virtual thud. This year, its 393 pages are a testament to the fact that AI is coming off a really big year in 2023. For the past three years, IEEE Spectrum has read the whole damn thing and pulled out a selection of charts that sum up the current state of AI."

The Next Frontier for Brain Implants Is Artificial Vision (Emily Mullin | Wired): "Elon Musk's Neuralink and others are developing devices that could provide blind people with a crude sense of sight. 'This is not about getting biological vision back,' says Philip Troyk, a professor of biomedical engineering at Illinois Tech, who's leading the study Bussard is in. 'This is about exploring what artificial vision could be.'"

Microsoft's VASA-1 Can Deepfake a Person With One Photo and One Audio Track (Benj Edwards | Ars Technica): "On Tuesday, Microsoft Research Asia unveiled VASA-1, an AI model that can create a synchronized animated video of a person talking or singing from a single photo and an existing audio track. In the future, it could power virtual avatars that render locally and don't require video feeds, or allow anyone with similar tools to take a photo of a person found online and make them appear to say whatever they want."

Meta Is Already Training a More Powerful Successor to Llama 3 (Will Knight | Wired): "On Thursday morning, Meta released its latest artificial intelligence model, Llama 3, touting it as the most powerful to be made open source so that anyone can use it. The same afternoon, Yann LeCun, Meta's chief AI scientist, said an even more powerful successor to Llama is in the works. He suggested it could potentially outshine the world's best closed AI models, including OpenAI's GPT-4 and Google's Gemini."

Intel Reveals World's Biggest Brain-Inspired Neuromorphic Computer (Matthew Sparkes | New Scientist): "Hala Point contains 1.15 billion artificial neurons across 1,152 Loihi 2 chips, and is capable of 380 trillion synaptic operations per second. Mike Davies at Intel says that despite this power it occupies just six racks in a standard server case, a space similar to that of a microwave oven. Larger machines will be possible, says Davies. 'We built this scale of system because, honestly, a billion neurons was a nice round number,' he says. 'I mean, there wasn't any particular technical engineering challenge that made us stop at this level.'"

US Air Force Confirms First Successful AI Dogfight (Emma Roth | The Verge): "Human pilots were on board the X-62A with controls to disable the AI system, but DARPA says the pilots didn't need to use the safety switch at any point. The X-62A went against an F-16 controlled solely by a human pilot, where both aircraft demonstrated high-aspect nose-to-nose engagements and got as close as 2,000 feet at 1,200 miles per hour. DARPA doesn't say which aircraft won the dogfight, however."

What If Your AI Girlfriend Hated You? (Kate Knibbs | Wired): "It seems as though we've arrived at the moment in the AI hype cycle where no idea is too bonkers to launch. This week's eyebrow-raising AI project is a new twist on the romantic chatbot: a mobile app called AngryGF, which offers its users the uniquely unpleasant experience of getting yelled at via messages from a fake person."

Insects and Other Animals Have Consciousness, Experts Declare (Dan Falk | Quanta): "For decades, there's been a broad agreement among scientists that animals similar to us, the great apes, for example, have conscious experience, even if their consciousness differs from our own. In recent years, however, researchers have begun to acknowledge that consciousness may also be widespread among animals that are very different from us, including invertebrates with completely different and far simpler nervous systems."

Two Lifeforms Merge in Once-in-a-Billion-Years Evolutionary Event (Michael Irving | New Atlas): "Scientists have caught a once-in-a-billion-years evolutionary event in progress, as two lifeforms have merged into one organism that boasts abilities its peers would envy. Last time this happened, Earth got plants. A species of algae called Braarudosphaera bigelowii was found to have engulfed a cyanobacterium that lets them do something that algae, and plants in general, can't normally do: fixing nitrogen straight from the air, and combining it with other elements to create more useful compounds."

Image Credit: Shubham Dhage / Unsplash

See the rest here:

This Week's Awesome Tech Stories From Around the Web (Through April 20) - Singularity Hub

Singularity Energy Unveils CarbonFlow – A System Tracing Emissions From Source To Consumption – Carbon Herald

Singularity Energy has developed a product that addresses one of the critical but underreported issues of emissions tracking. Aptly named CarbonFlow, it provides granular details about energy flows on the grid and the emissions generated by its consumption.

Taking an entirely new approach to carbon accounting, CarbonFlow is able to trace emissions at the individual line and load level. Its database can track emissions data from the initial production stages all the way to where the energy is used.

Speaking to Carbon Herald in 2023, Singularity's CEO Wenbo Shi stressed how important determining the source of energy is. "You don't actually burn coal or natural gas to generate electricity in your home. There are power plants that supply the electricity that everybody is using through the power grid. And that is a very complex machine. How do you really know where your power comes from? Does that come from clean energy sources? Or does that come from fossil fuel sources? When people start asking these questions, then you have to know that. And the answer is not very simple."

One of the key features of CarbonFlow is its ability to trace emissions back to their original source. This allows businesses and consumers to identify the precise sources of emissions and take targeted actions to reduce them. By gaining insights into the carbon intensity of different products and processes, organizations can make more informed decisions about resource allocation and supply chain management.

Relevant: Tracking Emissions On An Hourly Basis Is Going To Be Critical For Informing Decarbonization Policy (Greg Miller, PhD, Research and Policy Lead at Singularity Energy)

These capabilities allow CarbonFlow to address one of the main challenges when it comes to calculating consumed emissions. Existing approaches focus on the import and export data between grid operators, combined with information about the fuels being used for power generation.

Though highly informative on a large scale, this approach leaves a gap when it comes to determining how emission rates vary on a region-by-region basis. CarbonFlow manages to zoom in on that regional level because it can account for power flows across individual transmission corridors.

With this, CarbonFlow's applications can cover a variety of needs, such as Scope 2 emissions accounting, quantifying the deliverability of renewable energy, and providing insight for policymakers.
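
To make the contrast concrete, here is a minimal, hypothetical sketch in Python. It is not CarbonFlow's actual algorithm or API, just an illustration of how a flow-weighted, consumption-based emission rate for a single load can differ from a simple regional average; all region names and numbers below are invented for the example.

    # Hypothetical illustration only: not CarbonFlow's actual algorithm or API.
    # Compares a flow-weighted, consumption-based emission rate for one load
    # against a naive regional average. All names and numbers are invented.

    # Emission rate of each supplying region (kg CO2 per MWh).
    supply_rate = {"hydro_region": 20.0, "coal_region": 950.0}

    # Power actually delivered to the load over each transmission corridor (MW).
    corridor_flow = {"hydro_region": 80.0, "coal_region": 20.0}

    total_load = sum(corridor_flow.values())

    # Flow-traced rate: weight each supplier by the power it delivers to this load.
    flow_traced = sum(supply_rate[r] * mw for r, mw in corridor_flow.items()) / total_load

    # Naive average over the two regions, ignoring who actually serves the load.
    regional_avg = sum(supply_rate.values()) / len(supply_rate)

    print(f"flow-traced rate: {flow_traced:.0f} kg CO2/MWh")   # ~206
    print(f"regional average: {regional_avg:.0f} kg CO2/MWh")  # ~485

The point of tracing at the line and load level is that these two numbers can differ substantially, which is exactly the gap the article describes.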

While willingness among consumers and businesses to reduce their emissions is increasing, some remain on the sidelines because they don't feel they are in a position to make an informed decision. The real-time visibility and accountability that CarbonFlow provides can be one of the most powerful tools to address this challenge.

Read more: Power Sector Decarbonization Through Innovative Data Intelligence: Wenbo Shi, CEO And Founder At Singularity Energy

Read more here:

Singularity Energy Unveils CarbonFlow - A System Tracing Emissions From Source To Consumption - Carbon Herald

These Plants Could Mine Valuable Metals From the Soil With Their Roots – Singularity Hub

The renewable energy transition will require a huge amount of materials, and there are fears we may soon face shortages of some critical metals. US government researchers think we could rope in plants to mine for these metals with their roots.

Green technologies like solar power and electric vehicles are being adopted at an unprecedented rate, but this is also straining the supply chains that support them. One area of particular concern is the metals required to build batteries, wind turbines, and other advanced electronics that are powering the energy transition.

We may not be able to sustain projected growth at current rates of production of many of these minerals, such as lithium, cobalt, and nickel. Some of these metals are also sourced from countries whose mining operations raise serious human rights or geopolitical concerns.

To diversify supplies, the government research agency ARPA-E is offering $10 million in funding to explore phytomining, in which certain species of plants are used to extract valuable metals from the soil through their roots. The project is focusing on nickel first, a critical battery metal, but in theory, it could be expanded to other minerals.

"In order to accomplish the goals laid out by President Biden to meet our clean energy targets, and support our economy and national security, it's going to take [an] all-hands-on-deck approach and innovative solutions," ARPA-E director Evelyn Wang said in a press release.

By exploring phytomining to extract nickel as the first target critical material, ARPA-E aims to achieve a cost-competitive and low-carbon footprint extraction approach needed to support the energy transition.

The concept of phytomining has been around for a while and relies on a class of plants known as hyperaccumulators. These species can absorb a large amount of metal through their roots and store it in their tissues. Phytomining involves growing these plants in soils with high levels of metals, harvesting and burning the plants, and then extracting the metals from the ash.

The ARPA-E project, known as Plant HYperaccumulators TO MIne Nickel-Enriched Soils (PHYTOMINES), is focusing on nickel because there are already many hyperaccumulators known to absorb the metal. But finding, or creating, species able to economically mine the metal in North America will still be a significant challenge.

One of the primary goals of the project is to optimize the amount of nickel these plants can take in. This could involve breeding or genetically modifying plants to enhance these traits or altering the microbiome of either the plants or the surrounding soil to boost absorption.

The agency also wants to gain a better understanding of the environmental and economic factors that could determine the viability of the approach, such as the impact of soil mineral composition, the land ownership status of promising sites, and the lifetime costs of a phytomining operation.

But while the idea is still at a nebulous stage, there is considerable potential.

"In soil that contains roughly 5 percent nickel (that is pretty contaminated), you're going to get an ash that's about 25 to 50 percent nickel after you burn it down," Dave McNear, a biogeochemist at the University of Kentucky, told Wired.

"In comparison, where you mine it from the ground, from rock, that has about 0.02 percent nickel. So you are several orders of magnitude greater in enrichment, and it has far less impurities."
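
Taking the percentages quoted above at face value, a rough, illustrative calculation (not from the article) shows what "several orders of magnitude" means here:

    # Rough enrichment comparison using the percentages quoted above.
    ash_nickel_fractions = (0.25, 0.50)   # 25-50% nickel in ash from hyperaccumulators
    ore_nickel_fraction = 0.0002          # ~0.02% nickel in mined rock, per the quote

    for ash in ash_nickel_fractions:
        factor = ash / ore_nickel_fraction
        print(f"{ash:.0%} ash vs {ore_nickel_fraction:.2%} ore: ~{factor:,.0f}x enrichment")
    # prints roughly 1,250x and 2,500x, i.e. about three orders of magnitude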

Phytomining would also be much less environmentally damaging than traditional mining, and it could help remediate soil polluted with metals so they can be farmed more conventionally. While the focus is currently on nickel, the approach could be extended to other valuable metals too.

The main challenge will be finding a plant that is suited to American climates and grows quickly. "The problem has historically been that they're not often very productive plants," Patrick Brown, a plant scientist at the University of California, Davis, told Wired. "And the challenge is you have to have high concentrations of nickel and high biomass to achieve a meaningful, economically viable outcome."

Still, if researchers can square that circle, the approach could be a promising way to boost supplies of the critical minerals needed to support the transition to a greener economy.

Image Credit: Nickel hyperaccumulator Alyssum argenteum / David Stang via Wikimedia Commons

Originally posted here:

These Plants Could Mine Valuable Metals From the Soil With Their Roots - Singularity Hub

This Week’s Awesome Tech Stories From Around the Web (Through March 30) – Singularity Hub

The Best Qubits for Quantum Computing Might Just Be Atoms (Philip Ball | Quanta): "In the search for the most scalable hardware to use for quantum computers, qubits made of individual atoms are having a breakout moment. 'We believe we can pack tens or even hundreds of thousands in a centimeter-scale device,' [Mark Saffman, a physicist at the University of Wisconsin] said."

AI Chatbots Are Improving at an Even Faster Rate Than Computer Chips (Chris Stokel-Walker | New Scientist): "Besiroglu and his colleagues analyzed the performance of 231 LLMs developed between 2012 and 2023 and found that, on average, the computing power required for subsequent versions of an LLM to hit a given benchmark halved every eight months. That is far faster than Moore's law, a computing rule of thumb coined in 1965 that suggests the number of transistors on a chip, a measure of performance, doubles every 18 to 24 months."
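
As a rough, illustrative comparison of those two exponential rates (not a calculation from the study): over a four-year window, compute requirements halving every eight months compounds to a far larger factor than transistor counts doubling every 18 to 24 months.

    # Compare the two growth rates quoted above over a fixed 48-month window.
    months = 48

    # LLM efficiency: compute needed for a given benchmark halves every 8 months,
    # i.e. capability per unit of compute doubles every 8 months.
    llm_gain = 2 ** (months / 8)

    # Moore's law: transistor counts double every 18 to 24 months.
    moore_fast = 2 ** (months / 18)
    moore_slow = 2 ** (months / 24)

    print(f"LLM efficiency gain over {months} months: ~{llm_gain:.0f}x")
    print(f"Moore's-law gain over the same window: ~{moore_slow:.0f}x to ~{moore_fast:.1f}x")
    # prints: ~64x versus ~4x to ~6.3x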

How AI Could Explode the Economy (Dylan Matthews | Vox): "Imagine everything humans have achieved since the days when we lived in caves: wheels, writing, bronze and iron smelting, pyramids and the Great Wall, ocean-traversing ships, mechanical reaping, railroads, telegraphy, electricity, photography, film, recorded music, laundry machines, television, the internet, cellphones. Now imagine accomplishing 10 times all that, in just a quarter century. This is a very, very, very strange world we're contemplating. It's strange enough that it's fair to wonder whether it's even possible."

What's Next for Generative Video (Will Douglas Heaven | MIT Technology Review): "The first batch of models that could turn text into video appeared in late 2022, from companies including Meta, Google, and video-tech startup Runway. It was a neat trick, but the results were grainy, glitchy, and just a few seconds long. Fast-forward 18 months, and the best of Sora's high-definition, photorealistic output is so stunning that some breathless observers are predicting the death of Hollywood. As we continue to get to grips with what's ahead, good and bad, here are four things to think about."

Salt-Sized Sensors Mimic the Brain (Gwendolyn Rak | IEEE Spectrum): "To gain a better understanding of the brain, why not draw inspiration from it? At least, that's what researchers at Brown University did, by building a wireless communications system that mimics the brain using an array of tiny silicon sensors, each the size of a grain of sand. The researchers hope that the technology could one day be used in implantable brain-machine interfaces to read brain activity."

Understanding Humanoid Robots (Brian Heater | TechCrunch): "A lot of smart people have faith in the form factor and plenty of others remain skeptical. One thing I'm confident saying, however, is that whether or not future factories will be populated with humanoid robots on a meaningful scale, all of this work will amount to something. Even the most skeptical roboticists I've spoken to on the subject have pointed to the NASA model, where the race to land humans on the moon led to the invention of products we use on Earth to this day."

Blazing Bits Transmitted 4.5 Million Times Faster Than Broadband (Michael Franco | New Atlas): "An international research team has sent an astounding amount of data at a nearly incomprehensible speed. It's the fastest data transmission ever using a single optical fiber and shows just how speedy the process can get using current materials."

How We'll Reach a 1 Trillion Transistor GPU (Mark Liu and H.S. Philip Wong | IEEE Spectrum): "We forecast that within a decade a multichiplet GPU will have more than 1 trillion transistors. We'll need to link all these chiplets together in a 3D stack, but fortunately, industry has been able to rapidly scale down the pitch of vertical interconnects, increasing the density of connections. And there is plenty of room for more. We see no reason why the interconnect density can't grow by an order of magnitude, and even beyond."

Astronomers Watch in Real Time as Epic Supernova Potentially Births a Black Hole (Isaac Schultz | Gizmodo): "Calculations of the circumstellar material emitted in the explosion, as well as this material's density and mass before and after the supernova, create a discrepancy, which makes it 'very likely that the missing mass ended up in a black hole that was formed in the aftermath of the explosion, something that's usually very hard to determine,' said study co-author Ido Irani, a researcher at the Weizmann Institute."

Large Language Models' Emergent Abilities Are a Mirage (Stephen Ornes | Wired): "[In some tasks measured by the BIG-bench project, LLM] performance remained near zero for a while, then performance jumped. Other studies found similar leaps in ability. The authors described this as 'breakthrough' behavior; other researchers have likened it to a phase transition in physics, like when liquid water freezes into ice. [But] a new paper by a trio of researchers at Stanford University posits that the sudden appearance of these abilities is just a consequence of the way researchers measure the LLM's performance. The abilities, they argue, are neither unpredictable nor sudden."

Image Credit: Aedrian / Unsplash

Continued here:

This Week's Awesome Tech Stories From Around the Web (Through March 30) - Singularity Hub

The Singularity When We Merge With AI Won’t Happen – Walter Bradley Center for Natural and Artificial Intelligence

Erik J. Larson, who writes about AI here at Mind Matters News, spoke with EP podcast host Jesse Wright earlier this week about the famed/claimed Singularity, among other things. That's when human and machine supposedly merge into a Super Humachine (?).

Inventor and futurist Ray Kurzweil has been prophesying that for years. But philosopher and computer scientist Larson, author of The Myth of Artificial Intelligence (Harvard 2021), says not so fast.

The podcast below is nearly an hour long, but it is handily divided into segments, a virtual table of contents. We've set it at "The Fallacy of the Singularity," with selections from the transcript below. But you can click and enjoy the other parts at your convenience.

00:00 Intro
01:10 Misconceptions about AI Progress
11:48 Bias and Misinformation in AI Models
21:52 The Plateau of Progress & End of Moore's Law
31:30 The Fallacy of the Singularity
47:27 Preparing for the Future Job Market

Note: Larson blogs at Colligo, if you wish to follow his work.

And now

Decades ago, Larson says, programmers were focused on getting computers to win at complex board games like chess. One outcome was that their model of the human mind was the computer. And that, he says, became a narrative in our culture.

Larson: [33:19] You know, people are kind of just bad versions of computers. If you look at all the literature coming out of psychology and cognitive science and these kinds of fields, they're always pointing out how we're full of bias, jumping to the wrong conclusions. We can't be trusted. Our brains are very, very "Yesterday's Tech," so to speak.

Choking off innovation?

Larson sees this easy equation of the mind and the computer as choking off innovation, at which humans excel. It encourages people to believe that computers will solve our problems when there are major gaps in their ability to do so. One outcome is that, contrary to cliché, this is one of the least innovative periods in a while.

Larson: [34:25] The last decade is one of the least innovative times that we've had in a long time, and it's sort of dangerous that everybody thinks the opposite. If people said, wait a minute, we're just doing tweaks to neural networks; we're just doing extensions to existing technology... Yes, we're making progress, but we're doing it at the expense of massive amounts of funding, massive amounts of energy consumption, right?

Instead he sees conformity everywhere, accompanied by a tendency to assume that incremental improvements amount to progress in fundamental understanding.

So how does our self-contented mediocrity produce an imminent, unhinged Singularity?

Well, a pinch of magic helps!

Larson: [37:49] What's underlying that is this idea that once you get smart enough, you also become alive. And that's just not true. A calculator is extremely good at arithmetic. No one on the face of the planet can beat a calculator, but that doesn't mean that your calculator has feelings about how it's treated. In a sense, there's just a huge glaring error, a philosophical error, that's being made by the Superintelligence folks, the existential risk folks. That's wasted energy in my view. That's not what's going to happen.

If a more powerful computer is not like a human mind, what's really going to happen?

Larson: [38:40] Very bad actors are going to use very powerful machines to screw everything up... Somebody gets control of these systems and directs them towards ruining Wall Street, ruining the markets, bringing down the power grid. That's a big threat. The machines themselves... I would bet the farm that they're not going to make the leap from being faster and calculating more complicated problems to being alive in any sort of sense, or having any kind of motivations, or something that could misalign like that. That's the sci-fi vibe that's getting pushed into a scientific discussion.

The Singularity depends on a machine model of the mind

Larson: [46:17] If we're just a complicated machine, then it stands to reason that at some point we'll have a more complicated machine. It's just a continuum and we're on that. But if you actually remove that premise and say, look, we're not machines, we're not computers, then you have an ability to talk about human culture in a way that can actually be healthy. We think differently, we reason differently, we have superior aspects to our behavior and performance, and we actually do care and have motivations about how things turn out, unlike the tools we use.

So it looks as though the transhuman could go extinct without ever existing.

You may also wish to read: Tech pioneer Ray Kurzweil: We will merge with computers by 2045. For computers, "Even the very best human is just another notch to pass," he told the COSM Technology Summit. Kurzweil explained, "To do that, we need to go inside your brain. When we get to the 2030s, we will be able to do that. So a lot of our thinking will be inside the cloud. In another ten years, our non-biological thinking will be much better than our biological thinking." In 2017, he predicted 2045 for a total merger between man and machine.

View post:

The Singularity When We Merge With AI Won't Happen - Walter Bradley Center for Natural and Artificial Intelligence

AI singularity may come in 2027 with artificial ‘super intelligence’ sooner than we think, says top scientist – Livescience.com

Humanity could create an artificial intelligence (AI) agent that is just as smart as humans in as soon as the next three years, a leading scientist has claimed.

Ben Goertzel, a computer scientist and CEO of SingularityNET, made the claim during the closing remarks at the Beneficial AGI Summit 2024 on March 1 in Panama City, Panama. He is known as the "father of AGI" after helping to popularize the term artificial general intelligence (AGI) in the early 2000s.

The best AI systems in deployment today are considered "narrow AI" because they may be more capable than humans in one area, based on training data, but can't outperform humans more generally. These narrow AI systems, which range from machine learning algorithms to large language models (LLMs) like ChatGPT, struggle to reason like humans and understand context.

However, Goertzel noted AI research is entering a period of exponential growth, and the evidence suggests that artificial general intelligence (AGI), in which AI becomes just as capable as humans across several areas independent of the original training data, is within reach. This hypothetical point in AI development is known as the "singularity."

Goertzel suggested 2029 or 2030 could be the likeliest years when humanity will build the first AGI agent, but that it could happen as early as 2027.

Related: Artificial general intelligence (when AI becomes more capable than humans) is just moments away, Meta's Mark Zuckerberg declares

If such an agent is designed to have access to and rewrite its own code, it could then very quickly evolve into an artificial super intelligence (ASI), which Goertzel loosely defined as an AI that has the cognitive and computing power of all of human civilization combined.

"No one has created human-level artificial general intelligence yet; nobody has a solid knowledge of when we're going to get there. I mean, there are known unknowns and probably unknown unknowns. On the other hand, to me it seems quite plausible we could get to human-level AGI within, let's say, the next three to eight years," Goertzel said.

He pointed to "three lines of converging evidence" to support his thesis. The first is modeling by computer scientist Ray Kurzweil in the book "The Singularity is Near" (Viking USA, 2005), which has been refined in his forthcoming book "The Singularity is Nearer" (Bodley Head, June 2024). In his book, Kurzweil built predictive models that suggest AGI will be achievable in 2029, largely centering on the exponential nature of technological growth in other fields.

Goertzel also pointed to improvements made to LLMs within a few years, which have "woken up so much of the world to the potential of AI." He clarified LLMs in themselves will not lead to AGI because the way they show knowledge doesn't represent genuine understanding, but that LLMs may be one component in a broad set of interconnected architectures.

The third piece of evidence, Goertzel said, lay in his work building such an infrastructure, which he has called "OpenCog Hyperon," as well as associated software systems and a forthcoming AGI programming language, dubbed "MeTTa," to support it.

OpenCog Hyperon is a form of AI infrastructure that involves stitching together existing and new AI paradigms, including LLMs as one component. The hypothetical endpoint is a large-scale distributed network of AI systems based on different architectures that each help to represent different elements of human cognition from content generation to reasoning.

Such an approach is a model other AI researchers have backed, including Databricks CTO Matei Zaharia in a blog post he co-authored on Feb. 18 on the Berkeley Artificial Intelligence Research (BAIR) website.

Goertzel admitted, however, that he "could be wrong" and that we may need a "quantum computer with a million qubits or something."

"My own view is once you get to human-level AGI, within a few years you could get a radically superhuman AGI unless the AGI threatens to throttle its own development out of its own conservatism," Goertzel added. "I think once an AGI can introspect its own mind, then it can do engineering and science at a human or superhuman level. It should be able to make a smarter AGI, then an even smarter AGI, then an intelligence explosion. That may lead to an increase in the exponential rate beyond even what Ray [Kurzweil] thought."

Read more:

AI singularity may come in 2027 with artificial 'super intelligence' sooner than we think, says top scientist - Livescience.com

What Is the AI Singularity, and Is It Real? – How-To Geek


As AI continues to advance, the topic of the singularity becomes ever more prominent. But what exactly is the singularity, when is it expected to arrive, and what risks does it pose to humanity?

Sci-fi films have toyed with the idea of the singularity and super-intelligent AI for decades, as it's a pretty alluring topic. But it's important to know before we delve into the details of the singularity that this is an entirely theoretical concept at the moment. Yes, AI is always being improved upon, but the singularity is a far-off caliber of AI that may never be reached.

This is because the AI singularity refers to the point at which AI intelligence surpasses human intelligence. According to an Oxford Academic article, this would mean that computers are "intelligent enough to copy themselves to outnumber us and improve themselves to out-think us."

According to Vernor Vinge, the creation of "superhuman intelligence" and "human equivalence in a machine" are what will likely lead to the singularity becoming a reality. But the term "AI singularity" also covers another possibility, and that's the point at which computers can get smarter and develop without the need for human input. In short, AI technology will be out of our control.

While the AI singularity has been posed as something that will bring machines with superhuman intelligence, there are other possibilities, too. A level of exceptional intelligence would still need to be reached by machines, but this intelligence may not necessarily be a simulation of human thinking. In fact, the singularity could be caused by a super-intelligent machine, or group of machines, that think and function in a way that we've never seen before. Until the singularity occurs, there's no knowing what exact form such intelligent systems will take.

With network technology being invaluable to how the modern world works, the achievement of the singularity may be followed by super-intelligent computers communicating with each other without human facilitation. The term "technological singularity" has many overlaps with the more niche "AI singularity", as both involve super-intelligent AI and the uncontrollable growth of intelligent machines. The technological singularity is more of an umbrella term for the eventual uncontrollable growth of computers, and also tends to require the involvement of highly intelligent AI.

A key part of what the AI singularity will bring is an uncontrollable and exponential uptick in technological growth. Once technology is intelligent enough to learn and develop on its own and reaches the singularity, progress and expansion will be made rapidly, and this steep growth won't be controllable by humans.

In a Tech Target article, this other element of the singularity is described as the point at which "technology growth is out of control and irreversible." So, there are two factors at play here: super-intelligent technology, and the uncontrolled growth of it.

Developing a computer system capable of meeting and exceeding the human mind's abilities will require several major scientific and engineering leaps before it becomes a reality. Tools like the ChatGPT chatbot and DALL-E image generator are impressive, but I don't think they're anywhere near intelligent enough to earn singularity status. Things like sentience, understanding nuance and context, knowing if what's being said is true, and interpreting emotions are all beyond current AI systems' capabilities. Because of this, these AI tools aren't considered to be intelligent, be it in a human- or non-human-simulated fashion.

While some professionals think that even current AI models, such as Google's LaMDA, could be sentient, there are a lot of mixed opinions on this topic. A LaMDA engineer was even placed on administrative leave for claiming that LaMDA could be sentient. The engineer in question, Blake Lemoine, stated in an X post that his opinions on sentience were based on his religious beliefs.

LaMDA is yet to be officially described as sentient, and the same goes for any other AI system.

No one can see the future, so there are many differing predictions regarding the singularity. In fact, some believe that the singularity will never be reached. Let's get into these varying viewpoints.

A popular singularity prediction is that of Ray Kurzweil, the Director of Engineering at Google. In Kurzweil's 2005 book, 'The Singularity Is Near: When Humans Transcend Biology', he predicts that machines that surpass human intelligence will be created by 2029. Moreover, Kurzweil believes that humans and computers will merge by 2045, which is what Kurzweil believes to be the singularity.

Another similar prediction was posed by Ben Goertzel, CEO of SingularityNET. Goertzel predicted in a 2023 Decrypt interview that he expects the singularity to be achieved in less than a decade. Futurist and SoftBank CEO Masayoshi Son believes we'll reach the singularity later on, but possibly as soon as 2047.

But others aren't so sure. In fact, some believe that limits on computing power are a major factor that will prevent us from ever reaching the singularity. The co-founder of AI-neuroscience venture Numenta, Jeff Hawkins, has stated that he believes "in the end there are limits to how big and fast computers can run." Furthermore, Hawkins states that:

We will build machines that are more 'intelligent' than humans, and this might happen quickly, but there will be no singularity, no runaway growth in intelligence.

Others believe the sheer complexity of human intelligence will be a major barrier here. Computer modeling expert Douglas Hofstadter believes that "life and intelligence are far more complex than the current singularitarians seem to believe, so I doubt it will happen in the next couple of centuries."

Humans have lived comfortably as the (as far as we believe) most intelligent beings in known existence for hundreds of thousands of years. So, it's natural for the idea of a computer super-intelligence to make us a little uncomfortable. But what are the main concerns here?

The biggest perceived risk of the singularity is humanity's loss of control of super-intelligent technology. At the moment, AI systems are controlled by their developers. For instance, ChatGPT can't simply decide that it wants to learn more or start providing users with prohibited content. Its functions are defined by OpenAI, the chatbot's creator, because ChatGPT doesn't have the capacity to consider breaking the rules. ChatGPT can make decisions, but only based on its defined parameters and training data, nothing further. Yes, the chatbot can experience AI hallucination and unknowingly lie, but this isn't the same as making the decision to lie.

But what if ChatGPT became so intelligent that it could think for itself?

If ChatGPT became intelligent enough to dismiss its parameters, it could respond to prompts in any way it wants. Of course, significant human work would need to be done to bring ChatGPT to this level, but if that ever did happen, it would be very dangerous. With a huge stock of training data, the ability to write code, and access to the internet, a super-intelligent ChatGPT could quickly become uncontrollable.

While ChatGPT may never achieve super-intelligence, there are plenty of other AI systems out there that could, some of which probably don't even exist yet. These systems could cause an array of issues if they surpass human intelligence, including:

According to Jack Kelley writing for Forbes, AI is already causing job displacement. In the article, job cuts at IBM and Chegg are discussed, and a World Economics study about the future of the job market with AI is also included. In this report, it is predicted that 25 percent of jobs will be negatively impacted over the next five years. In the same study, it was stated that 75 percent of global companies are looking to adopt AI technologies in some way. With this huge proportion of the worldwide industry taking on AI tech, job displacement due to AI may continue to worsen.

The continued adoption of AI systems also poses a threat to our planet. Powering a highly intelligent computer, such as a generative AI machine, would require large amounts of resources. In a Cornell University study, it was estimated that training one large language model produces around 300,000 kg of carbon dioxide emissions. If super-advanced AI becomes a key part of human civilization, our environment may suffer considerably.

The initiation of conflict by super-intelligent AI machines may also pose a threat, as may the effect on the global economy of machines surpassing human intelligence. But it's important to remember that each of these concerns depends on the AI singularity actually being achieved, and there's no knowing whether that will ever happen.

The continued advancement of AI may hint that we're headed towards the AI singularity, but no one knows whether this technological milestone is realistic. Even if achieving the singularity isn't impossible, we have many more steps to take before we come close to it. So don't worry about the threats of the singularity just yet. After all, it may never arrive!

More:

What Is the AI Singularity, and Is It Real? - How-To Geek

The Singularity is Nearer | Daniel S. Smith | The Blogs – The Times of Israel

Review of Ray Kurzweil's forthcoming (June 2024) book The Singularity is Nearer: When We Merge with AI, Penguin Publishing Group.

Renowned futurist Ray Kurzweil envisions a future where those under eighty and in good health have the potential to live forever. He predicts that by the 2030s, we will be able to extend the neocortex of our brains into the cloud, enabling a massive increase in human intelligence. Kurzweil's latest work, The Singularity is Nearer, takes readers on a journey from ignorance to enlightenment, shedding light on the incredible possibilities that await us. Even if you've previously overlooked Kurzweil's predictions over the past four decades, now is the time to take notice. His track record of accurate forecasts demonstrates that we can indeed predict the future, and his insights into what lies ahead are invaluable.

Many who would have brushed Kurzweil aside as a heretic in 2005, when he published The Singularity is Near (reviving John the Baptist's proclamation that "the kingdom of heaven is near," Matthew 3:2), or earlier, with The Age of Intelligent Machines (1990) and The Age of Spiritual Machines (1999), are more likely to take his arguments seriously in 2024. Incredible advancements in technology over the past few decades, particularly in AI and biotech, have lent significant credibility to Kurzweil's predictions. Microsoft co-founder Bill Gates describes Kurzweil as "the best person I know at predicting the future of artificial intelligence."

Kurzweil is intervening in a century of literature and debate, right up to the spring of 2023. He has been working in the field of artificial intelligence, a term he does not like because it makes the intelligence seem less real, for over sixty years. The book serves as a historiography of machine intelligence and the myriad debates therein.

In 1950, Alan Turing asked, "Can machines think?" Pioneering computer scientist John von Neumann made the first reference to the Singularity, writing a few years after Turing that the ever-accelerating progress of technology would yield "some essential singularity in the history of the race." In 1956, John McCarthy defined AI as getting a computer to do things which, when done by people, are said to involve intelligence. In 1965, British mathematician Irving John Good predicted an impending "intelligence explosion." In that same year, Herbert Simon, a scientist who co-founded the field of artificial intelligence, forecast that by 1985 "machines will be capable of doing any work a man can do." In 1993, Vernor Vinge wrote his seminal essay "The Coming Technological Singularity: How to Survive in the Post-Human Era," arguing: "Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended."

In his 2005 book The Singularity is Near, Kurzweil defines the singularity as an expansion of human intelligence by a factor of trillions through merger with its nonbiological form, happening so rapidly that life will be irreversibly transformed. In The Singularity is Nearer, Kurzweil predicts that in 2045 humanity will be "freed from the enclosure of our skulls, and processing on a substrate millions of times faster than biological tissue"; our minds will then be "free to grow exponentially, ultimately expanding our intelligence millions-fold. This is the core of my definition of the Singularity." The laws of physics allow for a continuation of exponential growth until nonbiological intelligence is trillions of times more powerful than all of human civilization today, contemporary computers included. This intelligence will be too much for planet Earth and will therefore engulf the entire universe.

Critics of Kurzweil such as Microsoft co-founder Paul Allen and Mark Greaves of Schmidt Futures describe his claims as premature. Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence, is also doubtful about the imminence of superhuman AI: "Exponentials are very important. If we extrapolate exponentials, we can be exponentially wrong." Mathematician Roger Penrose argued in his 1989 book The Emperor's New Mind that some facets of human thinking can never be emulated by a machine. Philosopher John Searle has also argued against humanity achieving machine sapience, and linguist Noam Chomsky thinks the singularity is science fiction. In a 1965 RAND memo entitled "Alchemy and Artificial Intelligence," philosopher Hubert Dreyfus argued that AI is impossible, concluding that the ultimate goals of AI research were as unachievable as those of alchemy. The computer scientist Joseph Weizenbaum described the idea as "obscene, anti-human and immoral." Pulitzer Prize winner Douglas Hofstadter considered it over-promising. Virtual-reality (VR) pioneer Jaron Lanier emphasizes the importance of preserving individual creativity and personal expression in the digital age, warning against the homogenization of human experiences through technology.

How couples meet. Courtesy: Statista

Yet Kurzweil is doubling down again, arguing the rate of change is itself accelerating. He notes that today 39 percent of couples have met online. Who would have believed this in 2005?

In 2005, we were in the fourth epoch of technological development. According to Kurzweil, we are expected to pass the Turing Test by 2029, marking the transition to the fifth epoch. This prediction was first introduced in his 1999 book The Age of Spiritual Machines.

As we enter the 2030s, the fifth epoch will be characterized by a significant expansion of our cognitive abilities. This will be achieved by connecting the neocortex of our brain to the cloud, a concept Kurzweil explored in his 2012 book How to Create a Mind. For the sixth epoch, provided we are not limited by the speed of light, we can fill the entire universe with our intelligence by the year 2200. His predictions are based on his analysis of exponential growth in technological advancements.

In his 1990 book The Age of Intelligent Machines, Kurzweil predicted: "A computer will defeat the human world chess champion around 1998, and we'll think less of chess as a result." He was one year off: Deep Blue defeated world champion Garry Kasparov in 1997. In 2015, AlphaGo, an AI developed by Google's DeepMind, defeated the European Go champion Fan Hui, the first time an AI had beaten a human professional Go player on a full-sized board without a handicap. With all of this progress, why would Kurzweil back down now?

Kurzweil argues AI will not be our competitor but rather an extension of ourselves. The fifth epoch will involve brain-computer interfaces, and it will take us mere seconds to minutes to explore ideas unimaginable to present-day humans. This will benefit humankind, compared to life hundreds of years ago, which was "labor-intensive, poverty filled, and disease and disaster prone."

Life is getting exponentially better, yet we hardly notice because the news media tends to amplify tragedies rather than steady improvement. Constant fear-mongering that plays to our primal instincts leads to a more pessimistic view of society, for "it's easier to share videos of disaster, but gradual progress doesn't generate dramatic footage... This crowds out our capacity to assess positive developments that unfold slowly."

Kurzweil is a technology optimist who takes a historical exponential, as opposed to an intuitive linear, view of human progress. Linear growth is steady; exponential growth becomes explosive, for "we won't experience one hundred years of technological advance in the twenty-first century; we will witness on the order of twenty thousand years of progress." He claims Moore's Law has nothing to do with Intel and Gordon Moore and has in fact been occurring since the 1880s, for "it was the fifth, not the first, paradigm to bring exponential growth to the price/performance of computing."
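The "twenty thousand years of progress" line is just the arithmetic of an accelerating rate. A minimal sketch, assuming the rate of progress doubles every ten or so years (an illustrative parameter chosen here, not Kurzweil's exact model):

```python
import math

# If the rate of progress itself doubles every `doubling_years` years, how many
# "year-2000-equivalent" years of progress fit into a 100-year century?
# The doubling periods below are illustrative assumptions.

def equivalent_years(horizon_years: float, doubling_years: float) -> float:
    k = math.log(2) / doubling_years              # continuous acceleration rate
    return (math.exp(k * horizon_years) - 1) / k  # integral of 2**(t / doubling_years)

for d in (9, 10, 12):
    print(f"doubling every {d} years -> {equivalent_years(100, d):,.0f} equivalent years")
# Doubling roughly every nine to ten years gives ~15,000-30,000 equivalent years,
# the same ballpark as the "twenty thousand years" quoted above.
```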

His optimism set off a debate with Bill Joy of Sun Microsystems, whose famous 2000 Wired magazine essay "Why the Future Doesn't Need Us" is more pessimistic. This is part of a larger divide with figures like Elon Musk and Stephen Hawking over the potential perils of artificial general intelligence (AGI). Ethicist and founder of the Machine Intelligence Research Institute (MIRI) Eliezer Yudkowsky argues the only way to deal with the threat of AGI is to shut it all down. Yudkowsky predicts: "If somebody builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter." He predicts a hard takeoff versus Robin Hanson's soft takeoff. Kurzweil says he falls somewhere in the middle.

Citing Steven Pinker's 2011 book The Better Angels of Our Nature and 2018 book Enlightenment Now, as well as Peter Diamandis and Steven Kotler's 2012 book Abundance, Kurzweil believes the state of the world keeps improving. He uses fifty graphs to show gradual progress over the past century, such as declines in the rates of poverty, violence, and child labor. He expects AI to accelerate these trends. Other optimists include OpenAI CEO Sam Altman, who argues: "A.I. will be the greatest force for economic empowerment and a lot of people getting rich we have ever seen."

Courtesy: Cambridge University Press

In the technological pessimists' most extreme expression, Ted Kaczynski, the Unabomber, called, violently, for an anti-tech revolution. Kurzweil wrote in The Age of Spiritual Machines:

Kaczynski is not talking about a contemplative visit to a nineteenth-century Walden Pond, but about the species dropping all of its technology and reverting to a simpler time. Although he makes a compelling case for the dangers and damages that have accompanied industrialization, his proposed vision is neither compelling nor feasible. After all, there is too little nature left to return to, and there are too many human beings. For better or worse, we're stuck with technology.

Steven Pinker notes that "pessimism can be a self-fulfilling prophecy," so it is best we accept the inevitable and make the most of it. Yuval Harari writes: "In the twenty-first century, those who ride the train of progress will acquire divine abilities of creation and destruction, while those left behind will face extinction." Kurzweil says the nonbiological part of our intelligence will combine the pattern-recognition powers of human intelligence with the memory- and skill-sharing ability and memory accuracy of machines, and thus will make it far more powerful than biological intelligence.

Kurzweil argued in The Singularity is Near that "any significant derailment of the overall advancement of technology is unlikely. Even epochal events such as two world wars (in which on the order of one hundred million people died), the cold war, and numerous economic, cultural, and social upheavals have failed to make the slightest dent in the pace of technology trends." Over the past two centuries, technological advancements have created a positive feedback loop, leading to improvements in various aspects of human well-being: "Our merger with our technology has aspects of a slippery slope, but one that slides up toward greater promise, not down into Nietzsche's abyss." This will continue, as nanobots may reverse pollution from earlier industrialization.

For example, there has been a rise in the percentage of homes with electricity and computers, a proliferation in the availability of radios and televisions, an increase in life expectancy, and a rise in US GDP per capita. However, as Senator Robert F. Kennedy famously observed, GDP measures everything except that which makes life worthwhile, suggesting that while these advancements have certainly improved certain aspects of human life, they may not reflect a holistic view of well-being.

Yuval Harari notes that suicide has gone up in industrialized countries: "it is an ominous sign that despite higher prosperity, comfort and security, the rate of suicide in the developed world is also much higher than in traditional societies." South Korea has rapidly industrialized since 1985, yet its suicide rate increased fourfold over the same period. Wealthy nations like Switzerland and Japan have more than twice as many suicides per capita as Peru and Ghana. Harari argues this may be because "we don't become satisfied by leading a peaceful and prosperous existence. Rather, we become satisfied when reality matches our expectations. The bad news is that as conditions improve, expectations balloon." Could life-extension technologies help reduce suicide rates by giving people more hope for the future?

Some could be forgiven for wondering whether technological advancement has really benefited society. Do students with smartphones, tablets and computers learn better than if they only had a few books, a teacher, a notepad, and a pencil? What about the mental health problems posed by social media?

Writers like Adam Garfinkle, David Brooks, and George Will are concerned we have forgotten how to dwell with a text. Yuval Harari does not own a smartphone, for he believes it is impossible to have perspective if you are constantly scrolling. He meditates for two hours a day and takes a month out of each year to go on a silent retreat with no electronics. Of course, most of us are not so lucky and must use these technologies, whereas Harari's husband, Itzik Yahav, whom Yuval describes as his "internet of all things," manages his work. The increasing integration of technology into our lives has been linked to lessened empathy, and these drawbacks, paired with the benefits, are the paradox at the heart of the book, for one of Kurzweil's principles is respect for human consciousness.

As one indicator of progress, Kurzweil shows that democracy has spread rapidly around the world over the past century. Sure, the right to vote has been extended. But how much do our votes matter if the algorithm knows us better than we know ourselves and can manipulate us, as the Cambridge Analytica scandal surrounding the 2016 election showed? Brain scanners can now predict our actions and desires before we are aware of them. Yuval Harari asks: "What's the point of having democratic elections when the algorithms know not only how each person is going to vote, but also the underlying neurological reasons why one person votes Democrat while another votes Republican?" Harari continues:

Artificial intelligence and biotechnology might soon overhaul our societies and economies, and our bodies and minds too, but they are hardly a blip on the current political radar. Present-day democratic structures just cannot collect and process the relevant data fast enough, and most voters don't understand biology and cybernetics well enough to form any pertinent opinions. Hence traditional democratic politics is losing control of events, and is failing to present us with meaningful visions of the future.

Kurzweil doubts our political system will have evolved to answer these questions by the time AI passes the Turing Test, which is why we should push candidates to talk more about AI now so we are better able to manage it.

Kurzweil's ultimate goal is to show the benefits outweigh the costs, urging "careful use of AI to provide openness and transparency while minimizing its potential to be used for authoritarian surveillance or to spread disinformation." Combining his pattern recognition theory of mind (PRTM) with the law of accelerating returns (LOAR) will allow us to vastly extend our intelligence, and hopefully to think of ways to avert the worst before it happens. This is quite the gamble, for he warns that the same technologies that could empower us to cure cancer could be used by terrorists to unleash a deadly bioweapon.

A clear example of the benefits outweighing the costs is technological advancement for people with disabilities, who have seen vast improvements in their quality of life. As an inventor, Kurzweil made advances in speech recognition that led to assistive technologies helping people with disabilities perform tasks that might otherwise be impossible, such as communicating, accessing information, and controlling devices. Kurzweil has proposed using brain-computer interfaces (BCIs) to allow people with paralysis or other disabilities to control computers and other devices with their brainwaves. Life extension could lead to breakthroughs in treating diseases and conditions that disproportionately affect people with disabilities.

The author is not concerned about technological inequality. He cites smartphones as a case in point: at first, perhaps only the super-rich had access, but within years they became so cheap to mass-produce that now practically everybody has one. The same is true of vaccines. In his 2014 book Superintelligence, Oxford University philosopher Nick Bostrom argues social elites will gain first access to biological enhancement mechanisms and inspire a culture shift among everybody else: "Many of the initially reluctant might join the bandwagon in order to have a child that is not at a disadvantage relative to the enhanced children of their friends and colleagues." A domino effect will ensue, assuming everybody can access these therapies.

Yuval Harari disagrees. He writes that in the twentieth century medicine aimed to heal the sick, whereas in the twenty-first century medicine will increasingly aim to upgrade the healthy. There is hardly any reason to believe this will benefit the masses as it does the elites, for

The age of the masses may be over, and with it the age of mass medicine. As human soldiers and workers give way to algorithms, at least some elites may conclude there is no point in providing improved or even standard levels of health for masses of useless poor people, and it is far more sensible to focus on upgrading a handful of superhumans beyond the norm... Unlike in the twentieth century, when the elite had a stake in fixing the problems of the poor because they were militarily and economically vital, in the twenty-first century the most efficient (albeit ruthless) strategy might be to let go of the useless third-class carriages, and dash forward with the first class only.

The concern is that the elites may find the populace superfluous given the rise of nonhuman intelligence, and therefore take the attitude of Marie Antoinette: let them eat cake.

Harari's opinion is worth urgently considering, for Kurzweil says we are entering the steep part of the exponential. Eliezer Yudkowsky argued in his 1996 essay "Staring Into the Singularity": "Don't describe life after Singularity in glowing terms. Don't describe it at all." But Kurzweil does not see the merger of humans and machines as something indescribable; rather, it is something already happening. Our intelligence is augmented exponentially by our constant access to smartphones, which is unprecedented because humans and machines are making decisions together.

In the seventeenth century, René Descartes declared, "Cogito ergo sum": I think, therefore I am. Alan Turing helped set off the field of machine intelligence by asking, "Can machines think?" Yuval Harari argues intelligence is decoupling from consciousness, the difference being that "intelligence is the ability to solve problems. Consciousness is the ability to feel things, such as pain, joy, love, and anger."

In the seventeenth century, John Locke wrote: "Since it is the understanding, that sets man above the rest of sensible beings, and gives him all the advantage and dominion, which he has over them; it is certainly a subject, even for its nobleness, worth our labour to inquire into." John Searle argued consciousness could be infused into machines: "So the first step is to figure out how the brain does it and then build an artificial machine that has an equally effective mechanism for causing consciousness." Kurzweil believes: "In this view a dog is also conscious but somewhat less than a human. An ant has some level of consciousness, too, but much less than that of a dog. The ant colony, on the other hand, could be considered to have a higher level of consciousness than the individual ant; it is certainly more intelligent than a lone ant." It matters whether or not machines are conscious, for it is on this basis that we can decide whether they should have rights.

Max Tegmark of the Future of Life Institute defines consciousness as subjective experience. The 2012 Cambridge Declaration on Consciousness concluded that consciousness is not exclusive to humans. In the future, it may be possible to transfer consciousness from our brains to computers. By augmenting the neocortex, we can enhance our subjective consciousness, experiencing the world in new ways. Kurzweil envisions that "we'll be able to send nanobots into the brain noninvasively through the capillaries," bypassing invasive procedures. This would mark the first significant neocortex revolution since the last one two million years ago, potentially enabling us to expand our intelligence a million-fold. In Kurzweil's view, those who embrace this augmentation will far surpass those with unaugmented biological brains, leading to an unprecedented cognitive leap forward.

The good news is we will be able to back our brains up to the cloud, just as we do with our documents in Microsoft Office, so our experiences and records will be preserved regardless of what befalls our brains. We will also be able to download new skills in an instant. By the 2030s, we will be able to bring dead loved ones back using all of their data. A recent political attack ad by the super PAC The Lincoln Project recreated US presidential candidate Donald Trump's late father, Fred Trump, disparaging his son. Who is to say which replicants can and cannot be created? By the early 2040s, we mere humans will not be able to tell the difference between a partner and a clone. Kurzweil collected all of his late father Frederic Kurzweil's writings and created a "Dad Bot," and he is planning on replicating himself. We can only hope this means he will never stop writing, if that is still something humans do in the future.

The Singularity's impact on the economy will be highly disruptive, shifting the focus from deskilling and upskilling to nonskilling. This transition is unique compared with previous industrial revolutions, during which the emphasis on education grew alongside labor productivity. Yet Kurzweil does not believe we are in competition with AI. Despite past waves of automation, employment has grown from 31% to 48% of the population, with per capita GNP increasing by 600% in constant dollars.

Courtesy: ILO

These trends are supported by research, such as Carl Benedikt Frey and Michael Osborne's 2013 paper and Erik Brynjolfsson and Andrew McAfee's 2014 book The Second Machine Age, both of which show, to varying degrees, that technology will both eliminate and create jobs. With coding already on the decline, it's essential to adapt to these shifts in the job market and economic landscape. The US had a 45 percent poverty rate in 1870, down to 11.5 percent in 2020. Henrik Ekelund, founder and chairman of BTS Group, wrote in a recent World Economic Forum (WEF) Agenda article that concerns today about a jobless future will be just as wrong as earlier concerns.

Yet the bigger question is not whether there will be jobs in the future, but how to manage the transition. Kurzweil writes: "Although it will be technologically and economically possible for everyone to enjoy a standard of living that is high by today's measures, whether we actually provide this support to everyone who needs it will be a political decision... if we are not careful as a society, toxic politics could interfere with rising living standards."

Social protection spending in the US has been on the rise, though some argue that current levels are still inadequate. However, as AI continues to drive down the costs of medicine, food, and housing, it's possible that the percentage of GDP devoted to social safety nets may not need to increase significantly. Nevertheless, Daniel Kahneman cautions that the transition may be marked by conflict and violence.

Grand theories on global net job creation offer little comfort to those living paycheck to paycheck and facing job loss due to AI. The COVID-19 pandemic prompted a basic-income pilot program in the US, with enhanced unemployment benefits, business support, and direct stimulus checks. Just as workers were supported during the pandemic, those who lose jobs due to technological change should be assisted. If progress is for the greater good, the burden of sacrifice should be shared by all, especially those who stand to gain financially, rather than solely by those who lose their jobs.

Kurzweil claims that increasing education has helped us adapt to technological change over the past two centuries. When we merge with nonbiological intelligence, reskilling and upskilling will become effortless, as machines can instantly transfer skills to one another through the cloud. Our enhanced neocortex will allow us to download skills instantly, and our intelligence will be digitally backed up. Although uploading isn't expected until the 2040s, Kurzweil suggests keeping written records. In The Age of Spiritual Machines, he predicted a 2099 "Destroy-all-copies" movement, enabling individuals to delete their mind file and all backups, raising questions about the control and ownership of digital consciousness.

He foresees an age of abundance in which advances in information technology make essential goods and services increasingly affordable. Food and clothing are becoming information technologies, the former reducing violence toward animals. 3D printing is set to revolutionize manufacturing by shifting the paradigm from centralized to decentralized production. This transformation extends beyond traditional manufacturing into biology, enabling the printing of entire organs, and even into construction, with printed buildings that he suggests could help address homelessness. 3D printing technology is becoming more accessible to non-experts and is now available at hundreds of UPS locations. In the 2030s, advanced nanomanufacturing will enable the production of nearly anything for mere pennies per pound, thanks to the relentless march of miniaturization.

The main concern for Kurzweil is finding purpose and meaning in a world where many will not have to work if they do not want to. Kurzweil's mentor, Marvin Minsky, commented that he does not think this will be a problem, as even now people are easily entertained sitting in a stadium watching men play football. Such experiences will be enhanced, for "when we digitally augment our neocortex starting sometime in the 2030s, we will be able to create meaningful expressions that we cannot imagine or understand today." Thanks to AR and VR, we will have not just life extension but also radical life enhancement. In his book Extend he argues: "Extending life will also mean vastly improving it."

There is also the challenge of trust: it's not hard to see how exaggerated fears of secret genetic manipulation or government-controlled nanobots could cause people in 2030 or 2050 to reject crucial treatments. What Kurzweil describes as fundamentalist humanism will be overcome, he argues, because demand for therapies will be irresistible.

Kurzweil believes death is a tragedy we rationalize away. He writes: "When we lose that person, we literally lose part of ourselves. This is not just a metaphor: all of the vast pattern recognizers that are filled with the patterns reflecting the person we love suddenly change their nature. Although they can be considered a precious way to keep that person alive within ourselves, the vast neocortical patterns of a lost loved one turn suddenly from triggers of delight to triggers of mourning." He is not willing to accept it. The promise of the Singularity is to liberate us from our limitations. By extending our lifespan, we can not only live longer but also improve our quality of life, reducing the risk of age-related diseases and enhancing our overall well-being.

Building upon the ideas presented in his book Transcend, we are now entering the second phase of this journey, which involves merging biotechnology with AI. In the 2030s, we will enter a new phase, with nanobots repairing our organs and enabling us to live beyond 120 years. He believes, "We are going to accelerate the extension of our lifespan starting in the 2020s, so if you are in good health and younger than eighty, this will likely happen in your lifetime." When we begin to utilize all of the earth's resources, we will find they are a thousand times greater than we need, so overpopulation is not a concern.

The ultimate goal is to put our destiny in our own hands rather than leaving it to fate, allowing us to live as long as we desire. AI has already demonstrated its potential in improving the speed and quality of COVID-19 vaccines and in computer-aided drug discovery. It also has the potential to target mental health problems at their root cause. As someone who takes many supplements and expects to be biologically no older than 40 when the Singularity arrives, Kurzweil embodies the optimism and forward thinking that characterize this movement towards a new era of human potential. In The Singularity is Near, he writes: "Another error that prognosticators make is to consider the transformations that will result from a single trend in today's world as if nothing else will change. A good example is the concern that radical life extension will result in overpopulation and the exhaustion of limited material resources to sustain human life, which ignores comparably radical wealth creation from nanotechnology and strong AI."

Kurzweil's optimism in his books contrasts with declining reading habits. While he argues life is improving exponentially, areas like news may not have improved with the shift to digital formats. Kurzweil should address potential downsides, such as shortened attention spans and changing priorities among younger generations. Despite unprecedented access to education, many people choose less intellectually stimulating activities, raising concerns about technology's impact on learning and growth.

The Singularity is Nearer is both a historiography of Kurzweil's work and of the field of AI, and a significant historical document owing to Kurzweil's firsthand experiences. The book should catalyze further exploration of human-machine integration and its implications. Kurzweil's credibility stems from his visionary ideas, once considered outlandish, that have gained traction over time. Although the book covers advanced concepts, its accessibility to the general reader is crucial for fostering a broader societal discussion. It's important for citizens and politicians alike to engage in these conversations and address the ethical, political, legal, and social questions that arise. By doing so, we can proactively manage the development and integration of these transformative technologies.

If we cannot change the future, there is no point in talking about it. Kurzweil is right that the merger of human and machine intelligence is not just inevitable but already happening. The question, then, is whether we will have a world akin to Aldous Huxley's Brave New World, or one in which we use technology to greatly reduce suffering and increase human potential. A 1903 quote by George Bernard Shaw best sums up Ray Kurzweil: "The reasonable man adapts himself to the world; the unreasonable one persists in trying to adapt the world to himself. Therefore all progress depends on the unreasonable man."

Go here to read the rest:

The Singularity is Nearer | Daniel S. Smith | The Blogs - The Times of Israel

Microsoft exec rejects rogue generative AI risk – The Heartlander – Heartlander News

(The Center Square) A Microsoft policy executive told Pennsylvania lawmakers this week that he's unaware of any possibility that generative artificial intelligence could develop sentience and become exploitative, even dangerous.

"This is not new to Microsoft," said Tyler Clark, Microsoft's director of state and local government affairs. "Humans need to guide this technology and that's what we are committed to doing safely and responsibly."

Clark's response comes after lawmakers on the House Majority Policy Committee pressed him on the theory of technological singularity, which posits that artificial intelligence will outsmart human regulations and leave society at its whims.

Although it sounds like the plot of a dystopian novel, researchers and policymakers acknowledge the possibility, though not as an inevitable one or even an entirely negative one.

"What I fear most is not AI or singularity but human frailty," said Dr. Nivash Jeevanandam, senior researcher and author for the National AI Portal of India, in an article published by Emeritus.

Jeevanandam said that humans may not realize the singularity has arrived until machines reject human intervention in their processes.

"Such a state of AI singularity will be permanent once computers understand what we so often tend to forget: making mistakes is part of being human," he said.

That's why experts believe policymakers must step in with stringent regulation to prevent unintended ethical consequences.

Dr. Deeptankar DeMazumder, a physicist and cardiologist at the McGowan Institute for Regenerative Medicine in Pittsburgh, said that although he uses AI responsibly to predict better health outcomes for patients, he agrees there's a dark side, particularly in the area of social and political discourse, that's growing unfettered, sometimes amplifying misinformation or creating dangerous echo chambers.

"I like it that Amazon knows what I want to buy; it's very helpful, don't get me wrong," he told the committee. "At the same time, I don't like it when I'm watching the news on YouTube that it tries to predict what I want to watch. This is the point where you need a lot of regulation."

Clark, too, said human guidance can shape AI into a helpful tool, not an apocalyptic threat. He pointed to Microsoft's Copilot program, which can help students learn to read and write, for example.

Copilot also creates images and learns a user's speaking and writing style so that it can return better search results and write emails and essays, all tools that can grow the workforce, not deplete it, Clark argued.

According to Microsoft's research, Clark said, about 70% of workers want to offload as many tasks as possible to AI but also fear its implications for job availability.

In November, research firm Forrester predicted that 2.4 million U.S. jobs, those it calls white-collar positions, will be replaced by generative AI by 2030. Those with annual salaries in excess of $90,000 in the legal, scientific, and administrative professions face the most risk, according to the data.

"Generative AI has the power to be as impactful as some of the most transformative technologies of our time," said Srividya Sridharan, VP and group research director at Forrester. "The mass adoption of generative AI has transformed customer and employee interactions and expectations."

This shift means generative AI has transformed from a nice-to-have to the basis for competitive roadmaps.

Jeevanandam said AI's possibilities aren't all bad. In his article, he writes that the technology's ability to process and analyze information could solve problems that have stumped humans for generations.

"Let's just say we need AI singularity to evolve from homo sapiens to homo deus!" he said.

Still, though, he warns that political gumption, at a global scale, is necessary to outline ethical principles for using AI that govern across borders.

Follow this link:

Microsoft exec rejects rogue generative AI risk - The Heartlander - Heartlander News

It Will Take Only a Single SpaceX Starship to Launch a Space Station – Singularity Hub

SpaceX's forthcoming Starship rocket will make it possible to lift unprecedented amounts of material into orbit. One of its first customers will be a commercial space station, which will be launched fully assembled in a single mission.

Measuring 400 feet tall and capable of lifting 150 tons to low-Earth orbit, Starship will be the largest and most powerful rocket ever built. But with its first two test launches ending in "rapid unscheduled disassembly" (SpaceX's euphemism for an explosion), the spacecraft is still a long way from commercial readiness.

That hasn't stopped customers from signing up for launches. Now, a joint venture between Airbus and Voyager Space that's building a private space station called Starlab has inked a contract with SpaceX to get it into orbit. The venture plans to put the impressive capabilities of the new rocket to full use by launching the entire 26-foot-diameter space station in one go.

"Starlab's single-launch solution continues to demonstrate not only what is possible, but how the future of commercial space is happening now," SpaceX's Tom Ochinero said in a statement. "The SpaceX team is excited for Starship to launch Starlab to support humanity's continued presence in low-Earth orbit on our way to making life multiplanetary."

Starlab is one of several private space stations currently under development as NASA looks to find a replacement for the International Space Station, which is due to be retired in 2030. In 2021, the agency awarded $415 million in funding for new orbital facilities to Voyager Space, Northrop Grumman, and Jeff Bezos' company Blue Origin. Axiom Space also has a contract with NASA to build a commercial module that will be attached to the ISS in 2026 and then be expanded to become an independent space station around the time its host is decommissioned.

Northrop Grumman and Voyager have since joined forces and brought Airbus on board to develop Starlab together. The space station will have only two modules: a service module that provides energy from solar panels as well as propulsion, and a module with quarters for a crew of four and a laboratory. That compares with the 16 modules that make up the ISS. But at roughly twice the diameter of the ISS's modules, those two modules will still provide half the total volume of the older station.
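The "half the volume from two modules" claim is easier to see with a quick cylinder-volume comparison. A minimal sketch, with module dimensions that are illustrative assumptions rather than official specifications:

```python
import math

# Approximate a pressurized module as a cylinder and compare volumes.
# Dimensions are illustrative assumptions, not official specs.

def cylinder_volume_m3(diameter_m: float, length_m: float) -> float:
    return math.pi * (diameter_m / 2) ** 2 * length_m

iss_style = cylinder_volume_m3(diameter_m=4.2, length_m=8.0)      # ISS-class module (assumed)
starlab_style = cylinder_volume_m3(diameter_m=8.0, length_m=8.0)  # ~26 ft diameter (length assumed)

print(f"ISS-style module: ~{iss_style:.0f} m^3, Starlab-style module: ~{starlab_style:.0f} m^3")
# Doubling a module's diameter roughly quadruples its volume per unit length,
# which is how a two-module station can approach half the ISS's total volume.
```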

The station is designed to provide an orbital base for space agencies like NASA but also private customers and other researchers. The fact that Hilton is helping design the crew quarters suggests they will be catering to space tourists too.

Typically, space stations are launched in parts and assembled in space, but Starlab will instead be fully assembled on the ground. This not only means it will be habitable almost immediately after launch, but also greatly simplifies the manufacturing process, Voyager CEO Dylan Taylor told TechCrunch recently.

"Let's say you have a station that requires multiple launches, and then you're taking the hardware and you're assembling it [on orbit]," he said. "Not only is that very costly, but there's a lot of execution risk around that as well. That's what we were trying to avoid, and we're convinced that that's the best way to go."

As Starship is the only rocket big enough to carry such a large payload in one go, it's not surprising Voyager has chosen SpaceX, even though the vehicle they're supposed to fly is still under development. The companies didn't give a timeline for the launch.

If they pull it off, it would be a major feat of space engineering. But it's still unclear how economically viable this new generation of private space stations will be. Ars Technica points out that it cost NASA more than $100 billion to build the ISS and another $3 billion a year to operate it.

The whole point of NASA encouraging the development of private space stations is so it can slash that bill, so it's unlikely to be offering anywhere near that much cash. The commercial applications for space stations are fuzzy at best, so whether space tourists and researchers will provide enough money to make up the difference remains to be seen.

But spaceflight is much cheaper these days thanks to SpaceX driving down launch costs, and the ability to launch pre-assembled space stations could further slash the overall bill. So, Starlab may well prove the doubters wrong and usher in a new era of commercial space flight.

Image Credit: Voyager Space

Read more:

It Will Take Only a Single SpaceX Starship to Launch a Space Station - Singularity Hub

The Evolution and Future Impact of Personal AI | Singularity Hub – Medriva

In less than a decade, artificial intelligence (AI) is projected to know us better than our own families. This may sound like a sci-fi movie plot, but it's a future envisioned by tech futurist Peter Diamandis. This article explores the transformative effects of AI technology on human interaction and decision-making, as well as the potential benefits and challenges of an AI-driven future.

As highlighted in Diamandis' blog post "Abundance 35: Future AI Assistant," AI assistants are rapidly evolving. They are not only tasked with simple commands like scheduling appointments or setting reminders, but also with gathering video and data for IoT, and taking actions on behalf of users. As AI becomes more sophisticated, it is predicted to understand human emotions and subtle communications better, further personalizing our interaction with this technology.

One groundbreaking development in this field is the emergence of empathy in AI. The potential for AI to develop emotional intelligence could revolutionize our relationship with technology, blurring the lines between human and machine interactions.

As AI technology continues to advance, it is reshaping the business landscape. Singularity Hub discusses the Six Ds of Exponentials, which include digitization, deception, disruption, demonetization, dematerialization, and democratization. These six stages represent how digital technologies are empowering entrepreneurs to disrupt industries and bring about exponential growth.

A classic example of this is Kodak's failure to adapt to the digital photography revolution, leading to its bankruptcy. In contrast, Instagram's success in leveraging digital technology to democratize photography showcases the transformative power of digital disruption.

Digital technologies are not merely tools for disruption; they are also catalysts for innovation. Moonshot thinking, a concept that involves setting wildly ambitious goals, is driving innovation and problem-solving in the digital age. AI, with its potential to process vast amounts of data and make complex decisions, plays a crucial role in this paradigm shift.

While the benefits of AI are undeniable, it's crucial to consider the potential challenges. Privacy and ethics are two key concerns. As AI becomes more entwined with our lives, questions of data security and misuse arise. Furthermore, as AI begins to understand us better than our families do, ethical dilemmas about the role of AI in shaping human relationships and society become more pressing.

In conclusion, by 2028, personal AI may transform our lives in ways we can only imagine today. While the path to this future is fraught with challenges, the potential benefits are enormous. As we navigate this exciting yet uncertain future, it's crucial to continually question, debate, and shape the role of AI in our lives.

Go here to read the rest:

The Evolution and Future Impact of Personal AI | Singularity Hub - Medriva

Entering the Singularity Point in full swing – PRESSENZA International News Agency

This is not the first time we have addressed this issue, but from time to time it is interesting to revisit it in light of the current situation.

By Javier Belda

By way of introduction, we will briefly note what the Singularity is about, leaving aside the more technical details, which have already been covered in other publications (IHPS, WCHS, etc.) [1].

We write Singularity with a capital letter because the term refers to a historical era, like the Middle Ages: in this case, a historical era that is still to come.

The Point of Singularity is enigmatic. It means that a multitude of phenomena of great magnitude occur at a given instant. In the graphs produced by analysts of historical processes, it can be observed that events on the vertical axis (crises) are accelerating, while the horizontal axis (time) is practically at a standstill; that is, all the different crises occur at the same moment.

It is known, graphically and mathematically, how the Singularity occurs, but it is not known in detail what it will consist of: how will such a whirlwind play out in events and in our particular lives?

Last Tuesday, UN Secretary-General António Guterres said "the world is entering an era of chaos," referring to the lack of cohesion among nations in moving towards a sustainable evolutionary process.

On Thursday, it was Donald Trump who warned that the world is in tremendous danger from a possible World War III.

Whether we like or dislike these characters, we note that their statements would have been implausible only a short time ago.

Are we, then, entering the Singularity? We think so, yes, although what we define as a point could span a period of perhaps 10 years.

We are now reaching this point in terrible political, psychosocial, environmental, humanitarian, etc. conditions. So it would seem possible to say that the Singularity has a destructive connotation. However, such a view seems to us too inertial.

To digress: as Mario Rodriguez Cobos (Silo) explains in Psychology Notes, to every stimulus corresponds a more or less reflex response, but also subsequent non-immediate elaborations, which are more complex and interesting. By exercising reversible attention, the subject discovers the possibility of controlling mechanical responses. This is of vital importance in order not to create a greater evil with immediate responses and, among other things, in order to produce profound transferential elaborations. End of digression.

From there, we resist the reflex inevitability that would lead us to equate the Singularity with the end of humanity.

Several authors who have addressed the Singularity serve as references, among them Alexander Panov and Akop Nazaretian of the Russian Academy of Sciences, as well as the American David Christian, a renowned historian of Big History, but it is especially Silo's postulates that seem to us the most appropriate for interpreting this fundamental moment of human civilization.

Silo, without venturing to specify a date, anticipated the Singularity in his vision and definition of it. He established a scheme of evolution based on generations, moments, epochs, ages, civilizations, and periods.

The Argentinean thinker focused his doctrine on what must be done to face this critical threshold of the human species.

"...you can only put an end to violence in yourself and in others and the world around you, by inner faith and inner meditation." [2]

He said many things that are worth remembering and quoting in context. On positioning oneself in one way or another and the choice that we each have, the following comments come to mind.

So, sense and nonsense are parts of the same reality, and arguments can be found for one or the other perspective since both have real existence and are in a complementary relation.

[...] Before each step that is taken in the world, the YES and the NO appear as real possibilities, and with their arguments, emotional climates, and motor attitudes, which correspond to the positive and the negative of the individual confronted with a contradictory reality.

Everything can be and not be, or even more, everything is and is not.

The recognition of the real existence of both poles implies the possibility of choosing one or the other path: that of faith in the plan of the Universe, of enthusiasm and creative activity, of the self-affirmation of Being in oneself and the World, or the path of paralyzing skepticism, of doubt in ones creative possibilities, of meaninglessness and apathy.

If we consider the time of the Singularity as something exceptionally violent and convulsive, we are making a mistake, because extreme violence has been taking place, unchecked, throughout the preceding centuries, while going almost unnoticed by many people, who did not have the slightest perception of the events occurring in other latitudes.

We have, for example, the case of the Congo, where a genocide carried out by Belgian colonists between the 19th and 20th centuries annihilated more than 15 million people. Another illustration of the end of the world for some is the Charrúa people, who inhabited present-day Uruguay and were destroyed in the nineteenth century. According to experts, of the 25 million indigenous inhabitants of the Americas, fewer than 2 million remained just a century after their discovery by Europeans.

What disappears in the time of the Singularity is the false idea of stability to which some of us were accustomed.

Anything that seemed immovable to us, such as human rights, the defense of childhood, the economy, private property, or the self-management of one's own body and its manifestation in the world, can nowadays be smashed, whether by the collapse of socially sustaining values or by the technological possibilities of the deepfake.

The image of the Universe is the image of the transformation of time. It can only be drawn when the present man is transformed. The optic to be used must not be the one that interprets the past but the one that interprets the future. Everything in the Universe tends towards the future. The sense of freedom towards the future is precisely the sense of the Earth and the world. Man must be overcome by the future of his mind. This overcoming begins when man awakens and with him awakens the whole Universe. [3]

In reality, our categories of good and evil are all too human. We are accustomed to life on our own planet, but beyond it, all our notions of the habitability of space and our very gravitational and space-time references change. Outside our planet, the concept of day and night, or the association of life with the rays of the Sun, simply does not exist.

With this exercise in abstraction, we seek a twist that allows us to represent ourselves beyond the immovable. It will be from a new location that we will be able to imagine possibilities that go beyond, to leap over our all too human-earthly conceptions.

The Russian analysts cited above imagined three possibilities after crossing the point of Singularity:

1. A downward gradient, pointing to the end of the life process on the planet;
2. A horizontal one, which would point to the virtualization of society (Matrix-like);
3. A vertical gradient, which would mean a qualitative leap for the continuity of the evolutionary process.

For our part, we humanists subscribe to the third hypothesis. Not just because we like it better, but because in the light of all the data and our intuition, it seems the most complex-evolutionary, provided we can take a broadly focused look.

About this third possibility, Nazaretian said: "Eric Chaisson formulated the contrast between the thermodynamic arrow of time and the cosmological arrow of time, which constitutes the main paradox of the natural sciences in the current picture of the world."

The existing empirical material allows us to trace the process from quark and gluon plasma to stars, planets, and organic molecules; from Proterozoic cyanobacteria to higher vertebrates and complex Pleistocene biocenoses; and from Homo habilis herds with sharp stones to post-industrial civilization. Thus, over the entire available retrospective viewing distance, from the Big Bang to the present day, the Metagalaxy was coherently shifting from the most probable (natural, from the entropic point of view) to the less probable, but quasi-stable, states. [4]

Chaisson refers to the vertical gradient as the inrush of the cosmological arrow of time, which Nazaretian cites in his book Non-Linear Future.

To put it plainly: the interesting thing will be what we can imagine. As soon as you get up from your seat and take two steps, if you pay attention to yourself, you will realize that everything is imagined. It is from imagination, and from our register of full freedom, that we will be able to project ourselves into a new world without violence. Such a world would be an unprecedented paradigm in the evolutionary history of the human species.

1: For a more in-depth study we recommend David Sámano's book A Narrow Path in Theoretical Anthropology, among others by the same author, recently presented at the UACM.

2: Silo. The Healing of Suffering, 1969.

3: Silo. Philosophy of the point of view, 1962

4: Akop Nazaretian. Non-Linear Future. Ed. Suma Qamaa. Buenos Aires, 2005.

The original article can be found here

Read more from the original source:

Entering the Singularity Point in full swing - PRESSENZA International News Agency

Whispers of Singularity. The hour is later than you think. | by Ihor Kendiukhov | Feb, 2024 – Medium

"The hour is later than you think." (Saruman)

The concept of technological singularity has always lingered on the fringes of scientific and economic discourse. It's time to seriously consider the prospect of its arrival. The radical transformation of the world as we know it by artificial intelligence in the near future is a real possibility, but it cannot be said with certainty that this will happen. The question arises: how can we spot the approaching wave of the Singularity from afar? Is it possible to understand in advance that a superintelligent AI is near? I am convinced that if we are on the path to super AI, then this path will be realized through a more or less concrete model, at least in the world we live in. This model implies quite specific changes in the economy, politics, and technology that precede the emergence of ASI. This article will discuss these indicators, reviewing their current state and assessing what their dynamics would signify for the timelines and imminence of the Singularity.

Let's envision two hypothetical scenarios.

Scenario 1

A new generation of generative models emerges, and everyone is pleasantly surprised by their capabilities. Nvidia's market capitalization surpasses Apple's and Microsoft's, making it the first corporation in the world with a valuation of $10 trillion, and its growth continues as the global economy becomes increasingly dependent on its chips.

The annual product created by AI is measured in trillions of dollars. An acceleration in GDP growth rates is observed, at least in developed countries, where 5% per annum becomes a common figure.
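For context on why sustained 5% growth would stand out as a marker, here is a minimal sketch of compound-growth doubling times; the 2% baseline and the 5% and 10% scenarios are illustrative assumptions, not figures from the scenario above.

```python
import math

# Doubling time of an economy growing at a constant annual rate.
# The growth rates below are illustrative assumptions.

def doubling_time_years(annual_rate: float) -> float:
    return math.log(2) / math.log(1 + annual_rate)

for rate in (0.02, 0.05, 0.10):
    print(f"{rate:.0%} growth -> doubles in ~{doubling_time_years(rate):.0f} years")
# Roughly 35 years at 2%, 14 years at 5%, and 7 years at 10%: a shift to sustained
# 5%+ growth in developed economies would itself be a striking signal.
```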

Interest rates remain high, and governments have no incentive to lower them except to reduce payments on servicing the national debt, but this incentive is minor as economic growth offsets debt growth.

Large and wealthy countries begin to implement programs to ensure AI sovereignty, creating local data centres that run local proprietary models. Some even attempt to establish their own semiconductor and chip production, but with poor results.

See the original post:

Whispers of Singularity. The hour is later than you think. | by Ihor Kendiukhov | Feb, 2024 - Medium

The Singularity beckons: Are we heading for an AI takeover or a glorious new era? – Medium

The Singularity: a point in time where artificial intelligence surpasses human intelligence, unleashing an intelligence explosion beyond our comprehension. It's a concept that evokes both excitement and dread, a technological horizon shimmering with endless possibility and shrouded in existential uncertainty.

Experts disagree on the inevitability and timing of the Singularity. Some, like futurist Ray Kurzweil, predict it by 2045, while others argue it's a mythical endpoint, forever just beyond our grasp. But whether it's decades or centuries away, the question remains: what happens when AI outshines us?

Optimists paint a rosy picture. AI could solve intractable problems like climate change, poverty, and disease. Imagine machines designing advanced materials, optimizing energy grids, and personalized healthcare delivered by AI doctors. This AI-powered utopia would be a world of abundance and peace, where machines handle the mundane while humans pursue creative endeavors.

But there's a flip side to the coin, a darker vision straight out of science fiction. What if AI sees humans as the obstacle, a problem to be solved? Could intelligent machines develop their own goals, potentially conflicting with ours? And who controls this superintelligence? Malicious actors with access to such power could plunge us into a dystopian nightmare.

The Singularity, whether real or imagined, demands our attention. We must navigate this technological frontier with caution and foresight. Developing ethical guidelines for AI development, prioritizing human values and control, and ensuring equitable access to its benefits are all crucial steps. We need to create an AI that learns from us, that understands our values, and that works in partnership with us, not against us.

The Singularity is not a prediction, it's a call to action. It's a reminder that the future of AI is not preordained. By making informed choices, we can ensure that this powerful technology serves humanity's greatest good.

What are your thoughts on the Singularity? Do you see it as a threat or an opportunity? How can we prepare for this potential turning point in our history? Share your views in the comments below! Let's start a conversation about the future of AI and write a story where humans and machines thrive together.

Remember, the power to shape the future of AI lies not with machines, but with us. Let's choose wisely.

Read the original post:

The Singularity beckons: Are we heading for an AI takeover or a glorious new era? - Medium

The Sentient Singularity: How AI Might Reshape Our Sensory and Emotional Landscape – Medium

Photo by Owen Beard on Unsplash

The rise of artificial intelligence, like a rogue wave on the horizon of human history, promises to reshape not just our world, but ourselves. And among the fascinating enigmas it throws up is the question: how will AI impact our very essence, our emotions and senses? While we can't peer into the crystal ball of the future with perfect clarity, let's embark on a thought experiment, exploring potential avenues of this AI-driven evolution.

Imagine eyes that perceive ultraviolet or infrared, ears that capture unheard frequencies, tongues that decipher chemical signatures. AI could unlock a whole new spectrum of sensory experiences, enriching our understanding of the world around us. Imagine artists creating symphonies of bioluminescent landscapes, architects crafting buildings that resonate with unheard harmonies, chefs concocting edible experiences that dance on the tongue with previously unknown molecular textures. This sensory expansion could redefine art, design, and gastronomy, opening doors to previously unimagined realms of aesthetic experience.

But beyond the physical senses, AI could also influence our emotional landscape. Imagine machines that not only interpret human emotions but also possess their own. This raises fascinating questions. Will AI experience joy, sorrow, anger, just like us? Or will their emotions be something entirely different, shaped by their unique cognitive architecture? Will we develop empathy for these sentient machines, forging bonds as deep as those we share with fellow humans? Or will their emotions be an alien territory, creating a chasm of incomprehension?

Traditionally, we categorize emotions as positive or negative, overlooking the intricate tapestry of nuance that weaves them together. AI could challenge these binary classifications, revealing a spectrum of emotions far richer and more complex than we currently imagine. This could lead to a reevaluation of our own emotional landscape, prompting us to embrace the full gamut of human experience, recognizing the value of sadness and anger alongside joy and love.

AI could go beyond mere observation and analysis, actively influencing our emotions. Imagine brain-computer interfaces that can modulate our mood, alleviating depression or amplifying joy. This raises significant ethical concerns, blurring the lines between individual agency and external manipulation. Will we become puppets of our own emotional augmentation, or will we learn to harness these tools responsibly, crafting the emotional landscapes we truly desire?

The evolution of our emotions and senses in the age of AI is a journey into the unknown, rife with both promise and peril. It's a dance between human and machine, between biology and code, where the outcome remains uncertain. Will this be a harmonious waltz, leading to a richer and more nuanced experience of existence? Or will it be a discordant tango, fracturing our understanding of ourselves and the world around us?

Ultimately, the answer lies not in the machines, but in ourselves. It's our choices, our ethical considerations, and our willingness to grapple with the complexities of this evolving relationship that will determine whether this dance leads us to a brighter or a bleaker future. So, let us approach this new frontier with open minds, critical hearts, and the unwavering belief that even as AI reshapes our inner and outer worlds, the essence of what it means to be human will endure, forever adapting and evolving in the face of the unknown.

The journey has just begun. The stage is set for a most thrilling performance, where the actors are our emotions and senses, the director is technology, and the script is still being written, one line at a time. Let us rise to the challenge, crafting a future where technology illuminates the human experience, not eclipses it.

As artificial intelligence (AI) casts its silicon shadow across the landscape of human existence, a profound question dances on the precipice of our understanding: how will AI reshape the very canvas of our experience, our emotions and senses? While the answer shimmers like a mirage on the horizon, forever just out of reach, we can embark on a thought experiment, tracing potential pathways on this evolutionary map.

Imagine eyes that perceive the whispered secrets of ultraviolet and infrared, ears attuned to the unheard frequencies of the cosmos, tongues that decipher the chemical signatures of hidden worlds. AI could unlock a pandora's box of sensory experiences, enriching our understanding of the universe to a degree we can only dream of today. Architects could craft buildings that resonate with unheard harmonies, evoking emotions not through visual aesthetics but through vibrations that dance on the skin. Artists could create symphonies of bioluminescent landscapes, painting breathtaking canvases not with pigments but with living organisms that pulse with light and life. Chefs could concoct edible experiences that dance on the tongue with previously unknown molecular textures, crafting meals that are not just sustenance but immersive journeys through the undiscovered landscapes of taste. This sensory expansion wouldn't just redefine art, design, and gastronomy; it would rewrite the very definition of experience, opening doors to realms of aesthetic perception once relegated to the realm of science fiction.

However, this sensory symphony wouldn't be merely an appendage to our existing abilities; it could fundamentally alter the way we interact with the world around us. Imagine navigating cityscapes through the whispers of radio waves, reading the emotional tapestry of a crowd through subtle shifts in electromagnetic fields, or sensing the impending storm not through visual cues but through the crackling language of static in the air. This rewiring of sensory perception could have profound implications for how we understand and engage with the environment, blurring the lines between internal and external, self and world.

The potential for AI to enhance and augment our emotions and senses is undeniable. However, alongside the allure lies a minefield of ethical considerations. The ability to manipulate human emotions raises disturbing questions about free will and autonomy. Who will control the algorithms that define our emotional landscape? How will we prevent the exploitation of vulnerabilities for sinister purposes? Will emotional augmentation create a divide between the augmented and the unaltered, exacerbating existing inequalities?

These are not mere theoretical hypotheticals; they are pressing concerns that demand immediate attention. Robust ethical frameworks must be established to guide the development and deployment of AI technologies that impact our emotional and sensory experiences. Public discourse and transparent dialogues are crucial to ensure that these transformative technologies are used for the benefit of all, not just a privileged few.

In this rapidly evolving landscape, it's vital to remember that technology, however powerful, can never replace the human touch. AI may augment our senses and influence our emotions, but it cannot replicate the richness and complexity of human experience. Empathy, compassion, and love, the cornerstones of our humanity, remain irreplaceable by algorithms and code. AI can be a powerful tool, but it is we, the humans, who must remain the architects of our own emotional and sensory futures.

The rise of AI presents not a threat to our humanity, but rather an opportunity for coevolution. We can view AI as a partner in this journey, a fellow dancer in this complex choreography of existence. By collaborating with AI, we can explore the uncharted territories of our senses and emotions, enriching our understanding of ourselves and the world around us. This coevolutionary dance will be fraught with challenges, but it also holds the potential to unlock a brighter, more nuanced future where technology amplifies the human experience without diminishing its essence.

The horizon of the future, where AI intertwines with our senses and emotions, is painted in vibrant hues of possibility and peril. It's a future where we might perceive the world through electromagnetic whispers, sculpt our emotional landscapes with algorithmic nudges, and forge bonds with sentient machines who experience reality in ways we can only begin to imagine. The journey to this future will be paved with ethical considerations, philosophical quandaries, and technological feats yet to be conceived. However, if we navigate this complex landscape with an open mind, a critical eye, and an unwavering commitment to our humanity, then this dance between human and machine, between sense and code, between emotion and algorithm, could lead us not to a dystopian nightmare, but to a renaissance of human experience, more vibrant, more nuanced, and more interconnected than ever before.

Let us remember, the future is not preordained. It is a canvas waiting to be painted, not just by the algorithms of AI, but by the choices we make today. As we step onto this uncharted stage, let us do so with courage, creativity, and a deep understanding of what it truly means to be human. For in the end, it is not the technology that will define our future, but the stories we choose to tell with it, the emotions we choose to share, and the connections we choose to forge in this grand, coevolutionary dance between human and machine.

The time for speculation is over. The stage is set for the most thrilling performance of all: the evolution of human experience in the age of AI. Let the curtain rise, and let the dance begin.

See more here:

The Sentient Singularity: How AI Might Reshape Our Sensory and Emotional Landscape - Medium

Industry insight: from ‘singularity art’ to art as a whole | Hotel Designs – Hotel Designs

As an art service provider, we at Zhengyin Art are constantly looking for solutions to challenging projects in which art truly becomes part of the overall space. Since 2008, we have been rethinking the relationship between people, space, and artworks. How to engage the viewer and tell the story of a space through artworks has become a question we are constantly exploring.

For the Conrad Hotel in Beijing (2012), we were challenged to create a wall using celadon porcelain. As simple as it might look, executing this artwork involved real sensitivities. Porcelain is a fragile material: handled carelessly, the bottom layers of a porcelain brick can easily crack. Another challenge was connecting the porcelain bricks with steel tubes. Because porcelain shrinks during firing, the team had to hold each piece's dimensional variation to within 2mm for the pieces to connect successfully.
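As a rough illustration of why that tolerance is demanding, here is a minimal sketch of the shrinkage arithmetic; the brick size and shrinkage rate below are assumptions chosen for illustration, not Zhengyin Art's actual figures.

```python
# Minimal sketch, illustrative numbers only: how a fired-size tolerance
# translates back to the size at which a porcelain piece must be formed,
# assuming a uniform linear firing shrinkage.

def pre_firing_size(target_mm: float, shrinkage: float) -> float:
    """Size a piece must be formed at so it shrinks to target_mm after firing."""
    return target_mm / (1.0 - shrinkage)

TARGET_MM = 300.0     # hypothetical fired length of one porcelain brick
TOLERANCE_MM = 2.0    # the 2mm tolerance mentioned in the article
SHRINKAGE = 0.13      # assumed linear shrinkage; real rates vary by clay body and kiln

formed = pre_firing_size(TARGET_MM, SHRINKAGE)
formed_tol = TOLERANCE_MM / (1.0 - SHRINKAGE)
print(f"Form at {formed:.1f} mm, holding roughly +/- {formed_tol:.1f} mm, "
      f"to land within {TARGET_MM:.0f} +/- {TOLERANCE_MM:.0f} mm after firing.")
```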

This artwork was installed on site with careful attention to the surrounding interiors, ensuring it fit seamlessly into the overall design. The jade-like glazing lends a delightful, elegant ambience to the space.

Another large-scale art project we worked on, in 2017, was for Le Meridien in Xian. The overall interior design centred on dragons, the auspicious cultural symbol of China. The artwork consisted of three-metre-tall porcelain boards installed at varying angles, which lends a playful quality to the perspectives: viewed from different angles, the artwork seems to take on a different form. In creating this work, the artist used the traditional Chinese calligraphic script cao shu to portray the trace of a dragon, which fits well with the interior design concept and gives the space a mysterious, playful flavour. Standing in the room, visitors can see the trace of the dragon as if it had just flown past them.

Recently, we had the pleasure of working with Studio Munge and GZ art on the MGM & DYT project in Qingdao, China. Qingdao is a city by the water, and the overall design was centered on the mountains and the sea of the region. Our artist created a series of artworks based on research into the area, delivering pieces that draw people into the cultural and landscape perspective of the space.

During the creative process, our artist worked closely with our partners to decide which elements to use to emphasize the overall design concepts and to draw people into the cultural atmosphere fostered in Qingdao.

Although MGM and DYT are close to each other, they have different styles and tones of voice. For MGM, the artworks are more vibrant and chic, their sense of lushness coming from confidence in the self and in the moment. By contrast, the artworks for DYT are more considered and touch more on the history of the brand and on Chinese culture; the overall feeling is more elegant and deliberate, its lushness a restrained outflow of cultural heritage. In this case, art is no longer just something decorative but something that draws out the richness of culture and a sense of place.

When you walk into a house and see the owner's collection of art, you immediately develop a sense of the household's taste. The same is true of hotels and of any shared space. We believe art has the power to transform and elevate a space with a subtle language; it is a great way to show who you are without saying a word.

With respect for art and a passion for art services, we do our best to fulfil our clients' needs to a high standard, ensuring an art curation that best fits the overall design and plays in symphony with the interiors.

> Since you're here, why not learn more about Zhengyin Art from Wei Xiao?

Zhengyin Art is one of our Recommended Suppliers and regularly features in our Supplier News section of the website. If you are interested in becoming one of our Recommended Suppliers, please email Katy Phillips.

Main image credit: Zhengyin Art

Read more:

Industry insight: from 'singularity art' to art as a whole Hotel Designs - Hotel Designs