We've come a long way as a whole over the past few centuries. Take a time machine back to 1750 and life would be very different indeed. There was no electricity, communicating with someone long distance was virtually impossible, and there were no gas stations or supermarkets anywhere. Bring someone from that era to today's world, and they would almost certainly have some form of breakdown. How would they cope with seeing capsules with wheels whizzing around the roads, electrical devices everywhere they look, and people talking to others on the other side of the world in real time? These are all simple things that we take for granted, but someone from a few centuries ago would probably think it was all witchcraft, and could quite possibly even die from the shock.
But then imagine that person returned to 1750 and, having seen our reaction of awe and amazement, wanted to recreate that feeling in someone else. What would they do? They would take the time machine back to, say, 1500, and bring someone from that era to their own. The jump from 1500 to 1750 would certainly be noticeable, but nothing like as extreme as the jump from 1750 to today. So while the person from 1500 would almost certainly be shocked by a few things, it's highly unlikely they would die. For the 1750 person to see the same kind of reaction that we would have, they would need to travel back much, much farther, to around 24,000 BC.
For someone to actually die from the shock of being transported into the future, they'd need to jump far enough ahead that a Die Progress Unit (DPU) is achieved. In hunter-gatherer times, a DPU took over 100,000 years; thanks to the Agricultural Revolution, it shrank to around 12,000 years. Nowadays, given the rate of advancement since the Industrial Revolution, a DPU would be reached after being transported just a couple of hundred years forward. Futurist Ray Kurzweil calls this pattern, in which human progress moves ever more quickly as time goes on, the Law of Accelerating Returns, and it is all down to technology.
This theory works on smaller scales too. Cast your mind back to that great 1985 movie, Back to the Future, in which the past era the characters went back to was 1955, with its various differences, of course. If we were to remake the same movie today but set the past era in 1985, the differences would be far more dramatic. Again, this comes down to the Law of Accelerating Returns: between 1985 and 2015 the average rate of advancement was much higher than between 1955 and 1985. Kurzweil suggests that by 2000 the rate of progress was five times faster than the average rate during the 20th century. He also suggests that between 2000 and 2014 another 20th century's worth of progress happened, and that by 2021 another will have happened, taking just seven years. Keeping with the same pattern, in a couple of decades a 20th century's worth of progress will happen multiple times in one year, and eventually in one month.
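The arithmetic behind this speed-up can be sketched with a toy model. The doubling period and starting rate below are made-up assumptions chosen purely to show the shape of the curve, not Kurzweil's actual figures:

```python
# Toy model of the Law of Accelerating Returns: progress accrues at a rate
# that doubles every decade, and we time how long each successive unit
# (one "20th century's worth") of progress takes to accumulate.

def unit_durations(rate_doubling_years=10.0, start_rate=0.01, units=5):
    """Return the years each successive unit of progress takes when the
    rate of progress is start_rate * 2**(t / rate_doubling_years)."""
    durations = []
    t, progress, dt = 0.0, 0.0, 0.01  # dt: integration step in years
    for _ in range(units):
        start_t = t
        target = progress + 1.0  # one more unit of progress
        while progress < target:
            progress += start_rate * 2 ** (t / rate_doubling_years) * dt
            t += dt
        durations.append(round(t - start_t, 1))
    return durations

print(unit_durations())  # each successive unit arrives faster than the last
```

Under these assumptions the first unit takes roughly a generation and each later one takes a shrinking fraction of that, which is the pattern the paragraph above describes.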
If Kurzweil is right, then by the time 2030 gets here we may all be blown away by the technology around us, and by 2050 we may not even recognize anything. Many people, though, are skeptical of this, for three main reasons:
1. Our own experiences make us stubborn about the future. Our imagination takes our experiences and uses them to predict future outcomes. The problem is that we're limited by what we know, and when we hear a prediction that goes against what we've been led to believe, we often have trouble accepting it as the truth. For example, if someone were to tell you that you'd live to be 200, you'd think that was ridiculous because of what you've been taught. But at the end of the day, there has to be a first time for everything, and no one knew airplanes would fly until someone gave it a go one day.
2. We think in straight lines when we think about history. When trying to project what will happen in the next 30 years, we tend to look back at the past 30 years and use that as a guideline for what's to come. But in doing that, we aren't considering the Law of Accelerating Returns. Instead of thinking linearly, we need to think exponentially: to predict anything about the future, we need to picture things advancing at a much faster rate than they are today.
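The gap between the two mindsets is easy to quantify with placeholder numbers (the 7% annual growth rate below is an arbitrary assumption for illustration):

```python
# Linear vs. exponential projection over 30 years, in arbitrary "progress
# units". A linear forecast just repeats the last 30 years' gain; an
# exponential one compounds an assumed annual growth rate.
past_gain = 100.0                 # progress over the previous 30 years
linear_forecast = past_gain * 2   # same absolute gain again
annual_growth = 0.07              # assumed 7% compound annual growth
exponential_forecast = past_gain * (1 + annual_growth) ** 30
print(linear_forecast, round(exponential_forecast, 1))
```

Even a modest compounding rate leaves the straight-line estimate far behind, which is why linear intuition consistently undershoots.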
3. The trajectory of recent history tells a distorted story. Exponential growth isn't smooth; progress happens in S-curves. An S-curve is created when the wave of progress from a new paradigm sweeps the world, and it unfolds in three phases: slow growth, rapid growth, and a leveling off as the paradigm matures. If you view only a small section of the S-curve, you'll get a distorted picture of how fast things are progressing.
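The three phases can be pictured with the standard logistic function, a generic illustration rather than a model of any particular technology:

```python
import math

def s_curve(t, ceiling=1.0, midpoint=0.0, steepness=1.0):
    """Standard logistic S-curve: slow growth, rapid growth, leveling off."""
    return ceiling / (1.0 + math.exp(-steepness * (t - midpoint)))

# Sampling only a narrow window gives a distorted sense of the whole curve:
early, middle, late = s_curve(-4), s_curve(0), s_curve(4)
print(round(early, 3), middle, round(late, 3))
```

A forecaster looking only at the `early` region would call the paradigm a dud; one looking only at the `middle` region would extrapolate runaway growth forever. Both are reading a small slice of the same curve.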
What do we mean by AI?
Artificial intelligence (AI) is big right now, bigger than it has ever been. But there are still many people out there who are confused by the term, for various reasons. One is that in the past we've associated AI with movies like Star Wars, Terminator, and even The Jetsons. Because these are all fictional characters, AI can still seem like a sci-fi concept. Also, AI is such a broad topic, ranging from self-driving cars to your phone's calculator, that getting to grips with all it entails is not easy. Another reason it's confusing is that we often don't even realize when we're using AI.
So, to clear things up and get a better idea of what AI is, first stop thinking about robots. Robots are simply shells that can house AI. Second, consider the term singularity. Vernor Vinge applied this term, in a 1993 essay, to the moment in the future when the intelligence of our technology exceeds our own. The idea was later muddied by Kurzweil, who defined the singularity as the time when the Law of Accelerating Returns gets so fast that we'll find ourselves living in a whole new world.
To narrow AI down a bit, think of it as being separated into three major categories:
1. Artificial Narrow Intelligence (ANI): Sometimes referred to as Weak AI, this is a type of AI that specializes in one particular area. An example of ANI is a chess-playing AI: it may be great at winning chess, but that is literally all it can do.
2. Artificial General Intelligence (AGI): Often known as Strong AI or Human-Level AI, AGI refers to a computer with the intelligence of a human across the board, and it is much harder to create than ANI.
3. Artificial Superintelligence (ASI): ASI ranges from a computer that's just a little smarter than a human to one that's billions of times smarter in every way. This is the type of AI that is most feared, and it is often associated with the words immortality and extinction.
Right now, we're progressing steadily through the AI revolution and living in a world of ANI. Cars are full of ANI systems, from the computer that tells the car when the ABS should kick in to the various self-driving cars that are about. Phones are another product bursting with ANI: whenever you receive music recommendations from Pandora or use your map app to navigate, you're utilizing ANI. An email spam filter is another form of ANI, because it learns what's spam and what's not. Google Translate and voice recognition systems are also examples of ANI. And some of the best checkers and chess players in the world are ANI systems too.
So, as you can see, ANI systems are all around us already, but luckily these systems don't have the capability to pose any real threat to humanity. Still, each new ANI system is simply another step toward AGI and ASI. However, creating a computer that is at least as intelligent as ourselves is no easy feat, and the hard parts are probably not what you'd imagine. Building a computer that can calculate sums quickly is simple; building one that can tell the difference between a cat and a dog is much harder. As computer scientist Donald Knuth put it, "AI has by now succeeded in doing essentially everything that requires 'thinking' but has failed to do most of what people and animals do 'without thinking.'"
The next move in making AGI a possibility and competing with the human brain is to increase the power of computer hardware. One way to express the required capacity is in the calculations per second (cps) that the brain can handle. Kurzweil created a shortcut for estimating this: take an estimate for the cps of one brain structure and its weight, compare that weight to the weight of the whole brain, then multiply proportionally until an estimate for the total is reached. After carrying out this calculation several times, Kurzweil always got roughly the same answer of around 10^16, or 10 quadrillion cps.
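The shortcut itself is just proportional scaling. With placeholder numbers (both input figures below are invented for illustration, not real neuroscience data), it looks like this:

```python
# Kurzweil-style scaling estimate (schematic): estimate cps for one brain
# structure, then scale up by that structure's share of the whole brain.
structure_cps = 1e11        # hypothetical cps estimate for one structure
structure_fraction = 1e-5   # that structure's assumed share of the brain
whole_brain_cps = structure_cps / structure_fraction
print(f"{whole_brain_cps:.0e}")  # on the order of 10^16 cps
```

The method's appeal is that any structure whose cps and relative size can be estimated gives an independent route to the same whole-brain total.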
The world's fastest supercomputer is currently China's Tianhe-2, which has clocked in at around 34 quadrillion cps. But that's hardly a surprise when it uses 24 megawatts of power, takes up 720 square meters of space, and cost $390 million to build. If we could achieve 10 quadrillion cps (the human level) in a more workable form, AGI could become a part of everyday life. Currently, the world's $1,000 computers are at about a thousandth of the human level, and while that may not sound like much, it's actually a huge leap forward: in 1985 we were at only about a trillionth of the human level. If we keep progressing in the same manner, then by 2025 we should have an affordable computer that can rival the raw power of the human brain. Then it's just a case of merging all that power with human-level intelligence.
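Those milestones imply a growth rate we can extrapolate. Using only the figures quoted above (a trillionth of human level in 1985, a thousandth around 2015, and a 10-quadrillion-cps human level) and assuming the exponential trend simply continues, the arithmetic lands on roughly 2025:

```python
import math

HUMAN_CPS = 1e16              # ~10 quadrillion cps (Kurzweil's estimate)
cps_1985 = HUMAN_CPS * 1e-12  # a trillionth of human level in 1985
cps_2015 = HUMAN_CPS * 1e-3   # a thousandth of human level around 2015

# Annual growth factor implied by those two data points
annual_growth = (cps_2015 / cps_1985) ** (1 / (2015 - 1985))

# Years after 2015 until a $1,000 computer reaches human-level cps
years_needed = math.log(HUMAN_CPS / cps_2015, annual_growth)
print(round(annual_growth, 2), 2015 + round(years_needed))
```

The implied growth factor is close to a doubling every year, and closing the remaining thousand-fold gap at that rate takes about a decade, consistent with the 2025 figure in the text.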
However, that's much easier said than done. No one really knows how to make computers smart, but here are the most popular strategies we've come up with so far:
1. Make everything the computer's problem. This is usually a scientist's last resort and involves building a computer whose main skill is carrying out research on AI and coding those changes into itself.
2. Plagiarize the brain. It makes sense to copy the best of what's already available, and scientists are currently working hard to uncover all they can about the mighty organ. As soon as we know how a human brain can run so efficiently, we can begin to replicate its approach in AI. Artificial neural networks do this already, mimicking the structure of the brain, but there is still a long way to go before they are anywhere near as sophisticated or effective as the real thing. A more extreme form of plagiarism is what's known as whole brain emulation. Here the aim is to slice a brain into layers, scan each one, create an accurate 3D model, then implement that model on a computer. We'd then have a fully working computer with a brain as capable as our own.
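As a taste of the brain-plagiarism idea, here is the smallest possible artificial neural network: a single perceptron, loosely modeled on one neuron, learning the logical AND function. This is a classic textbook toy, nowhere near brain emulation:

```python
# A single artificial neuron (perceptron): weighted inputs, a threshold,
# and a simple error-driven learning rule.

def train_perceptron(samples, epochs=20, lr=0.1):
    """Train weights and bias on (inputs, target) pairs."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out           # nudge weights toward the target
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
predict = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
print([predict(x1, x2) for (x1, x2), _ in AND])
```

After a handful of passes over the four examples, the neuron's weights settle so that only the (1, 1) input crosses the firing threshold, mirroring in miniature how networks of such units learn from error signals.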
3. Try to make history and evolution repeat themselves in our favor. If the human brain is too hard to mimic directly, we could instead try to mimic its evolution, using a method called genetic algorithms. These work through a repeated performance-and-evaluation process: when a computer completes a task successfully, it is bred with another capable one in an attempt to merge them and create a better computer. This natural-selection process is repeated many times until we finally get the result we want. The downside is that it could take billions of years.
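A bare-bones sketch of the idea, with bit strings standing in for "computers" and made-up population sizes and mutation rates:

```python
import random

random.seed(42)     # deterministic run for illustration
TARGET = [1] * 20   # the "ideal computer" we want evolution to find

def fitness(genome):
    """Score a candidate by how many bits match the target."""
    return sum(g == t for g, t in zip(genome, TARGET))

def breed(a, b, mutation_rate=0.05):
    """Crossover two parents at a random cut point, then mutate bits."""
    cut = random.randrange(len(a))
    child = a[:cut] + b[cut:]
    return [1 - g if random.random() < mutation_rate else g for g in child]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(30)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break  # a perfect candidate has evolved
    parents = population[:10]  # selection: only the fittest breed
    population = [breed(random.choice(parents), random.choice(parents))
                  for _ in range(30)]

best = max(population, key=fitness)
print(generation, fitness(best))
```

Selection, crossover, and mutation are the whole trick; the open question for AGI is whether a fitness function for "general intelligence" can be evaluated fast enough to make this practical.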
Various advancements in technology are happening so quickly that AGI could be here before we know it, for two main reasons:
1. Exponential growth is intense, and a great deal can happen in a short space of time.
2. Even minute software changes can make a big difference. A single tweak could have the potential to make a system 1,000 times more effective.
Once AGI has been achieved and people are happy living alongside human-level AGI, we'll then move on to ASI. But to clarify: even though an AGI would (theoretically) have the same level of intelligence as a human, it would still have several advantages over us, including:
Speed: Today's microprocessors can run at speeds 10 million times faster than our neurons, and they can also communicate optically at the speed of light.
Size and storage: Unlike our brains, computers can expand to any size, allowing for a working memory and long-term memory that will outperform ours any day.
Reliability and durability: Computer transistors are far more accurate than biological neurons, and they are easily repaired too.
Editability: Computer software can be easily tweaked to allow for updates and fixes.
Collective capability: Humans are great at building a huge store of collective intelligence; it's one of the main reasons we've survived so long as a species and advanced so far. A computer designed to mimic the human brain would be even better at this, as it could regularly sync with its peers so that anything one computer learned would instantly be uploaded to the whole network.
Most current models for reaching AGI focus on having the AI get there via self-improvement. Once a system can self-improve, the next concept to consider is recursive self-improvement: a system that has already improved itself is considerably smarter than it originally was, so the next round of improvement is easier for it and produces a bigger leap. Soon the AGI's intelligence exceeds that of a human, and that's when you get a superintelligent ASI system. This runaway process is called an intelligence explosion and is a prime example of the Law of Accelerating Returns. How soon we will reach this level is still very much up for debate.
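The runaway dynamic can be sketched numerically. If each improvement round multiplies capability by a factor that itself grows with capability, every doubling arrives sooner than the last (the gain constant below is an arbitrary assumption):

```python
# Toy "intelligence explosion": capability feeds back into the size of
# each improvement step, so successive doublings take fewer rounds.

def doubling_rounds(gain=0.01, doublings=5):
    """Rounds needed for each successive doubling when every round
    multiplies capability by (1 + gain * capability)."""
    rounds = []
    capability, target = 1.0, 2.0
    for _ in range(doublings):
        count = 0
        while capability < target:
            capability *= 1 + gain * capability  # smarter -> bigger steps
            count += 1
        rounds.append(count)
        target *= 2
    return rounds

print(doubling_rounds())  # each doubling takes fewer rounds than the last
```

Contrast this with plain exponential growth, where every doubling takes the same time: the feedback from capability into the growth rate is exactly what makes the curve "explode".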
Read more here: Are You Ready for the AI Revolution and the Rise of Superintelligence? - TrendinTech