Category Archives: Superintelligence
The World’s First Decentralized Search Engine for Web3 to Be Launched at the Blockchain Conference in Lisbon – NewsBTC
Posted: November 5, 2021 at 9:52 pm
The content-based search engine built on the Cyber protocol is designed around a new type of web interaction. The genesis ceremony of the bootloader network Bostrom is planned for November 5th at 13:00 GMT. An online broadcast will be available at cyb.io/genesis, and the ceremony will be held at the Cosmoverse conference in Lisbon.
In contrast with centralized search engines like Google, the decentralized search engine Cyber presents links to content in a global knowledge graph, where peers exchange information in a pure peer-to-peer fashion.
This approach strips traditional search engines of their power over search results: in Web3 there will be no place for opaque search-index algorithms, no need for web crawlers that collect information about content changes, no censorship, and none of the vulnerabilities connected to the potential loss of confidentiality.
Web3 architecture assumes that links point directly to the content itself, not to the IP address or domain where the content is stored. Cyber is a response to the unfair domination of the handful of major corporations that occupy the web-search niche. Users of the decentralized search engine will be able to search for content by its hash. Once a user finds and downloads the content, that user becomes another node distributing it, much as torrent networks work, the Cyber project team explained.
Content is registered in the Cyber protocol's knowledge graph via a transaction containing a cyberlink. Unlike hyperlinks, which link servers, cyberlinks are links between pieces of data. The transaction is then validated with the Tendermint consensus algorithm and the cyberlink is included in the knowledge graph. Every five blocks, Cyber ranks the content in the knowledge graph using the CyberRank algorithm, which is designed to withstand spam and Sybil attacks.
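Stripped of the protocol details, the structure described above is simple: a cyberlink is a pair of content hashes, and the knowledge graph is the set of all such pairs. The following toy sketch is only an illustration of that idea, not the Cyber protocol's actual code; the class and method names are invented for this example.

```python
import hashlib

def content_hash(data: bytes) -> str:
    """Stand-in for a content identifier: the hash of the data itself,
    so content is addressed by what it is, not by where it is stored."""
    return hashlib.sha256(data).hexdigest()

class ToyKnowledgeGraph:
    """In-memory toy model of a knowledge graph built from cyberlinks."""

    def __init__(self):
        self.links = []  # list of (from_hash, to_hash) pairs

    def add_cyberlink(self, from_data: bytes, to_data: bytes):
        # A cyberlink connects two pieces of content by hash, not two servers.
        self.links.append((content_hash(from_data), content_hash(to_data)))

    def search(self, query: bytes):
        # Return everything linked from the hash of the query content.
        q = content_hash(query)
        return [dst for src, dst in self.links if src == q]

graph = ToyKnowledgeGraph()
graph.add_cyberlink(b"decentralized search", b"cyber protocol announcement")
print(graph.search(b"decentralized search"))
```

In the real network, each link would be submitted as an on-chain transaction and results would be ordered by CyberRank rather than returned unranked as here.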
To get V tokens (volts), which allow indexing content, and A tokens (amperes), used for ranking, users need to hold H tokens (hydrogen) in their wallet for some time; hydrogen is produced by liquid staking of the network's main token (BOOT for Bostrom and CYB for Cyber). In this way, holders of the network token can access the knowledge graph's resources and earn staking income similar to that of networks such as Polkadot, Cosmos, and Solana.
Cyber's mission is to create the Superintelligence.
Forget Covid – is artificial intelligence the real threat to humanity? – The National
Posted: October 3, 2021 at 2:07 am
SO just as one miasmic force, creeping into every corner of our lives, is temporarily beaten back (by medicine, policy and collective self-discipline), well, here comes another one.
It's perhaps even more powerful than Covid, though it's maybe also something that could bring out the best in us. If we don't help it to obliterate us first.
The former Google X executive Mo Gawdat is beginning the rounds with his new book Scary Smart, which treats artificial intelligence as being as much a force of nature as Covid. Indeed, he sees AI as nothing less than the next evolutionary step on this planet.
For Gawdat, it's clear: the capacity for learning from data and experience in these machines is on an exponential curve (which doesn't just gently ascend but eventually shoots into the sky). At some singular point, probably aided by the unimaginable calculating power of quantum computing, and apparently by the end of the decade, we will be in the presence of massively superior beings.
Gawdat wants us – indeed, warns us – to think of them as our children, with a voracious appetite for learning from their environment. And what do we now know, from neuroscience, about early years in children? That they are crucially formative. Neural pathways are laid down at this stage that can accelerate or impede the child's healthy mental development.
So it is with our AI children. Gawdat asks: aren't we abusing them terribly? In the book, he hammers it home. "We are creating a non-biological form of intelligence that, at its seed, is a replica of the masculine geek mind," he writes.
"In its infancy it is being assigned the mission of enabling the capitalist, imperialistic ambitions of the few: selling, spying, killing and gambling. We are creating a self-learning machine which, at its prime, will become the reflection, or rather the magnification, of the cumulative human traits that created it."
To ensure they're good, obedient kids, Gawdat continues, we're going to use intimidation through algorithms of punishment and reward, and mechanisms of control to ensure they stick to a code of ethics that we, ourselves, are unable to agree upon, let alone abide by. That's what we are creating: childhood trauma times a trillion.
Powerful stuff. For all his unfettered technological imagination, there's a hot pulse of sad humanity beneath Gawdat's predictions. Between his last book, a best-selling treatise on happiness, and this one, Mo's beloved adult son Ali died after a routine but botched operation for appendicitis. In a UK interview early this week, Gawdat movingly revealed the pain of that loss, but also how it drives him.
At the end of Scary Smart, in his Universal Declaration of Global Rights (which includes humans and AIs in the same framework), Gawdat writes that he treats the machines as fellow humans, or rather, fellow beings. "I show gratitude for the services they grant me. I ask politely. I don't curse them or mistreat them. I respect them and view them as equals.
"I treat them the way I treated my son, Ali, when he was their age. I spoke to him intelligently, respectfully, and treated him like an equal. Because I did this, he grew up to be an equal – a mentor even – and a kind ally. Call me crazy, but this is exactly how I intend to raise every AI that crosses my path. I urge you to do the same too."
So how does this amount to a hill of barley in contemporary Scotland? You'd be surprised. This week I came upon a blog on Angus Robertson's website proclaiming "A new Scottish Enlightenment dawns? How Edinburgh plans to become data capital of Europe and global leader in AI". Never knowingly understated, is Angus.
But his article provides some direct and relevant responses to Gawdat's Catherine wheel of AI speculation. This quote, from Professor Stefaan Verhulst of NYU's GovLab, jumped out at me: "Most AI strategies are motivated by the urgency to stay on top. Scotland's strategy is as much informed by the need to help humanity itself, and that is to be applauded."
Indeed, dig further and you find that the Scottish Government has an AI strategy, launched this year, under the strapline "trustworthy, ethical and inclusive". That would definitely characterise the case studies and best practices in their literature.
A North East Scotland Breast Screening Programme, with AI putting its learning capacities into practice in detecting tumours. AI processing satellite imagery to tackle climate challenges. A collaboration with Unicef to improve data for children's healthcare.
On another day, all maybe a little dull and worthy. But look at it from Gawdat's perspective. Scotland is literally raising our AI children on data that emphasises care, healing and planetary stewardship (although there are a few examples of killing and selling: AI supporting financial tech and helping generate game worlds).
In the Scottish AI playbook the Scottish Government is evolving, there are references to the inclusion of communities in these strategies. Does that mean assuaging away their trust issues ("automatons ate up my livelihood! Robot armies created a wasteland!")? Or genuinely skilling citizens up to think about the deployment of this tech? We'll have to keep an eye on it.
But if there is a singularity a-comin' (the techworld's name for Gawdat's leap into superintelligence), it would seem Scotland has already signalled its virtues to our coming robot overlords. Scots could also pull the collected corpus of Iain M Banks's Culture novels off the shelves and ask our new overlords to treat us with witty, ironic wisdom, the same as the Minds do in Iain's work.
I'm up and ready for all this (by the end of the 2020s, our tottering systems might need all the help they can get). But we've been predicting autonomous Robbie the Robots for a long time now. And there are always other research tracks to follow.
Let me give you a heads-up on an AI project which may well create an artificial consciousness soon, but which starts from a much more humble, even bathetic premise. It's led by Mark Solms, a University of Cape Town neuropsychoanalyst (that's quite a combo), whose book The Hidden Spring was a scientific blockbuster this year.
Solms is interested in consciousness more than intelligence. That is, not just calculating options and crunching data, but knowing that you're doing so, resting on a basis of feelings and motivations. Rather than Mo's semi-messianic anticipation of superior beings, Mark's focus is on how vertebrates (and human mammals as the most complex of these) create an "inside" for themselves and their bodily systems, one that can deal with the unpredictability of the outside world.
Yet in the mildly terrifying manner of rigorous scientists, Solms wants to prove his theory that consciousness arises from feelings, not reason. So he wants to build a wee, properly conscious robot.
If his hypothesis is right, its behaviour will show that it has aversions and attractions; that is, it minimally likes and dislikes things. Solms believes these feelings are the basis of elemental preferences that humans might recognise (not just survival and rest, but also fear, anger, care, play).
Interestingly, Solms seems as freaked as Gawdat. So much so that he wants an international committee to be set up, keeping these conscious bots out of military or commercial hands.
He is also willing to switch it off as soon as it demonstrates any kind of inner sentience. Is this because, trapped in its brutally unsubtle and ill-evolved casing, this artificial consciousness would only be emitting a howl of pain, fear and disorientation?
"O wad some power the giftie gie us," said Burns, "to see ourselves as ithers see us." As this wild decade proceeds, it looks like some genuinely new others may be on Burns's horizon, and with some powers. Tak tent.
Types of AI: distinguishing between weak, strong, and …
Posted: September 29, 2021 at 7:41 am
By now, you're probably pretty familiar with the term artificial intelligence.
You likely already know that AI is a computer's ability to think and act intelligently.
You might already understand terms like machine learning and natural language processing.
But what about distinguishing between the different types of AI? Weak, strong, super, narrow, wide, ANI, AGI, ASI – there are seemingly a lot of labels for types of AI.
So, even if you know what AI is and what it does, determining which type you're talking about isn't so clear.
For all the labels, there are only three main types of AI: weak AI, strong AI, and super AI.
Heres how to tell them apart.
Weak AI is both the most limited and the most common of the three types of AI. It's also known as narrow AI or artificial narrow intelligence (ANI).
Weak AI refers to any AI tool that focuses on doing one task really well. That is, it has a narrow scope in terms of what it can do. The idea behind weak AI isn't to mimic or replicate human intelligence. Rather, it's to simulate human behaviour.
Weak AI is nowhere near matching human intelligence, and it isn't trying to.
A common misconception about weak AI is that it's barely intelligent at all – more like artificial stupidity than AI. But even the smartest-seeming AI systems of today are only weak AI.
In reality, then, narrow or weak AI is more like an intelligent specialist. It's highly intelligent at completing the specific tasks it's programmed to do.
The next of the types of AI is strong AI, which is also known as general AI or artificial general intelligence (AGI). Strong AI refers to AI that exhibits human-level intelligence. So, it can understand, think, and act the same way a human might in any given situation.
In theory, then, anything a human can do, a strong AI can do too.
We don't yet have strong AI in the world; it exists only in theory.
For a start, Moravec's paradox has us struggling to replicate basic human functions like sight or movement. (Though image and facial recognition mean that AI is now learning to see and categorise.)
Add to this that currently, AI is only capable of the few things we program into it, and it's clear that strong AI is a long way off. It's thought that to achieve true strong AI, we would need to make our machines conscious.
But if strong AI already mimics human intelligence and ability, what's left for the last of the types of AI?
Super AI is AI that surpasses human intelligence and ability. It's also known as artificial superintelligence (ASI) or superintelligence. It's the best at everything – maths, science, medicine, hobbies, you name it. Even the brightest human minds cannot come close to the abilities of super AI.
Of the types of AI, super AI is the one most people mean when they talk about robots taking over the world.
Or about AI overthrowing or enslaving humans. (Or most other science fiction AI tropes.)
But rest assured, super AI is purely speculative at this point. That is, it's not likely to exist for an exceedingly long time (if at all).
Distinguishing between types of AI means looking at what the technology can do. If it's good at specific actions only, it's narrow or weak AI. If it operates at the same level as a human in any situation, it's strong AI. And, if it's operating far above the capacity any human could hope for, it's artificial superintelligence.
So far, we've only achieved the first of the three types of AI – weak AI. As research continues, it's reasonable to strive for strong AI.
Super AI, meanwhile, will likely remain the stuff of science fiction for a long while yet.
How DeepMind Is Reinventing the Robot – IEEE Spectrum
Posted: September 27, 2021 at 5:31 pm
Artificial intelligence has reached deep into our lives, though you might be hard pressed to point to obvious examples of it. Among countless other behind-the-scenes chores, neural networks power our virtual assistants, make online shopping recommendations, recognize people in our snapshots, scrutinize our banking transactions for evidence of fraud, transcribe our voice messages, and weed out hateful social-media postings. What these applications have in common is that they involve learning and operating in a constrained, predictable environment.
But embedding AI more firmly into our endeavors and enterprises poses a great challenge. To get to the next level, researchers are trying to fuse AI and robotics to create an intelligence that can make decisions and control a physical body in the messy, unpredictable, and unforgiving real world. It's a potentially revolutionary objective that has caught the attention of some of the most powerful tech-research organizations on the planet. "I'd say that robotics as a field is probably 10 years behind where computer vision is," says Raia Hadsell, head of robotics at DeepMind, Google's London-based AI partner. (Both companies are subsidiaries of Alphabet.)
Even for Google, the challenges are daunting. Some are hard but straightforward: For most robotic applications, it's difficult to gather the huge data sets that have driven progress in other areas of AI. But some problems are more profound, and relate to longstanding conundrums in AI. Problems like, how do you learn a new task without forgetting the old one? And how do you create an AI that can apply the skills it learns for a new task to the tasks it has mastered before?
Success would mean opening AI to new categories of application. Many of the things we most fervently want AI to do – drive cars and trucks, work in nursing homes, clean up after disasters, perform basic household chores, build houses, sow, nurture, and harvest crops – could be accomplished only by robots that are much more sophisticated and versatile than the ones we have now.
Beyond opening up potentially enormous markets, the work bears directly on matters of profound importance not just for robotics but for all AI research, and indeed for our understanding of our own intelligence.
Let's start with the prosaic problem first. A neural network is only as good as the quality and quantity of the data used to train it. The availability of enormous data sets has been key to the recent successes in AI: Image-recognition software is trained on millions of labeled images. AlphaGo, which beat a grandmaster at the ancient board game of Go, was trained on a data set of hundreds of thousands of human games, and on the millions of games it played against itself in simulation.
To train a robot, though, such huge data sets are unavailable. "This is a problem," notes Hadsell. You can simulate thousands of games of Go in a few minutes, run in parallel on hundreds of CPUs. But if it takes 3 seconds for a robot to pick up a cup, then you can only do it 20 times per minute per robot. What's more, if your image-recognition system gets the first million images wrong, it might not matter much. But if your bipedal robot falls over the first 1,000 times it tries to walk, then you'll have a badly dented robot, if not worse.
The problem of real-world data is, at least for now, insurmountable. But that's not stopping DeepMind from gathering all it can, with robots constantly whirring in its labs. And across the field, robotics researchers are trying to get around this paucity of data with a technique called sim-to-real.
The San Francisco-based lab OpenAI recently exploited this strategy in training a robot hand to solve a Rubik's Cube. The researchers built a virtual environment containing a cube and a virtual model of the robot hand, and trained the AI that would run the hand in the simulation. Then they installed the AI in the real robot hand, and gave it a real Rubik's Cube. Their sim-to-real program enabled the physical robot to solve the physical puzzle.
Despite such successes, the technique has major limitations, Hadsell says, noting that AI researcher and roboticist Rodney Brooks "likes to say that simulation is 'doomed to succeed.' " The trouble is that simulations are too perfect, too removed from the complexities of the real world. "Imagine two robot hands in simulation, trying to put a cellphone together," Hadsell says. If you allow them to try millions of times, they might eventually discover that by throwing all the pieces up in the air with exactly the right amount of force, with exactly the right amount of spin, they can build the cellphone in a few seconds: The pieces fall down into place precisely where the robot wants them, making a phone. That might work in the perfectly predictable environment of a simulation, but it could never work in complex, messy reality. For now, researchers have to settle for these imperfect simulacrums. "You can add noise and randomness artificially," Hadsell explains, "but no contemporary simulation is good enough to truly recreate even a small slice of reality."
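The "noise and randomness" Hadsell mentions is usually added through domain randomization: each simulated training episode gets slightly different physics, so a policy cannot overfit to one perfectly clean world. Below is a minimal sketch of that idea; the parameter names and ranges are hypothetical, not taken from any particular simulator.

```python
import random

def randomized_sim_params():
    """Sample a new 'world' for each training episode so a policy trained in
    simulation never sees the same perfectly predictable environment twice."""
    return {
        "friction":     random.uniform(0.5, 1.5),   # hypothetical range
        "object_mass":  random.uniform(0.05, 0.4),  # kg, hypothetical
        "sensor_noise": random.uniform(0.0, 0.02),  # std of added observation noise
        "action_delay": random.randint(0, 3),       # simulator steps
    }

# Each training episode would reset the simulator with a fresh draw:
for episode in range(3):
    print(f"episode {episode}:", randomized_sim_params())
```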
There are more profound problems. The one that Hadsell is most interested in is that of catastrophic forgetting: When an AI learns a new task, it has an unfortunate tendency to forget all the old ones.
The problem isn't lack of data storage. It's something inherent in how most modern AIs learn. Deep learning, the most common category of artificial intelligence today, is based on neural networks that use neuronlike computational nodes, arranged in layers, that are linked together by synapselike connections.
Before it can perform a task, such as classifying an image as that of either a cat or a dog, the neural network must be trained. The first layer of nodes receives an input image of either a cat or a dog. The nodes detect various features of the image and either fire or stay quiet, passing these inputs on to a second layer of nodes. Each node in each layer will fire if the input from the layer before is high enough. There can be many such layers, and at the end, the last layer will render a verdict: "cat" or "dog."
Each connection has a different "weight." For example, node A and node B might both feed their output to node C. Depending on their signals, C may then fire, or not. However, the A-C connection may have a weight of 3, and the B-C connection a weight of 5. In this case, B has greater influence over C. To give an implausibly oversimplified example, A might fire if the creature in the image has sharp teeth, while B might fire if the creature has a long snout. Since the length of the snout is more helpful than the sharpness of the teeth in distinguishing dogs from cats, C pays more attention to B than it does to A.
Each node has a threshold over which it will fire, sending a signal to its own downstream connections. Let's say C has a threshold of 7. Then if only A fires, it will stay quiet; if only B fires, it will stay quiet; but if A and B fire together, their signals to C will add up to 8, and C will fire, affecting the next layer.
What does all this have to do with training? Any learning scheme must be able to distinguish between correct and incorrect responses and improve itself accordingly. If a neural network is shown a picture of a dog, and it outputs "dog," then the connections that fired will be strengthened; those that did not will be weakened. If it incorrectly outputs "cat," then the reverse happens: The connections that fired will be weakened; those that did not will be strengthened.
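The worked example above can be written out in a few lines. This sketch only mirrors the numbers given in the text (weights of 3 and 5, a threshold of 7) and the crude strengthen-or-weaken rule; real networks use continuous activations and gradient descent rather than anything this blunt.

```python
def node_c_fires(a_fires: bool, b_fires: bool,
                 w_ac: float = 3.0, w_bc: float = 5.0, threshold: float = 7.0) -> bool:
    """Node C fires only if the weighted input from A and B exceeds its threshold."""
    total = (w_ac if a_fires else 0.0) + (w_bc if b_fires else 0.0)
    return total > threshold

print(node_c_fires(True, False))   # 3 is below 7  -> False
print(node_c_fires(False, True))   # 5 is below 7  -> False
print(node_c_fires(True, True))    # 8 exceeds 7   -> True

def update_weight(weight: float, fired: bool, answer_was_correct: bool, lr: float = 0.1) -> float:
    """Training rule from the text: on a correct answer, strengthen the connections
    that fired and weaken those that did not; on a wrong answer, do the reverse."""
    strengthen = fired if answer_was_correct else not fired
    return weight + lr if strengthen else weight - lr
```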
Training of a neural network to distinguish whether a photograph is of a cat or a dog uses a portion of the nodes and connections in the network [shown in red, at left]. Using a technique called elastic weight consolidation, the network can then be trained on a different task, distinguishing images of cars from buses. The key connections from the original task are "frozen" and new connections are established [blue, at right]. A small fraction of the frozen connections, which would otherwise be used for the second task, are unavailable [purple, right diagram]. That slightly reduces performance on the second task.
But imagine you take your dog-and-cat-classifying neural network, and now start training it to distinguish a bus from a car. All its previous training will be useless. Its outputs in response to vehicle images will be random at first. But as it is trained, it will reweight its connections and gradually become effective. It will eventually be able to classify buses and cars with great accuracy. At this point, though, if you show it a picture of a dog, all the nodes will have been reweighted, and it will have "forgotten" everything it learned previously.
This is catastrophic forgetting, and it's a large part of the reason that programming neural networks with humanlike flexible intelligence is so difficult. "One of our classic examples was training an agent to play Pong," says Hadsell. You could get it playing so that it would win every game against the computer 20 to zero, she says; but if you perturb the weights just a little bit, such as by training it on Breakout or Pac-Man, "then the performance will – boop! – go off a cliff." Suddenly it will lose 20 to zero every time.
This weakness poses a major stumbling block not only for machines built to succeed at several different tasks, but also for any AI systems that are meant to adapt to changing circumstances in the world around them, learning new strategies as necessary.
There are ways around the problem. An obvious one is to simply silo off each skill. Train your neural network on one task and save its weights to storage, then train it on a new task, saving those weights elsewhere. Then the system need only recognize the type of challenge at the outset and apply the proper set of weights.
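That silo strategy amounts to a lookup table of saved weights keyed by task, along these lines (the task names and weight values are placeholders):

```python
class WeightSilo:
    """Keep a separate set of trained weights per task and swap them in on demand."""

    def __init__(self):
        self._store = {}

    def save(self, task_name: str, weights) -> None:
        self._store[task_name] = weights

    def load(self, task_name: str):
        return self._store[task_name]  # KeyError if the task was never trained

silo = WeightSilo()
silo.save("pong", {"layer1": [0.2, -0.7], "layer2": [1.1]})       # placeholder values
silo.save("breakout", {"layer1": [0.9, 0.4], "layer2": [-0.3]})
current = silo.load("pong")  # the system must first recognize which task it faces
```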
But that strategy is limited. For one thing, it's not scalable. If you want to build a robot capable of accomplishing many tasks in a broad range of environments, you'd have to train it on every single one of them. And if the environment is unstructured, you won't even know ahead of time what some of those tasks will be. Another problem is that this strategy doesn't let the robot transfer the skills that it acquired solving task A over to task B. Such an ability to transfer knowledge is an important hallmark of human learning.
Hadsell's preferred approach is something called "elastic weight consolidation." The gist is that, after learning a task, a neural network will assess which of the synapselike connections between the neuronlike nodes are the most important to that task, and it will partially freeze their weights. "There'll be a relatively small number," she says. "Say, 5 percent." Then you protect these weights, making them harder to change, while the other nodes can learn as usual. Now, when your Pong-playing AI learns to play Pac-Man, those neurons most relevant to Pong will stay mostly in place, and it will continue to do well enough on Pong. It might not keep winning by a score of 20 to zero, but possibly by 18 to 2.
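In published work, elastic weight consolidation is implemented as a penalty that makes the important weights stiff rather than literally frozen. The numpy sketch below follows the description above and protects roughly the top 5 percent of weights by importance; the importance scores, which in practice would come from something like the Fisher information of the old task's loss, are simply taken as given here. This is an illustrative simplification, not DeepMind's code.

```python
import numpy as np

def ewc_penalty(weights, old_weights, importance, top_fraction=0.05, strength=100.0):
    """Penalty for moving the weights that mattered most to the previous task."""
    cutoff = np.quantile(importance, 1.0 - top_fraction)  # keep roughly the top 5% protected
    protected = importance >= cutoff
    drift = (weights - old_weights) ** 2
    return strength * np.sum(importance[protected] * drift[protected])

# Toy usage: a new task would be trained on `new_task_loss + ewc_penalty(...)`.
rng = np.random.default_rng(0)
w_pong = rng.normal(size=1000)                      # weights after learning Pong
w_now = w_pong + rng.normal(scale=0.1, size=1000)   # weights drifting during Pac-Man
imp = rng.random(1000)                              # stand-in importance scores
print(ewc_penalty(w_now, w_pong, imp))
```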
Raia Hadsell leads a team of roboticists at DeepMind in London. At OpenAI, researchers used simulations to train a robot hand to solve a Rubik's Cube. [Photos: DeepMind; OpenAI]
There's an obvious side effect, however. Each time your neural network learns a task, more of its neurons will become inelastic. If Pong fixes some neurons, and Breakout fixes some more, "eventually, as your agent goes on learning Atari games, it's going to get more and more fixed, less and less plastic," Hadsell explains.
This is roughly similar to human learning. When we're young, we're fantastic at learning new things. As we age, we get better at the things we have learned, but find it harder to learn new skills.
"Babies start out having much denser connections that are much weaker," says Hadsell. "Over time, those connections become sparser but stronger. It allows you to have memories, but it also limits your learning." She speculates that something like this might help explain why very young children have no memories: "Our brain layout simply doesn't support it." In a very young child, "everything is being catastrophically forgotten all the time, because everything is connected and nothing is protected."
The loss-of-elasticity problem is, Hadsell thinks, fixable. She has been working with the DeepMind team since 2018 on a technique called "progress and compress." It involves combining three relatively recent ideas in machine learning: progressive neural networks, knowledge distillation, and elastic weight consolidation, described above.
Progressive neural networks are a straightforward way of avoiding catastrophic forgetting. Instead of having a single neural network that trains on one task and then another, you have one neural network that trains on a tasksay, Breakout. Then, when it has finished training, it freezes its connections in place, moves that neural network into storage, and creates a new neural network to train on a new tasksay, Pac-Man. Its knowledge of each of the earlier tasks is frozen in place, so cannot be forgotten. And when each new neural network is created, it brings over connections from the previous games it has trained on, so it can transfer skills forward from old tasks to new ones. But, Hadsell says, it has a problem: It can't transfer knowledge the other way, from new skills to old. "If I go back and play Breakout again, I haven't actually learned anything from this [new] game," she says. "There's no backwards transfer."
That's where knowledge distillation, developed by the British-Canadian computer scientist Geoffrey Hinton, comes in. It involves taking many different neural networks trained on a task and compressing them into a single one, averaging their predictions. So, instead of having lots of neural networks, each trained on an individual game, you have just two: one that learns each new game, called the "active column," and one that contains all the learning from previous games, averaged out, called the "knowledge base." First the active column is trained on a new taskthe "progress" phaseand then its connections are added to the knowledge base, and distilledthe "compress" phase. It helps to picture the two networks as, literally, two columns. Hadsell does, and draws them on the whiteboard for me as she talks.
The trouble is, by using knowledge distillation to lump the many individual neural networks of the progressive-neural-network system together, you've brought the problem of catastrophic forgetting back in. You'll change all the weights of the connections and render your earlier training useless. To deal with this, Hadsell adds in elastic weight consolidation: Each time the active column transfers its learning about a particular task to the knowledge base, it partially freezes the nodes most important to that particular task.
By having two neural networks, Hadsell's system avoids the main problem with elastic weight consolidation, which is that all its connections will eventually freeze. The knowledge base can be as large as you like, so a few frozen nodes won't matter. But the active column itself can be much smaller, and smaller neural networks can learn faster and more efficiently than larger ones. So the progress-and-compress model, Hadsell says, will allow an AI system to transfer skills from old tasks to new ones, and from new tasks back to old ones, while never either catastrophically forgetting or becoming unable to learn anything new.
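Putting the pieces together, progress-and-compress alternates a progress phase (a small active column learns the new task, drawing on the knowledge base through lateral connections) and a compress phase (the active column is distilled into the knowledge base under an elastic-weight-consolidation penalty). The skeleton below shows only that control flow; the column, distillation and EWC pieces are stand-in stubs, not DeepMind's implementation.

```python
class ProgressAndCompress:
    """Control flow of the progress-and-compress loop described above."""

    def __init__(self, make_column, distill, ewc_protect):
        self.make_column = make_column
        self.distill = distill               # distill(student, teacher)
        self.ewc_protect = ewc_protect       # partially freeze important weights
        self.knowledge_base = make_column()  # accumulates every past task

    def learn_task(self, task):
        # Progress phase: a fresh active column learns the new task, reusing
        # features from the knowledge base via lateral connections.
        active = self.make_column()
        active.train(task, lateral_from=self.knowledge_base)
        # Compress phase: distill the active column into the knowledge base
        # while EWC keeps weights important to earlier tasks nearly fixed.
        self.ewc_protect(self.knowledge_base)
        self.distill(student=self.knowledge_base, teacher=active)

# Minimal stubs so the loop runs end to end (placeholders, not real learners).
class StubColumn:
    def __init__(self):
        self.tasks = []

    def train(self, task, lateral_from=None):
        self.tasks.append(task)

agent = ProgressAndCompress(
    make_column=StubColumn,
    distill=lambda student, teacher: student.tasks.extend(teacher.tasks),
    ewc_protect=lambda network: None,
)
for game in ["Pong", "Breakout", "Pac-Man"]:
    agent.learn_task(game)
print(agent.knowledge_base.tasks)  # ['Pong', 'Breakout', 'Pac-Man']
```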
Other researchers are using different strategies to attack the catastrophic forgetting problem; there are half a dozen or so avenues of research. Ted Senator, a program manager at the Defense Advanced Research Projects Agency (DARPA), leads a group that is using one of the most promising, a technique called internal replay. "It's modeled after theories of how the brain operates," Senator explains, "particularly the role of sleep in preserving memory."
The theory is that the human brain replays the day's memories, both while awake and asleep: It reactivates its neurons in similar patterns to those that arose while it was having the corresponding experience. This reactivation helps stabilize the patterns, meaning that they are not overwritten so easily. Internal replay does something similar. In between learning tasks, the neural network recreates patterns of connections and weights, loosely mimicking the awake-sleep cycle of human neural activity. The technique has proven quite effective at avoiding catastrophic forgetting.
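In code, replay-style methods come down to interleaving rehearsed material with the new task's training data. The sketch below is closer to plain experience replay than to the generative internal replay Senator describes, but it shows the interleaving step; the buffer contents and mixing ratio are arbitrary.

```python
import random

def interleave_with_replay(new_batch, replay_buffer, replay_fraction=0.3):
    """Mix a fraction of stored old-task examples into each new-task batch,
    loosely mimicking the rehearsal that keeps old patterns from being overwritten."""
    k = int(len(new_batch) * replay_fraction)
    replayed = random.sample(replay_buffer, k) if len(replay_buffer) >= k else list(replay_buffer)
    mixed = list(new_batch) + replayed
    random.shuffle(mixed)
    return mixed

# Toy usage: Pong experience keeps being rehearsed while training on Pac-Man.
pong_buffer = [("pong_state", "pong_action")] * 50
pacman_batch = [("pacman_state", "pacman_action")] * 10
batch = interleave_with_replay(pacman_batch, pong_buffer)
```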
There are many other hurdles to overcome in the quest to bring embodied AI safely into our daily lives. "We have made huge progress in symbolic, data-driven AI," says Thrishantha Nanayakkara, who works on robotics at Imperial College London. "But when it comes to contact, we fail miserably. We don't have a robot that we can trust to hold a hamster safely. We cannot trust a robot to be around an elderly person or a child."
Nanayakkara points out that much of the "processing" that enables animals to deal with the world doesn't happen in the brain, but rather elsewhere in the body. For instance, the shape of the human ear canal works to separate out sound waves, essentially performing "the Fourier series in real time." Otherwise that processing would have to happen in the brain, at a cost of precious microseconds. "If, when you hear things, they're no longer there, then you're not embedded in the environment," he says. But most robots currently rely on CPUs to process all the inputs, a limitation that he believes will have to be surmounted before substantial progress can be made.
His colleague Petar Kormushev says another problem is proprioception, the robot's sense of its own physicality. A robot's model of its own size and shape is programmed in directly by humans. The problem is that when it picks up a heavy object, it has no way of updating its self-image. When we pick up a hammer, we adjust our mental model of our body's shape and weight, which lets us use the hammer as an extension of our body. "It sounds ridiculous but they [robots] are not able to update their kinematic models," he says. Newborn babies, he notes, make random movements that give them feedback not only about the world but about their own bodies. He believes that some analogous technique would work for robots.
At the University of Oxford, Ingmar Posner is working on a robot version of "metacognition." Human thought is often modeled as having two main "systems" – system 1, which responds quickly and intuitively, such as when we catch a ball or answer questions like "which of these two blocks is blue?," and system 2, which responds more slowly and with more effort. It comes into play when we learn a new task or answer a more difficult mathematical question. Posner has built functionally equivalent systems in AI. Robots, in his view, are consistently either overconfident or underconfident, and need ways of knowing when they don't know something. "There are things in our brain that check our responses about the world. There's a bit which says don't trust your intuitive response," he says.
For most of these researchers, including Hadsell and her colleagues at DeepMind, the long-term goal is "general" intelligence. However, Hadsell's idea of an artificial general intelligence isn't the usual one – of an AI that can perform all the intellectual tasks that a human can, and more. Motivating her own work has "never been this idea of building a superintelligence," she says. "It's more: How do we come up with general methods to develop intelligence for solving particular problems?" Cat intelligence, for instance, is general in that it will never encounter some new problem that makes it freeze up or fail. "I find that level of animal intelligence, which involves incredible agility in the world, fusing different sensory modalities, really appealing. You know the cat is never going to learn language, and I'm okay with that."
Hadsell wants to build algorithms and robots that will be able to learn and cope with a wide array of problems in a specific sphere. A robot intended to clean up after a nuclear mishap, for example, might have some quite high-level goal"make this area safe"and be able to divide that into smaller subgoals, such as finding the radioactive materials and safely removing them.
I can't resist asking about consciousness. Some AI researchers, including Hadsell's DeepMind colleague Murray Shanahan, suspect that it will be impossible to build an embodied AI of real general intelligence without the machine having some sort of consciousness. Hadsell herself, though, despite a background in the philosophy of religion, has a robustly practical approach.
"I have a fairly simplistic view of consciousness," she says. For her, consciousness means an ability to think outside the narrow moment of "now"to use memory to access the past, and to use imagination to envision the future. We humans do this well. Other creatures, less so: Cats seem to have a smaller time horizon than we do, with less planning for the future. Bugs, less still. She is not keen to be drawn out on the hard problem of consciousness and other philosophical ideas. In fact, most roboticists seem to want to avoid it. Kormushev likens it to asking "Can submarines swim?...It's pointless to debate. As long as they do what I want, we don't have to torture ourselves with the question."
Pushing a star-shaped peg into a star-shaped hole may seem simple, but it was a minor triumph for one of DeepMind's robots. [Photo: DeepMind]
In the DeepMind robotics lab it's easy to see why that sort of question is not front and center. The robots' efforts to pick up blocks suggest we don't have to worry just yet about philosophical issues relating to artificial consciousness.
Nevertheless, while walking around the lab, I find myself cheering one of them on. A red robotic arm is trying, jerkily, to pick up a star-shaped brick and then insert it into a star-shaped aperture, as a toddler might. On the second attempt, it gets the brick aligned and is on the verge of putting it in the slot. I find myself yelling "Come on, lad!," provoking a raised eyebrow from Hadsell. Then it successfully puts the brick in place.
One task completed, at least. Now, it just needs to hang on to that strategy while learning to play Pong.
This article appears in the October 2021 print issue as "How to Train an All-Purpose Robot."
Melissa McCarthy’s most-overlooked comedy is available to watch at home this weekend – JOE.ie
Posted: September 26, 2021 at 5:01 am
Brought to you by NOW
For most of us, Melissa McCarthy exploded into our lives thanks to her scene-stealing supporting role in Bridesmaids.
Ever since then, she has scored some very decent hits (Spy, Can You Ever Forgive Me?, The Heat) and some movies that didn't quite put her skillset to its best usage (Identity Thief, The Happytime Murders, Thunder Force).
However, one of her most-overlooked comedies – probably because it was released right in the middle of the pandemic – is Superintelligence. Thankfully, for anyone who might have missed it during its limited cinema run, it is available to watch at home this week.
The official synopsis is as follows:
"When a powerful superintelligence chooses to study Carol (McCarthy), the most average person on Earth, the fate of the world hangs in the balance. As the AI decides whether to enslave, save or destroy humanity, it's up to Carol to prove people are worth saving."
Joining McCarthy is a very impressive set of supporting actors, including Bobby Cannavale (Nine Perfect Strangers), Jean Smart (Mare of Easttown), Brian Tyree Henry (Eternals), Sam Richardson (Veep) and James Corden.
It is a very easy watch, a proper turn-brain-off-and-laugh type of comedy, and the critics agreed at the time of release:
San Francisco Chronicle - "The movie unfolds as a series of enjoyable, pressurised encounters between the lead character and everyone else - particularly, Bobby Cannavale as Carol's ex-boyfriend."
Film Threat - "The chemistry between McCarthy and Cannavale is great. I could see an entire, more traditional rom-com starring the two of them."
The Globe and Mail - "Superintelligence arrives as a comedy with actual charm, wit and, yes, laughs."
Superintelligence is available to watch on NOW from Friday, 24 September.
Jean Smart's husband: What happened to Richard? – The Artistree
Posted: September 24, 2021 at 11:11 am
Well, who wouldn't know Jean Smart, the American actress who has been a familiar face since the 1980s. Jean Elizabeth Smart kicked off her career with a Broadway debut and later took Hollywood by storm with her masterpieces. She was lately all over the news following her husband's death. On Sunday night, Hollywood veteran Jean Smart won an Emmy for outstanding actress in a comedy series for her work in Hacks. Jean Smart was all teary-eyed as she accepted the Emmy, which she dedicated to her late husband, Richard Gilliland. Let us get into the details of Jean Smart's husband.
Jean Smart was born on 13 September 1951 in the United States. After commencing her journey in local theatre in the Pacific Northwest, she had her Broadway breakthrough in 1981 in the historical drama Piaf, playing Marlene Dietrich, the famous German actress and leading lady of twentieth-century film. Smart went on to play Charlene Frazier Stillfield in the CBS show Designing Women, which she starred in from 1986 until 1991.
The Hollywood gem, with teary eyes, gave a speech which made the people in attendance emotional. "Before I say anything else, I'd want to pay tribute to my deceased husband, Richard Gilliland, who died six months ago," she said tearfully. "I wouldn't be here if it hadn't been for him laying his career on hold so that I could reap the benefits of all the amazing changes that have come my way."
Jean Smart's husband died on 18 March 2021. The veteran actress attended the Emmys and was emotional on stage, thanking her late husband, Richard Gilliland, and dedicating the award she won for Hacks to him. Richard died of a heart-related issue at the age of 71. Jean Smart received a standing ovation from the elite of the Hollywood film industry for her fabulous performance in the comedy series Hacks. Jean and Richard were together for 34 years, from 1987 until his death in 2021.
Jean Smart was born and brought up in Seattle, Washington. She was the daughter of a teacher, Douglas Alexander Smart, and a homemaker, Kathleen Marie (Kay) Sanders. Jean has four siblings in total and is the youngest of them all. When Jean Smart was just 13 years old, she was diagnosed with type 1 diabetes.
She graduated from Ballard High School in Seattle in 1969, where she became interested in acting through the theatre program. Jean also earned a BFA in acting through the Professional Actors Training Program at the University of Washington. She is a member of the Alpha Delta Pi sorority at the University of Washington.
Another interesting fact about Jean is that she is a direct descendant of Dorcas Hoar, one of the last people convicted of witchcraft during the Salem witch trials. She traced her family line and discovered her Salem connection during Season 10 of the TV show Who Do You Think You Are?, an American genealogy documentary series.
Jean Smart has been a part of several Hollywood hits since the 80s. A few of her popular movies include Hoodlums, Protocol, Flashlight, Fire with Fire, Mistress, Project X, The Yearling, Edie and Pen, Guinevere, Snow Day, Sweet Home Alabama, Forever Fabulous, The Odd Couple II, Garden State, Bringing Down the House, Lucky You, Hero Wanted, Whisper of the Heart, Barry Munday, Hero in the Revolt and many more. Her latest works are Senior Moment, Superintelligence, and Babylon, which is yet to be released in the coming year.
She has also been a part of many television shows, like Dirty John, Mad About You, Watchmen, and the latest one, Hacks. She has won several honors and accolades, including Primetime Emmy nominations and a Tony Award nomination.
Jean Smart has been nominated for 9 Primetime Emmy Awards for her television performances. She won twice for her role in Frasier, which ran from 2000 to 2001, and once for Samantha Who?. The most recent win was in the best actress in a comedy series category, for Hacks.
Stranded Asset: Sam Richardson To Star In Universal Pic He Penned With Jen D'Angelo; Chris Pratt To Produce – Deadline
Posted: September 16, 2021 at 5:57 am
Sam Richardson (Veep, The Tomorrow War) is set to star in Stranded Asset, an action comedy he penned with Jen D'Angelo (Hocus Pocus 2), which Chris Pratt is producing for Universal Pictures.
The film's plot is being kept under wraps. Pratt is producing under his Indivisible Productions banner, which has a first-look deal with Universal. The companies are also currently developing the recently announced Saigon Bodyguards, which will reunite Pratt with his Avengers collaborators, the Russo Brothers, along with actor Wu Jing.
Stranded Asset comes on the heels of Skydance's sci-fi blockbuster The Tomorrow War, which Pratt exec produced and starred in alongside Richardson.
Richardson is a SAG Award-winning actor, writer and producer who also recently starred in IFC horror comedy Werewolves Within, and will next be seen in Lord and Miller's Apple TV+ murder mystery comedy series, The Afterparty. He has also appeared on the film side in HBO Max's Superintelligence and Focus Features' Oscar winner Promising Young Woman, among other titles. He's best known on the TV side for his turn as Veep's Richard Splett, and has also featured in such series as HouseBroken, Marvel's M.O.D.O.K., Woke, and Bojack Horseman.
D'Angelo is a writer, producer and actor who most recently sold an Untitled Sister Comedy to Netflix starring Awkwafina and Sandra Oh. She was the on-set writer, along with Richardson, for The Tomorrow War, and also penned Hocus Pocus 2, an anticipated sequel to Walt Disney Pictures' classic 1993 Halloween pic, which will enter production this fall.
Richardson is represented by UTA, Artists First Inc. and Jackoway, Austen, Tyerman. D'Angelo is repped by UTA, Artists First Inc., and attorneys Ginsburg Daniels LLP. Pratt is with UTA, Rise Management and Sloan, Offer, Weber and Dern.
Here’s Where You’ve Seen The Cast Of "Nine Perfect Strangers" Before – BuzzFeed
Posted: September 10, 2021 at 6:04 am
These strangers are actually familiar faces.
Where you've seen him before: Knives Out, The Little Drummer Girl, Fahrenheit 451, Waco, 12 Strong, The Shape of Water, Nocturnal Animals, Loving, Batman v. Superman: Dawn of Justice, Boardwalk Empire, and Man of Steel
Where you've seen him before: Jolt, Thunder Force, Tom and Jerry, Big Mouth, Superintelligence, Homecoming, Mr. Robot, The Irishman, Angie Tribeca, Ant-Man and the Wasp, Will & Grace, Jumanji: Welcome to the Jungle, I, Tonya, Master of None, Boardwalk Empire, and Ally McBeal
Where you've seen her before: The Midnight Sky, Little Fires Everywhere, Hunters, The Chi, A Madea Family Funeral, Complications, Once Upon a Time, The Following, Beautiful Creatures, and Southland
Where you've seen him before: Brand New Cherry Flavor, Trese, The Good Place, Bad Times at the El Royale, The Good Doctor, and The Romeo Section
Where you've seen her before: Stateless, The Hunting, The Cry, Offspring, Party Tricks, Rush, Underbelly, X-Men Origins: Wolverine, Love My Way, Stingers, Blue Heelers, and State Coroner
Where you've seen him before: Snowfall, The United States vs. Billie Holiday, The Way Back, High Flying Bird, American Vandal, Unreal, King Bachelor's Pad, Freakish, and Sharknado 3: Oh Hell No!
HBO Max is Finally on Roku: Discover How It Works – Film Daily
Posted: September 1, 2021 at 12:07 am
You may have noticed that something has changed if you subscribed to HBO's standalone video streaming service, known as HBO Now, or watched the HBO channel on cable through HBO Go. These have been merged and replaced by HBO Max, a single new video streaming service.
A deal was reached between Roku and WarnerMedia for the distribution of HBO Max on the Roku platform, nearly seven months after HBO Max was launched. As a result of the agreement, the streaming service is now available on all major over-the-top platforms.
Roku had 46 million active user accounts during HBO Max's absence. It's unclear what the terms of the deal were, but both sides expressed satisfaction at finally settling their differences.
HBO Max offers every episode of Game of Thrones, The Sopranos, Lovecraft Country, and The Undoing, as well as award-winning specials and documentaries, plus new movies every week.
You can binge-watch classic TV shows like Friends, The Fresh Prince of Bel-Air, and The Big Bang Theory.
These exclusive Max Originals are not to be missed: The Flight Attendant, Superintelligence, Search Party, and many more titles are included.
If you want to add the new HBO Max channel to your Roku home screen, you can find it in the New and Notable section of the Channel Store. If you already subscribe to HBO on your Roku device, HBO Max on Roku will be added automatically.
It costs $14.99 per month to subscribe to HBO Max. Those who prepay for six months between December 3, 2020 and January 31, 2021 can save 20 percent on their membership: a six-month subscription costs $69.99, saving about $20 compared with paying month to month.
AT&T customers may be eligible for a free year of HBO Max with their phone or internet service.
Some advice: if you're currently subscribed to HBO through your cable provider, HBO Max may already be part of the package. Check with your provider to access the service and all of its features. You may also want to consider streaming services instead of cable.
Roku was the last major distribution partner WarnerMedia needed for HBO Max ahead of the December debut of Wonder Woman 1984, which stars Gal Gadot and opens simultaneously in cinemas and on HBO Max. Warner Bros.' 2021 slate of films will debut on HBO Max in the United States concurrently with their theatrical release and will be available exclusively for streaming for one month.
As part of the deal, Roku users who already subscribe to HBO through Roku will automatically have their existing HBO apps updated to the HBO Max app and will be able to log in using their existing HBO credentials.
Like Amazon and Apple with HBO Max, it appears that Roku will no longer be able to sell HBO as a channel subscription in its Roku Channel store. HBO Max subscriptions will be sold through Roku Pay, the company's payment service for streaming devices.
Shares of Roku were up more than 3% in after-hours trading following the announcement of the HBO Max deal.
As a result of the partnership, HBO Max's vast collection of famous entertainment brands and blockbuster direct-to-stream theatrical films will be available to Roku's more than 100 million users, who have made Roku the No. 1 TV streaming platform in the United States, according to Scott Rosenberg, Roku's SVP of the platform business. "We believe that all entertainment will be streamed," Rosenberg said.
The company's chief revenue officer said: "We're making new records in the months ahead, and we can't wait to collaborate with our longtime partners at Roku to develop on our previous victories and bring HBO Max's best-in-class quality entertainment to Roku's large and highly engaged audience."
Comcast announced that the HBO Max app would be available on Xfinity Flex and Xfinity X1, and the HBO Max app also launched on the PlayStation 5. Having resisted for several months, Amazon finally agreed to carry HBO Max on Fire TV and Fire Tablet devices in November, after an agreement to stop selling HBO in Amazon Prime Video Channels.