The Prometheus League
Breaking News and Updates
- Abolition Of Work
- Ai
- Alt-right
- Alternative Medicine
- Antifa
- Artificial General Intelligence
- Artificial Intelligence
- Artificial Super Intelligence
- Ascension
- Astronomy
- Atheism
- Atheist
- Atlas Shrugged
- Automation
- Ayn Rand
- Bahamas
- Bankruptcy
- Basic Income Guarantee
- Big Tech
- Bitcoin
- Black Lives Matter
- Blackjack
- Boca Chica Texas
- Brexit
- Caribbean
- Casino
- Casino Affiliate
- Cbd Oil
- Censorship
- Cf
- Chess Engines
- Childfree
- Cloning
- Cloud Computing
- Conscious Evolution
- Corona Virus
- Cosmic Heaven
- Covid-19
- Cryonics
- Cryptocurrency
- Cyberpunk
- Darwinism
- Democrat
- Designer Babies
- DNA
- Donald Trump
- Eczema
- Elon Musk
- Entheogens
- Ethical Egoism
- Eugenic Concepts
- Eugenics
- Euthanasia
- Evolution
- Extropian
- Extropianism
- Extropy
- Fake News
- Federalism
- Federalist
- Fifth Amendment
- Financial Independence
- First Amendment
- Fiscal Freedom
- Food Supplements
- Fourth Amendment
- Free Speech
- Freedom
- Freedom of Speech
- Futurism
- Futurist
- Gambling
- Gene Medicine
- Genetic Engineering
- Genome
- Germ Warfare
- Golden Rule
- Government Oppression
- Hedonism
- High Seas
- History
- Hubble Telescope
- Human Genetic Engineering
- Human Genetics
- Human Immortality
- Human Longevity
- Illuminati
- Immortality
- Immortality Medicine
- Intentional Communities
- Jacinda Ardern
- Jitsi
- Jordan Peterson
- Las Vegas
- Liberal
- Libertarian
- Libertarianism
- Liberty
- Life Extension
- Macau
- Marie Byrd Land
- Mars
- Mars Colonization
- Mars Colony
- Memetics
- Micronations
- Mind Uploading
- Minerva Reefs
- Modern Satanism
- Moon Colonization
- Nanotech
- National Vanguard
- NATO
- Neo-eugenics
- Neurohacking
- Neurotechnology
- New Utopia
- New Zealand
- Nihilism
- Nootropics
- NSA
- Oceania
- Offshore
- Olympics
- Online Casino
- Online Gambling
- Pantheism
- Personal Empowerment
- Poker
- Political Correctness
- Politically Incorrect
- Polygamy
- Populism
- Post Human
- Post Humanism
- Posthuman
- Posthumanism
- Private Islands
- Progress
- Proud Boys
- Psoriasis
- Psychedelics
- Putin
- Quantum Computing
- Quantum Physics
- Rationalism
- Republican
- Resource Based Economy
- Robotics
- Rockall
- Ron Paul
- Roulette
- Russia
- Sealand
- Seasteading
- Second Amendment
- Seychelles
- Singularitarianism
- Singularity
- Socio-economic Collapse
- Space Exploration
- Space Station
- Space Travel
- Spacex
- Sports Betting
- Sportsbook
- Superintelligence
- Survivalism
- Talmud
- Technology
- Teilhard De Chardin
- Terraforming Mars
- The Singularity
- Tms
- Tor Browser
- Trance
- Transhuman
- Transhuman News
- Transhumanism
- Transhumanist
- Transtopian
- Transtopianism
- Ukraine
- Uncategorized
- Vaping
- Victimless Crimes
- Virtual Reality
- Wage Slavery
- War On Drugs
- Waveland
- Ww3
- Yahoo
- Zeitgeist Movement
- Prometheism
- Forbidden Fruit
- The Evolutionary Perspective
Category Archives: Singularity
At Today’s Riot, Trump’s Trolls Turned Their Violent Fantasies Into Reality – Mother Jones
Posted: January 7, 2021 at 5:39 am
Let our journalists help you make sense of the noise: Subscribe to the Mother Jones Daily newsletter and get a recap of news that matters.
On Wednesday, the United States arrived at peak disinformation singularity. The lines between right-wing conspiracy internet forums and physical reality disappeared. Pro-Trump extremists stormed the Capitol as online trolls who had spent years threatening violence fully realized themselves, making it clear that they had never really been just trolling.
The mob that stormed the Capitol manifested years' worth of posts lodged in unhinged, far-right, conspiracy-laden corners of the internet. Such rhetoric crept toward the mainstream, crossing over into right-wing media and eventually coming out of Trump's own mouth. It won new converts and spread more widely. It finally broke loose on Wednesday, as they did what they'd always said they would.
Through their farcical but all-too-real siege, Trump extremists turned the Capitol into something indistinguishable from the wild plots envisioned on their forums and groups. In the days leading up to the riot, people claiming that they were coming to Washington on Wednesday posted on thedonald.win like it was 1944 and they were about to get on landing craft headed for the shores of Normandy. "Today I had the very difficult conversations with my children, that daddy might not come home from D.C.," one wrote. "My husband's not happy, and he's going with me, but I told him that if I say go and leave me behind, that he must do it. No questions asked." "I look forward to standing with you on the front lines," another wrote in response.
On similar forums, extremist Trump supporters have been publicly outlining such plans for years: that they will take justice into their own hands by trying to start a second civil war or carrying out citizens arrests.
Ahead of today, experts thought the convening of a broad range of Trump supporters, including extremist groups, would radicalize attendees even further toward the hard right. Their storming of the Capitol all but ensures that will come true, while illustrating the power they have already amassed.
Posted in Singularity
Comments Off on At Today’s Riot, Trump’s Trolls Turned Their Violent Fantasies Into Reality – Mother Jones
Scientists Just Created a Catalyst That Turns CO2 Into Jet Fuel – Singularity Hub
Posted: at 5:39 am
Air travel is one of the worst contributors to global warming, burping out nearly a billion tons of CO2 a year. But what if we could close that circle by converting those greenhouse gases back into jet fuel?
In the face of phenomena like climate change, plastic pollution, deforestation, and land degradation, people are increasingly questioning the short-term thinking that underpins our societies. Some have dubbed our current approach a "linear economy": we extract raw materials, process them into products, and then dispose of them once they've outlived their usefulness.
As the global population grows, this strategy is becoming increasingly unsustainable. That's prompting growing interest in a different model known as the "circular economy": rather than simply discarding our waste, we find ways to reuse it or recycle it into something more useful.
For years now, chemists have been trying to apply this idea to one of the most environmentally damaging sectors of our economy: the aviation industry. Not only do planes emit huge amounts of CO2, they also pump other greenhouse gases like nitrogen oxides directly into the upper atmosphere, where their warming effect is greatly increased.
The fossil fuels they burn to create all these emissions are hydrocarbons, which means they are made up of a combination of carbon and hydrogen. That's led some to suggest it might be possible to create synthetic versions of these fuels by capturing the CO2 planes produce and combining it with hydrogen extracted from water.
If the energy used to power these reactions came from renewable sources, producing the fuels wouldn't lead to any increase in emissions. And when these fuels were burned, they would simply be returning CO2 captured from the atmosphere, making the fuel effectively carbon neutral.
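The bookkeeping behind that carbon-neutral claim can be sketched with simple stoichiometry. Here is a rough back-of-the-envelope estimate in Python, assuming the fuel can be approximated as n-dodecane (C12H26); real jet fuel is a hydrocarbon blend, so treat the numbers as order-of-magnitude only:

```python
# Approximate jet fuel as n-dodecane (C12H26) -- an assumption for illustration.
# Overall synthesis reaction: 12 CO2 + 37 H2 -> C12H26 + 24 H2O

M_CO2, M_H2, M_FUEL = 44.01, 2.016, 170.34  # molar masses in g/mol

co2_per_kg_fuel = 12 * M_CO2 / M_FUEL  # kg of CO2 consumed per kg of fuel
h2_per_kg_fuel = 37 * M_H2 / M_FUEL    # kg of H2 consumed per kg of fuel

print(f"CO2 captured per kg of fuel: {co2_per_kg_fuel:.2f} kg")  # ~3.10 kg
print(f"H2 required per kg of fuel:  {h2_per_kg_fuel:.2f} kg")   # ~0.44 kg
```

Burning a kilogram of the fuel releases roughly the same ~3.1 kg of CO2 back, which is the sense in which the cycle is closed; the ~0.44 kg of hydrogen per kilogram of fuel hints at why water splitting is such a large part of the energy cost.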
It's a nice idea, but the process of turning CO2 into useful fuels is more complex than it might sound. Most efforts so far have required expensive catalysts (substances that boost the speed of a chemical reaction) or multiple energy-intensive processing steps, which means the resulting fuel is far pricier than fossil fuels.
Now, though, researchers from the University of Oxford have developed a new low-cost catalyst that can directly convert CO2 into jet fuel, which they say could eventually lay the foundation for a circular economy for aviation fuel.
"Instead of consuming fossil crude oil, jet aviation fuels and petrochemical starting compounds are produced from a valuable and renewable raw material, namely, carbon dioxide," they write in a paper in Nature Communications.
"Within a jet fuel CO2 circular economy, the goods (here the jet fuel) are continually reprocessed in a closed environment," they add. "This would not only save the natural fossil resources and preserve the environment, but would also create new jobs, economies, and markets."
Creating jet fuel is particularly challenging because most routes for synthesizing hydrocarbons from CO2 tend to produce smaller molecules with only a few carbon atoms, like methane and methanol. Jet fuels are made up of molecules with many long chains of carbon atoms, and there have been few successful attempts to produce them directly from CO2 without extra processing.
But by combining findings from previous research, the group was able to create a low-cost iron-based catalyst that could produce substantial yields of jet fuel from CO2 and hydrogen. Iron is already commonly used in these kinds of reactions, but they combined it with manganese, which has been shown to boost the activity of iron catalysts, and potassium, which is known to encourage the formation of longer-chain hydrocarbons.
They prepared the catalysts using an approach known as the Organic Combustion Method (OCM), in which the raw ingredients are combined with citric acid to make a slurry that is then ignited at 662°F (350°C) and burned for four hours to create a fine powder. This is a much simpler processing technique than previous approaches, which means it holds promise for industrial applications.
Scaling up this process to meet the demands of the aviation industry won't be easy. Boosting the efficiency of the synthesis step is only one part of the puzzle: collecting large amounts of CO2 from the air is very tricky, and splitting water to make hydrogen also uses a lot of power.
Plans are already afoot to build a pilot plant that will convert CO2 into jet fuel at Rotterdam Airport in the Netherlands, but as Friends of the Earth campaigner Jorien de Lege told the BBC, scaling up the technology will be a herculean task.
"If you think about it, this demonstration plant can produce a thousand liters a day based on renewable energy. That's about five minutes of flying in a Boeing 747," she said.
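That figure is easy to sanity-check. A quick estimate, assuming a cruise fuel burn of roughly 4 liters per second for a 747 (an order-of-magnitude assumption, not an official spec):

```python
# Rough check on the "five minutes of flying" claim.
plant_output_liters_per_day = 1000
burn_liters_per_second = 4.0  # assumed 747 cruise burn, order of magnitude only

minutes_of_flight = plant_output_liters_per_day / burn_liters_per_second / 60
print(f"~{minutes_of_flight:.1f} minutes of cruise per day of plant output")
```

At roughly four minutes of cruise per day of output, the quoted five-minute figure is indeed the right order of magnitude.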
Nonetheless, developing a cheap, high-yield catalyst is a major step towards making the idea more feasible. Getting our planes to fly on thin air may sound like a wildly ambitious idea, but that goal has just come a little bit closer.
Image Credit: Free-Photos from Pixabay
Essay: How to read in a restless world – Hindustan Times
Posted: at 5:39 am
Did your New Year's resolution include reading more? Do you sometimes find that you have trouble focusing on your reading, especially on books in their entirety, in ways you did not before? Given the times we live in, I have to say this is quite natural. We can still be passionate readers, but our expectations from the practice of reading might just have to shift a little to get attuned to our reality.
I started thinking about this while chatting with one of my students. A first-year in college, born in the 21st century, she is a digital native, someone who has literally never seen a time when the internet wasn't around. She told me that she used to love reading, but in the last couple of years, she's found it hard to focus on her reading. The distractions always steal her away.
Once in a while, digital natives and digital immigrants have the same kinds of problems, especially as we move through different phases in our relationship with technology.
We're still trying to read like we used to in the old days; this is true even of the digital natives who are beginning their reading lives in this new reality. There is an existing mode and discourse of reading (the only one we have), and everybody who reads, especially so-called serious readers, must get initiated into that mode. But what is that mode?
***
Reading: transforming any space into a private place. (Shutterstock)
The social phenomenon of private reading as we know it is fairly recent in the history of humanity. While printed texts played a role in ancient China, mainly as scripture and SparkNotes for their version of UPSC exams, ancient and medieval Europe only saw books as rare and precious handwritten manuscripts owned by churches, royalty, and wealthy aristocrats. Sustained reading on a mass scale would have to wait for the popular spread of printed books in the 18th century. By then, capitalism would expand to create the modern middle class, who had the literacy, leisure, and purchasing power to buy books on a large enough scale to create and sustain a publishing industry. Hence was born a modern practice: sitting in isolation and reading quietly for a long time, to finish a whole book.
Reading as entertainment reached its peak during the Victorian age, especially of long books such as the novel, which eventually went from being a popular genre to the status of high art, unfortunately (but understandably) losing its popularity in the process. Excitement over novels reached a stage where people crowded the New York harbour to pounce on passengers arriving from England with questions about the next instalment of the serialised Dickens novel that had not yet reached America: Is Little Nell dead? It was the kind of popularity enjoyed by the soap opera in the 20th century and the web series today. The first techno-generic challenge to the dominance of books as popular entertainment came in the early 20th century, from the art form of cinema. Cinema's most direct threat was not to reading but to its performative precedent, theatre. But just as the newer art form of photography, with its superior capture of reality, drove the older art of painting to Impressionism, cinema inspired theatre toward experimental forms such as the epic and expressionism, drawing attention to the flesh-and-blood presence that made it unique.
The 20th-century challenge to the primacy of reading was different in one important way from what would come in the 21st. The former demanded full attention, especially back in the days when the only way to watch a film was to enter a dark hall, leaving everything else behind: the single-screen cinema hall offered a real and metaphoric equivalent of the book's undivided experience. Multiplex theatres inside shopping malls then made movie watching one possibility among many, offering several movies from which one could choose.
That is, possibly, the defining character of culture in the digitized and disembodied 21st century: consuming multiple cultural experiences at the same time. I remember an editorial argument in n+1 magazine from a few years ago that said something similar: that we are now more likely to read something while also listening to music, or enjoy a joke or a debate on social media in the middle of a movie streaming on our iPad. It's a technologically curated version of older pleasures such as enjoying fine wine with poetry, or mead with the minstrels. The print-era singularity of the artistic experience will be replaced by a more pluralized, fragmented, and differently fulfilling experience.
Author Saikat Majumdar (Tribuvan Tiwari)
That is what I told my student: that I, too, have lost the old-world concentration I had from the time when multisensory digital distractions were, in my case, unavailable (as opposed to her situation of usage restriction). My body is too restless. It's used to multiple activities, multiple buttons, multiple screens. But my personal solution to the problem has been lasting and effective. I walk around the house with my book or e-reader. The restless energy gets channelled into my movement while I keep my mind drowned in language. For those who are able to do this, I highly recommend it!
I think we need to accept that the way we read in an older world (with long, undivided attention, as a singular activity) won't be available to us most of the time. It's okay to read while finishing a 45-minute lap on your cross-trainer; it's okay to read with music in the background, our minds swimming between sound and sentence. It's okay to love the beauty of physical books while actually doing most of your reading on e-readers, with multiple options stored on their screens. All of this not because we're busier (that's always an excuse) but because something fundamental has shifted in our sensibility, and it needs to contain multiple energies even while it tends to a classic love, that of reading.
The modernity of print gifted us the singularity of a beautiful artistic and intellectual experience. To celebrate it in our restless present, we can turn that restlessness into multiple windows of experience, simultaneously enjoyable.
Saikat Majumdars books include the novels The Firebird and The Scent of God, and the nonfiction, College.
Music Critic Praises BTS V's Voice + Says He Is a Vital Part of the Group's Musical Identity – Kpopstarz
Posted: at 5:39 am
BTS member V has one of the most distinguishable voices in the group and has been praised by critics for his deep, husky baritone. Recently, Kim Young Dae, a member of the selection committee for the Korean Music Awards, praised V's amazing vocals.
(Photo: BTS Twitter)
In a detailed review that goes in depth on the individual contributions of all seven members of BTS to their success, Kim Young Dae gushes about V's voice, claiming it is an integral part of the boy band's identity.
To start his comments on BTS member V, Kim Young Dae talks about wishing this period of the coronavirus would pass so that he could listen to V's solo song, "Inner Child," in a large stadium with many fans. He calls the song a hymn of youth, a beautiful confession that, for now, he has to be satisfied listening to through his headphones.
(Photo: BTS Twitter)
He goes on to say that while BTS's songs are not remembered for one overpowering voice, V has something special about him. The idol has a gifted tone and vocal ability, solidifying his importance in BTS's musical identity. He adds that he, as a man, envies V's deep baritone voice. His voice was described as low but not too low, full of volume and texture that stuns people with its crisp clarity, undeniably soulful, and impossible to mimic.
It is difficult to find the words to explain V's voice, but it takes only seconds to identify his tone. V's solo song "Singularity" is both sensual and captivating, a song that cannot be heard anywhere else: an unconventional neo-soul track that elicits new interest in V as a vocalist.
(Photo: BTS Twitter)
V's emotional vocals in "Epilogue: Young Forever," a song said to embody BTS's identity itself, and his fervent voice in "Save Me" are some of his remarkable moments as a vocalist in BTS. In songs like "DNA," V's low, stable, and pure voice becomes an important element for listeners, as he carries the song's musical narration perfectly.
In a group where each member has limited space and needs to take on their own part, it is difficult to appreciate V's charms from BTS's music alone. That is why it is vital to listen to V's solo songs, which give us a sense of winter. Songs like "Scenery" or "Winter Bear" show V's charm perfectly: not only his vocals and his deep baritone, but also how effortlessly he conveys his feelings and his sensitivity.
(Photo: BTS Twitter)
Kim Young Dae praises V as having a voice that is perfect for a movie soundtrack, as he is able to calmly deliver the emotion of a song. "It is just like his personality," the music critic concludes.
Kim Young Dae is not the only critic who has praised V's vocals. Bianca Méndez praised V's solo song "Singularity," the opening track of BTS's "Love Yourself: Tear," saying it was a prominent "tone-setter" on the album. Katie Goh of VICE has also praised "Singularity," calling it one of V's best vocal performances to date.
Do you like V's voice? Tell us in the comments below!
For more K-Pop news and updates, always keep your tabs open here on KpopStarz.
Written by Alexa Lewis
That’s all folks, the singularity is near. Elon Musk’s cyber pigs and brain computer tech – Toronto Star
Posted: September 7, 2020 at 2:26 am
Goodbye Dolly. Hello Gertrude and Dorothy.
Joining the first sheep ever cloned as a sign of our science-fact future, this past week celebrity entrepreneur Elon Musk gave a presentation about Neuralink, his company focused on creating technology that links with brains. As part of it, he introduced pigs that had the prototype devices implanted in them. The internet dubbed them Cyber Pigs, and portions of readings from Gertrude's brain were played.
Brain-computer technology is at a point where the potential medical implications are so exciting that many players are pursuing different approaches to the field. The ethics of using this technology are sometimes best explored in science fiction like Black Mirror and The Matrix.
To discuss the latest in brain computer technology and the Neuralink presentation, we are joined by Graeme Moffat. He is a Senior Fellow at the Munk School of Global Affairs and Public Policy, and also the Chief Scientist and cofounder of System 2 Neurotechnology. He was formerly Chief Scientist and Vice President of Regulatory Affairs with Interaxon, a Toronto-based world leader in consumer neurotechnology.
Listen to this episode and more at This Matters or subscribe at Apple Podcasts, Spotify, Google Podcasts or wherever you listen to your favourite podcasts.
Before or After the Singularity – PRESSENZA International News Agency
Posted: at 2:26 am
Scientific theories developed by independent, unconnected groups have come to the following conclusion: something will happen around the world that will change human history in a special way. While the predictions may not agree on exact dates, they all have one thing in common: it will happen this century, within a few decades.
This event, or sum of events, has been named the SINGULARITY, and it has unique characteristics: development no longer simply accelerates along its existing course, but changes abruptly, or collapses and starts again.
These predictions could be made on the basis of curves that trace the development of natural ecosystems as well as the significant milestones of universal human history from the beginning of time.
Researchers like Alexander Panov, Ray Kurzweil, and many others were able to bring these considerations together by combining fundamentally different variables such as energy sources, automation, artificial intelligence, modes of production and consumption, and so on.
However, the majority of theories portray science and technology as the creator of this future and not as a by-product of the evolution of our species.
We are of the opinion that the change arises from humanity's own awareness, in its human and spiritual dimension, and that as a consequence of this inner change external changes also occur; these do not exclude technology, artificial intelligence, and genetic engineering, but instead put them in the foreground as vehicles and supports for this change.
In summary, the SINGULARITY is for us a wonderful tool of theoretical analysis: a way to imagine the world we are striving toward, and also to prevent the dangers that such a change could bring.
In what other way could we seriously speak of this chaotic future? It's as if we were on a ship drawn toward the enormous gravity of a black hole, a zone where time and space warp. Would we be able to know at what point in time, or at what distance, we would reach the central vortex of the black hole? We're not trying to do futurology, least of all under these conditions.
But analyzing things from this point of view, with a warning in mind, is an excellent way of imagining the world that we may expect in the future.
Our area of interest focuses on human existence, and this is the basis of our analysis, which of course makes no claim to scientific accuracy. It may also allow us, later, to question current science, with its alleged thoroughness and infallibility.
We strive for the evolution of humankind; we want a revolution in its consciousness and values. We reject the reification of the human being and the apocalyptic view of the future. We do not deny that machines are useful if they help relieve people of work. We speak out against any kind of concentration of power and demand the expansion of human freedom, which can neither be restricted nor replaced by soulless algorithms.
As you can see, the future can hold many nuances. Our goal is to exchange ideas with those who are interested in these topics.
What is your vision of the future?
Translation from German by Lulith V. by the Pressenza volunteer translation team. We are looking for volunteers!
Carlos Santos is a teacher and has been active in a humanist movement all his life. For the last decade he has devoted himself to audiovisual implementations as a director, producer and screenwriter of documentaries and feature films within his production company Esencia Humana Films. Email: escenariosfuturos21@gmail.com; Blog: escenariosfuturos.org
Neuralink’s Wildly Anticipated New Brain Implant: the Hype vs. the Science – Singularity Hub
Posted: at 2:26 am
Neuralink's wildly anticipated demo last Friday left me with more questions than answers. With a presentation teeming with promises and vision but scant on data, the event nevertheless lived up to its main goal: a memorable recruitment session to further the growth of the mysterious brain implant company.
Launched four years ago with the backing of Elon Musk, Neuralink has been working on futuristic neural interfaces that seamlessly listen in on the brain's electrical signals and, at the same time, write into the brain with electrical pulses. Yet even by Silicon Valley standards, the company has kept a tight seal on its progress, conducting all manufacturing, research, and animal trials in-house.
A vision of marrying biological brains to artificial ones is hardly unique to Neuralink. The past decade has seen an explosion in brain-machine interfaces: some implanted into the brain, some into peripheral nerves, and some that sit outside the skull like a helmet. The main idea behind all these contraptions is simple: the brain mostly operates on electrical signals. If we can tap into these enigmatic neural codes (the brain's internal language), we could potentially become the architects of our own minds.
Let people with paralysis walk again? Check and done. Control robotic limbs with their minds? Yup. Rewriting neural signals to battle depression? In humans right now. Recording the electrical activity behind simple memories and playing it back? Human trials ongoing. Linking up human minds into a BrainNet to collaborate on a Tetris-like game through the internet? Possible.
Given this backdrop, perhaps the most impressive part of the demonstration isn't the lofty predictions of what brain-machine interfaces could potentially do one day. In some sense, we're already there. Rather, what stood out was the redesigned Link device itself.
At Neuralink's coming-out party last year, the company envisioned a wireless neural implant with a sleek ivory processing unit worn at the back of the ear. The electrodes of the implant itself are sewn into the brain by automated robotic surgery, relying on brain imaging techniques to avoid blood vessels and reduce brain bleeding.
The problem with that design, Musk said, is that it had multiple pieces and was complex. "You still wouldn't look totally normal because there's a thing coming out of your ear."
The prototype at last week's event came in a vastly different physical shell. About the size of a large coin, the device replaces a small chunk of your skull and sits flush with the surrounding skull matter. The electrodes, implanted inside the brain, connect to this topical device. When covered by hair, the implant is invisible.
Musk envisions an outpatient therapy where a robot can simultaneously remove a piece of the skull, sew the electrodes in, and replace the missing skull piece with the device. According to the team, the Link has similar physical properties and thickness as the skull, making the replacement a sort of copy-and-paste. Once inserted, the Link is then sealed to the skull with superglue.
"I could have a Neuralink right now and you wouldn't know it," quipped Musk.
For a device that small, the team packed an admirable array of features into it. The Link device has over 1,000 channels, which can be individually activated. This is on par with Neuropixels, the crème de la crème of neural probes with 960 recording channels, which is currently widely used in research, including by the Allen Institute for Brain Science.
Compared to the Utah Array, a legendary implant system used for brain stimulation in humans with only 256 electrodes, the Link has an obvious edge in terms of pure electrode density.
What's perhaps most impressive, however, is its onboard processing for neural spikes, the electrical patterns generated by neurons when they fire. Electrical signals are fairly chaotic in the brain, and filtering spikes from noise, as well as separating trains of electrical activity into individual spikes, normally requires quite a bit of processing power. This is why in the lab, neural recordings are usually stored and processed offline using computers, rather than with onboard electronics.
The problem gets even more complicated when considering wireless data transfer from the implanted device to an external smartphone. Without accurate and efficient compression of those neural data, the transfer could lag tremendously, drain battery life, or heat up the device itself, something you don't want happening to a device stuck inside your skull.
To get around these problems, the team has been working on algorithms that use the characteristic shapes of electrical patterns to efficiently identify individual neural firings. The data is processed on the chip inside the skull-mounted device. Recordings from each channel are filtered to root out obvious noise, and spikes are then detected in real time. Because different types of neurons have their characteristic ways of spiking (that is, the shapes of their spikes are diverse), the chip can also be configured to detect the particular spikes you're looking for. This means that in theory the chip could be programmed to capture only the type of neural activity you're interested in: for example, to look at inhibitory neurons in the cortex and how they control neural information processing.
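To make the idea concrete, here is a minimal sketch of threshold-based spike detection in Python. This is the textbook approach (a detection threshold derived from a robust noise estimate), not Neuralink's actual on-chip algorithm, which is unpublished; the sampling rate, threshold multiplier, and spike positions below are all invented for the demo:

```python
import numpy as np

def detect_spikes(signal, threshold_sd=5.0):
    """Return indices where the signal first crosses below a noise-derived threshold."""
    # Robust noise estimate via the median absolute deviation, a common
    # heuristic in the spike-sorting literature.
    noise_sd = np.median(np.abs(signal)) / 0.6745
    threshold = threshold_sd * noise_sd
    # A "spike" starts where the trace first dips below -threshold.
    crossings = np.flatnonzero(
        (signal[1:] < -threshold) & (signal[:-1] >= -threshold)
    )
    return crossings + 1

# Synthetic demo: one second of Gaussian noise at an assumed 20 kHz sampling
# rate, with three injected negative deflections standing in for spikes.
rng = np.random.default_rng(0)
trace = rng.normal(0.0, 1.0, 20_000)
for t in (2_000, 9_000, 15_000):
    trace[t] -= 12.0  # large negative deflection, like an action potential

spikes = detect_spikes(trace)
print("detected spike indices:", spikes)
```

On-chip versions of this idea add per-channel filtering and matching against characteristic spike shapes, but the core economy is the same: transmit only the event times, not the raw trace.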
These processed spike data are then sent out to smartphones or other external devices through Bluetooth to enable wireless monitoring. Doing this efficiently has been a stumbling block for wireless brain implants: raw neural recordings are too massive for efficient transfer, and automated spike detection and compression of that data is difficult, but a necessary step to allow neural interfaces to finally cut the wire.
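A back-of-the-envelope data-rate estimate shows the size of the problem. The sampling rate and bit depth here are assumed typical values for research-grade recording systems, not published Neuralink specs:

```python
channels = 1024          # roughly the Link's channel count
sample_rate_hz = 20_000  # assumed per-channel sampling rate
bits_per_sample = 10     # assumed ADC resolution

raw_mbps = channels * sample_rate_hz * bits_per_sample / 1e6
ble_mbps = 2.0  # roughly the best-case application throughput of Bluetooth LE

print(f"raw data rate: {raw_mbps:.0f} Mbps")               # ~205 Mbps
print(f"compression needed: ~{raw_mbps / ble_mbps:.0f}x")  # ~102x
```

Under these assumptions, roughly two orders of magnitude separate the raw stream from what Bluetooth can carry, which is why on-implant spike detection (sending event times rather than full waveforms) matters so much.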
The Link has other impressive features. For one, the battery lasts all day, and the device can be charged overnight using inductive charging. From my subsequent conversations with the team, it seems there will be alignment lights to indicate when the charger is lined up with the device. What's more, the Link itself has an internal temperature sensor to monitor for overheating, and it will automatically disconnect if the temperature rises above a certain threshold, a very necessary safety measure so it doesn't overheat the surrounding skull tissue.
From the get-go of the demonstration, there was an undercurrent of tension between what's possible in neuroengineering and what's needed to understand the brain.
Since its founding, Neuralink has been fascinated with electrode counts: boosting the number of channels on its devices and increasing the number of neurons that can be recorded at the same time.
At the event, Musk said that his goal is to increase the number of recorded neurons by a factor of 100, then 1,000, then 10,000.
But here's the thing: as neuroscience comes to better understand the neural code behind our thought processes, it's clear that more electrodes or more stimulated neurons isn't always better. Most neural circuits employ what's called sparse coding, in which only a handful of neurons, stimulated in a way that mimics natural firing, can artificially trigger visual or olfactory sensations. With optogenetics (the technique of stimulating neurons with light), scientists now know that it's possible to incept memories by targeting just a few key neurons in a circuit. Sticking a ton of wires into the brain, which inevitably causes scarring, and zapping hundreds of thousands of neurons isn't necessarily going to help.
Unlike in engineering, the solution to the brain isn't more channels or more implants. Rather, it's deciphering the neural code: knowing what to stimulate, in what order, to produce what behavior. It's perhaps telling that despite claims of neural stimulation, the only data shown at the event were neurons firing in a section of mouse brain (imaged with two-photon microscopy) after the tissue was zapped with an electrode. What information, if any, is really being written into the brain? Without an idea of how neural circuits work and in what sequences, zapping the brain with electricity (no matter how cool the device itself is) is akin to banging on all the keys of a piano at once rather than composing a melody.
Of course, the problem is far larger than Neuralink itself; it's perhaps the next frontier in solving the brain's mysteries. To their credit, the Neuralink team has looked at potential damage to the brain from electrode insertion. A main problem with current electrodes is that the brain eventually recruits non-neuronal cells to form an insulating sheath around the electrode, sealing it off from the neurons it needs to record from. According to some employees I talked to, so far, over at least two months, the scarring around the electrodes is minimal, although in the long run there may be scar tissue buildup at the scalp. This could make the electrode threads difficult to remove, something that still needs to be optimized.
However, two months is only a fraction of what Musk is proposing: a decade-long implant, with hardware that can be updated.
The team may also have an answer there. Rather than removing the entire implant, it could be possible to leave the threads inside the brain and remove only the top cap, the Link device that contains the processing chip. The team is now trying the idea out, while also exploring the possibility of full removal and re-implantation.
As a demonstration of feasibility, the team trotted out three adorable pigs: one without an implant, one with a Link, and one that had a Link implanted and then removed. Gertrude, the pig currently implanted in areas related to her snout, had her neural firings broadcast as a series of electrical crackles as she roamed around her pen, sticking her snout into a variety of food and hay and bumping at her handler.
Pigs came as a surprise. Most reporters, myself included, were expecting non-human primates. However, pigs seem like a good choice. For one, their skulls have a density and thickness similar to human ones. For another, they're smart cookies, meaning they can be trained to walk on a treadmill while the implant records from their motor cortex to predict the movement of each joint. It's feasible that the pigs could be trained on more complicated tasks and behaviors to show that the implant is affecting their movements, preferences, or judgment.
For now, the team doesn't yet have publicly available data showing that targeted stimulation of the pigs' cortex (say, the motor cortex) can drive their muscles into action. (Part of this, I heard, is because of the higher stimulation intensity required, which is still being fine-tuned.)
Although pitched as a prototype, it's clear that the Link remains experimental. The team is working closely with the FDA and was granted a breakthrough device designation in July, which could pave the way for a human trial treating people with paraplegia and tetraplegia. Whether the trials will come by the end of 2020, as Musk promised last year, remains to be seen.
Unlike other brain-machine interface companies, which generally focus on brain disorders, Musk clearly envisions the Link as something that can augment perfectly healthy humans. Given the need for surgical removal of part of your skull, it's hard to say whether that's a convincing sell for the average person, even with Musk's star power and his vision of augmented natural sight, memory playback, or a third artificial layer of the brain that joins us with AI. And because the team showed only a highly condensed view of the pigs' neural firings, rather than actual spike traces, it's difficult to gauge how sensitive the electrodes actually are.
Finally, for now the electrodes can only record from the cortex, the outermost layer of the brain. This leaves deeper brain circuits and their functions, including memory, addiction, emotion, and many types of mental illness, off the table. While the team is confident that the electrodes can be extended in length to reach those deeper regions, that's work for the future.
Neuralink has a long way to go. All that said, having someone with Musk's impact championing a rapidly evolving neurotechnology that could help people is priceless. One of the lasting conversations I had after the broadcast was with someone asking me what it's like to drill through skulls and see a living brain during surgery. I shrugged and said it's just bone and tissue. He replied wistfully that it would still be so cool to be able to see it.
It's easy to forget the wonder that neuroscience brings to people when you've been in it for years or decades. It's easy to roll my eyes at Neuralink's data and think, well, neuroscientists have been listening in on live neurons firing inside animals, and even humans, for over a decade. As skeptical as I remain about how the Link compares to state-of-the-art neural probes developed in academia, I'm impressed by how much a relatively small team has accomplished in just the past year. Neuralink is only getting started, and it's aiming high. To quote Musk: "There's a tremendous amount of work to be done to go from here to a device that is widely available and affordable and reliable."
Image Credit: Neuralink
Microsoft’s New Deepfake Detector Puts Reality to the Test – Singularity Hub
Posted: at 2:26 am
The upcoming US presidential election seems set to be something of a mess, to put it lightly. Covid-19 will likely deter millions from voting in person, and mail-in voting isn't shaping up to be much more promising. This all comes at a time when political tensions are running higher than they have in decades, issues that shouldn't be political (like mask-wearing) have become highly politicized, and Americans are dramatically divided along party lines.
So the last thing we need right now is yet another wrench in the spokes of democracy, in the form of disinformation; we all saw how that played out in 2016, and it wasn't pretty. For the record, disinformation purposely misleads people, while misinformation is simply inaccurate, but without malicious intent. While there's not a ton tech can do to make people feel safe at crowded polling stations or boost the Postal Service's budget, it can help with disinformation, and Microsoft is trying to do so.
On Tuesday the company released two new tools designed to combat disinformation, described in a blog post by VP of Customer Security and Trust Tom Burt and Chief Scientific Officer Eric Horvitz.
The first is Microsoft Video Authenticator, which is made to detect deepfakes. In case you're not familiar with this wicked byproduct of AI progress, "deepfakes" refers to audio or visual files made using artificial intelligence that can manipulate people's voices or likenesses to make it look like they said things they didn't. Editing a video to string together words and form a sentence someone didn't say doesn't count as a deepfake; though there's manipulation involved, you don't need a neural network and you're not generating any original content or footage.
The Authenticator analyzes videos or images and tells users the percentage chance that they've been artificially manipulated. For videos, the tool can even analyze individual frames in real time.
Deepfake videos are made by feeding hundreds of hours of video of someone into a neural network, teaching the network the minutiae of the person's voice, pronunciation, mannerisms, gestures, and so on. It's like when you do an imitation of your annoying coworker from accounting, complete with mimicking the way he makes every sentence sound like a question and the way his eyes widen when he talks about complex spreadsheets. You've spent hours (no, months) in his presence and have his personality quirks down pat. An AI algorithm that produces deepfakes needs to learn those same quirks, and more, about whoever the creator's target is.
Given enough real information and examples, the algorithm can then generate its own fake footage, with deepfake creators using computer graphics and manually tweaking the output to make it as realistic as possible.
The scariest part? To make a deepfake, you don't need a fancy computer or even a ton of software knowledge. There are open-source programs people can access for free online, and as far as finding video footage of famous people goes, well, we've got YouTube to thank for how easy that is.
Microsoft's Video Authenticator can detect the blending boundary of a deepfake, along with subtle fading or greyscale elements that the human eye may not be able to see.
In the blog post, Burt and Horvitz point out that as time goes by, deepfakes are only going to get better and become harder to detect; after all, they're generated by neural networks that continuously learn and improve.
Microsoft's counter-tactic is to come in from the opposite angle: being able to confirm beyond doubt that a video, image, or piece of news is real. (I mean, can McDonald's fries cure baldness? Did a seal slap a kayaker in the face with an octopus? Never has it been so imperative that the world know the truth.)
A tool built into Microsoft Azure, the company's cloud computing service, lets content producers add digital hashes and certificates to their content, and a reader (which can be used as a browser extension) checks the certificates and matches the hashes to indicate the content is authentic.
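The hash-and-certificate idea can be illustrated with a toy signer. Microsoft's actual scheme uses certificate chains attached through Azure; the HMAC key below is a hypothetical stand-in for a publisher's signing credential, and the function names are invented for this sketch.

```python
import hashlib
import hmac

SECRET = b"publisher-signing-key"  # hypothetical stand-in for a real certificate's private key

def publish(content: bytes) -> tuple[bytes, str]:
    """Attach an authenticity tag to content: hash it, then sign the hash."""
    tag = hmac.new(SECRET, hashlib.sha256(content).digest(), "sha256").hexdigest()
    return content, tag

def verify(content: bytes, tag: str) -> bool:
    """Recompute the tag from the content; any edit to the bytes breaks the match."""
    expected = hmac.new(SECRET, hashlib.sha256(content).digest(), "sha256").hexdigest()
    return hmac.compare_digest(expected, tag)

video, tag = publish(b"original newsreel footage")
print(verify(video, tag))                # True: untampered
print(verify(b"doctored footage", tag))  # False: the hash no longer matches
```

The point of the design is that the verifier never has to judge whether content *looks* fake; it only checks that the bytes are the ones the producer signed.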
Finally, Microsoft also launched an interactive "Spot the Deepfake" quiz it developed in collaboration with the University of Washington's Center for an Informed Public, deepfake detection company Sensity, and USA Today. The quiz is intended to help people learn about synthetic media, develop critical media literacy skills, and gain awareness of the impact of synthetic media on democracy.
The impact Microsoft's new tools will have remains to be seen, but hey, we're glad they're trying. And they're not alone; Facebook, Twitter, and YouTube have all taken steps to ban and remove deepfakes from their sites. The AI Foundation's Reality Defender uses synthetic media detection algorithms to identify fake content. There's even a coalition of big tech companies teaming up to try to fight election interference.
One thing is for sure: between a global pandemic, widespread protests and riots, mass unemployment, a hobbled economy, and the disinformation that's remained rife through it all, we're going to need all the help we can get to make it through not just the election, but the rest of the conga-line-of-catastrophes year that is 2020.
Image Credit: Darius Bashar on Unsplash
The world of Artificial… – The American Bazaar
Sophia. Source: https://www.hansonrobotics.com/press/
Humans are the most advanced form of Artificial Intelligence (AI), with an ability to reproduce.
Artificial Intelligence (AI) is no longer a theory but is part of our everyday life. Services like TikTok, Netflix, YouTube, Uber, Google Home Mini, and Amazon Echo are just a few instances of AI in our daily life.
This field of knowledge has always attracted me in strange ways. I have been an avid reader, and I read a variety of non-fiction subjects. I love to watch movies; not sci-fi particularly, but I liked Innerspace, Flubber, Robocop, Terminator, Avatar, Ex Machina, and Chappie.
When I think of Artificial Intelligence, I see it from a lay perspective. I do not have an IT background. I am a researcher and a communicator, and I consider myself a happy person who loves to learn and solve problems through simple and creative ideas. My thoughts on AI may sound different, but I'm happy to discuss them.
Humans are the most advanced form of AI that we may know to exist. My understanding is that the only thing differentiating humans from Artificial Intelligence is the capability to reproduce. While humans have the ability to multiply through the union of male and female and to transfer their abilities through tiny cells, machines lack that function. The transfer of cells to a newborn is not so different from the transfer of data to a machine. It's breathtaking how a tiny cell in a human body holds all the necessary information about not only that particular individual but also their ancestry.
Allow me to give an introduction to the recorded history of AI. Before that, I would like to take a moment to share a recent achievement I feel proud of: I finished a course in AI at Algebra University in Croatia in July. I was able to attend the course through a generous initiative and bursary from Humber College (Toronto). Such initiatives help intellectually curious minds like mine to learn. I would also like to note that the views expressed here are my own understanding and judgment.
What is AI?
AI is a branch of computer science that, like many others, is based on computer programming. What differentiates Artificial Intelligence, however, is its aim: to mimic human behavior. And this is where things become fascinating, as we develop artificial beings.
Origins
I have divided the origins of AI into three phases so that I can explain it better and you don't miss the sequence of events that led to the step-by-step development of AI.
Phase 1
AI is not a recent concept. Scientists were already brainstorming about it, and discussing the thinking capabilities of machines, even before the term Artificial Intelligence was coined.
I would like to start in 1950 with Alan Turing, the British intellectual who helped bring WWII to an end by decoding German messages. In October 1950, Turing released a paper, "Computing Machinery and Intelligence," which can be considered among the first hints of thinking machines. Turing starts the paper thus: "I propose to consider the question, 'Can machines think?'" Turing's work was also the beginning of Natural Language Processing (NLP); 21st-century mortals can relate it to the invention of Apple's Siri. The A.M. Turing Award is considered the Nobel of computing. The life and death of Turing are unusual in their own way. I will leave it at that, but if you are interested in delving deeper, there is an article by The New York Times.
Five years later, in 1955, John McCarthy, an Assistant Professor of Mathematics at Dartmouth College, and his team proposed a research project in which they used the term Artificial Intelligence for the first time.
McCarthy explained the proposal, saying, "The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it." He continued, "An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves."
It started with a few simple logical thoughts that germinated into a whole new branch of computer science over the coming decades. AI can also be related to the concept of Associationism, which is traced back to Aristotle around 300 BC. But discussing that in detail would be outside the scope of this article.
It was in 1958 that we saw the first model replicating the brain's neuron system. This was the year psychologist Frank Rosenblatt developed a program called the Perceptron. Rosenblatt wrote in his article, "Stories about the creation of machines having human qualities have long been a fascinating province in the realm of science fiction. Yet we are now about to witness the birth of such a machine, a machine capable of perceiving, recognizing, and identifying its surroundings without any human training or control."
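Rosenblatt's Perceptron is simple enough to reproduce in a few lines: a weighted sum of inputs, a step activation, and error-driven weight updates. Here is a minimal sketch; the learning rate and epoch count are arbitrary demo choices, and logical OR stands in for any linearly separable task.

```python
import numpy as np

def train_perceptron(X, y, epochs=20, lr=0.1):
    """Rosenblatt-style perceptron: weighted sum, step function, error-driven updates."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0  # step activation
            w += lr * (target - pred) * xi      # nudge weights toward the error
            b += lr * (target - pred)
    return w, b

# Learn logical OR, a linearly separable problem a single-layer perceptron can solve
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 1])
w, b = train_perceptron(X, y)
print([1 if xi @ w + b > 0 else 0 for xi in X])  # [0, 1, 1, 1]
```

The famous limitation, raised by Minsky and Papert, is that a single layer like this cannot learn non-separable functions such as XOR, which is precisely the problem the multi-layer work discussed later addresses.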
A New York Times article published in 1958 introduced the invention to the general public, saying, "The Navy revealed the embryo of an electronic computer today that it expects will be able to walk, talk, see, write, reproduce itself and be conscious of its existence."
My reading of one of Rosenblatt's papers suggests that scientists talked about artificial neurons even in the 1940s. Notice that the reference section of Rosenblatt's 1958 paper lists Warren S. McCulloch and Walter H. Pitts' paper of 1943. If you are interested in more details, I would suggest an article published on Medium.
The first AI conference took place in 1959. By this time, however, the leaders in Artificial Intelligence had already exhausted the computing capabilities of the era. It is, therefore, no surprise that not much was achieved in AI over the next decade.
Thankfully, the IT industry was catching up quickly and preparing the ground for stronger computers. Gordon Moore, the co-founder of Intel, made a few predictions in a 1965 article. Moore predicted huge growth in integrated circuits, more components per chip, and reduced costs. "Integrated circuits will lead to such wonders as home computers, or at least terminals connected to a central computer, automatic controls for automobiles, and personal portable communications equipment," Moore predicted. Although scientists had been toiling to launch the Internet, it was not until the late 1960s that the invention started showing promise. "On October 29, 1969, ARPAnet delivered its first message: a node-to-node communication from one computer to another," notes History.com.
With the Internet in the public domain, computer companies had a reason to accelerate their own development. In 1971, Intel introduced its first chip. It was a huge breakthrough. Intel impressively compared the size and computing abilities of the new hardware, saying, "This revolutionary microprocessor, the size of a little fingernail, delivered the same computing power as the first electronic computer built in 1946, which filled an entire room."
Around the 1970s, more popular programming languages came into use, for instance C and SQL. I mention these two because I remember that when I did my Diploma in Network-Centered Computing in 2002, advanced versions of these languages were still alive and kicking. Britannica has a list of computer programming languages if you care to read more on when the different languages came into being.
These advancements created a perfect amalgamation of resources to trigger the next phase in AI.
Phase 2
In the late 1970s, we see another AI enthusiast coming onto the scene with several research papers on AI. Geoffrey Hinton, a Canadian researcher, had confidence in Rosenblatt's work on the Perceptron. He resolved an inherent problem with Rosenblatt's model, which was made up of a single-layer perceptron. "To be fair to Rosenblatt, he was well aware of the limitations of this approach; he just didn't know how to learn multiple layers of features efficiently," Hinton noted in a 2006 paper.
This multi-layer approach can be referred to as a Deep Neural Network.
Another scientist, Yann LeCun, who studied under Hinton and worked with him, was making strides in AI, especially Deep Learning (DL, explained later in the article) and backpropagation learning. Backpropagation can be thought of as machines learning from their mistakes, or learning by trial and error.
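The multi-layer idea Hinton and LeCun championed can be demonstrated on XOR, a problem a single-layer perceptron provably cannot solve. Below is a minimal from-scratch sketch (sigmoid units, squared error, batch gradient descent); the hidden-layer width, learning rate, and iteration count are arbitrary demo choices, not anything from their papers.

```python
import numpy as np

rng = np.random.default_rng(42)
sigmoid = lambda z: 1 / (1 + np.exp(-z))

# XOR: not linearly separable, so a hidden layer is required
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([[0], [1], [1], [0]], float)

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)   # hidden layer
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)   # output layer

for _ in range(20_000):
    h = sigmoid(X @ W1 + b1)              # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)   # error gradient at the output...
    d_h = d_out @ W2.T * h * (1 - h)      # ...propagated back to the hidden layer
    W2 -= 0.7 * h.T @ d_out; b2 -= 0.7 * d_out.sum(0)
    W1 -= 0.7 * X.T @ d_h;   b1 -= 0.7 * d_h.sum(0)

print((out > 0.5).astype(int).ravel())    # learned XOR truth table
```

The two `d_` lines are backpropagation in miniature: the output error is pushed backward through the weights, which is exactly the "learning from mistakes" the text describes.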
As in Phase 1, the developments of Phase 2 stalled here due to limited computing power and insufficient data. This was around the late 1990s. As the Internet was fairly new, there was not much data available to feed the machines.
Phase 3
In the early 21st century, computer processing speed entered a new level. In 2011, IBM's Watson defeated its human competitors in the game of Jeopardy, and was quite impressive in its performance. On September 30, 2012, Hinton and his team released the object recognition program called AlexNet and tested it on ImageNet. The success rate was above 75 percent, which no such machine had achieved before. This sent ripples across the industry. By 2018, image recognition programs had become 97 percent accurate! In other words, computers were recognizing objects more accurately than humans.
In 2015, Tesla introduced its self-driving AI car. The company boasts of its Autopilot technology on its website, saying, "All new Tesla cars come standard with advanced hardware capable of providing Autopilot features today, and full self-driving capabilities in the future, through software updates designed to improve functionality over time."
Go enthusiasts will also remember the 2016 incident in which Google-owned DeepMind's AlphaGo defeated the human Go world champion Lee Se-dol. This achievement came at least a decade sooner than expected. Go is considered one of the most complex games in human history, and AI could learn it in just three days, to a level that beat a world champion who, I would assume, must have spent decades achieving that proficiency!
The next phase shall be to work on singularity. Singularity can be understood as machines building better machines, all by themselves. In 1993, the scientist Vernor Vinge published an essay in which he wrote, "Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended." Scientists are already working on the concept of technological singularity. If these achievements can be used in a controlled way, they can help several industries, for instance healthcare, automobiles, and oil exploration.
I would also like to add here that Canadian universities are contributing significantly to developments in Artificial Intelligence. Along with Hinton and LeCun, I would like to mention Richard Sutton. Sutton, a professor at the University of Alberta, is of the view that advancements toward the singularity can be expected around 2040. This makes me feel that when AI no longer needs human help, it will be a kind of species in and of itself.
To get to the next phase, however, we will need more computing power to achieve the goals of tomorrow.
Now that we have some background on the genesis of AI and the experts who nourished its advancement over the years, it is time to understand a few key terms. By the way, if you ask me, every scientist behind these developments is a topic in themselves. I have tried to put a good number of researched sources in the article to pique your interest and support your knowledge of AI.
Big Data
With the Internet of Things (IoT), we are saving tons of data every second from every corner of the world. Consider, for instance, Google. It seems to start tracking our intentions as soon as we type the first letter on our keyboard. Now think for a second how much data is generated by all the internet users all over the world. It's already making predictions of our likes, dislikes, actions: everything.
The concept of big data is important because it makes up the memory of Artificial Intelligence. It's like a parent sharing their experience with their child. If the child can learn from that experience, they develop cognitive abilities and venture into making their own judgments and decisions. Similarly, big data is the human experience that is shared with machines, and they develop on that experience. This can be supervised as well as unsupervised learning.
Symbolic Reasoning and Machine Learning
At the base of all these processes are mathematical patterns. I think this is because math is something that is certain and easy to understand for all humans: 2 + 2 will always be 4, unless there is something we haven't figured out in the equation.
Symbolic reasoning is the traditional method of getting work done through machines. According to Pathmind, "to build a symbolic reasoning system, first humans must learn the rules by which two phenomena relate, and then hard-code those relationships into a static program." Symbolic reasoning in AI is also known as Good Old-Fashioned AI (GOFAI).
Machine Learning (ML) refers to the activity in which we feed big data to machines and they identify patterns and understand the data by themselves. The outcomes are not pre-programmed: machines are not coded toward specific results. It's like a human brain, where we are free to develop our own thoughts. A video by ColdFusion explains ML thus: "ML systems analyze vast amounts of data and learn from their past mistakes. The result is an algorithm that completes its task effectively." ML works well with supervised learning.
Here I would like to take a quick tangent for all those creative individuals who need some motivation. I feel that all inventions were born out of creativity. Of course, creativity comes with some basic understanding and knowledge. Out of more than 7 billion brains, somewhere someone is thinking out of the box, verifying their thoughts, and trying to communicate their ideas. Creativity is vital for success. This may also explain why some of the most important inventions took place in a garage (Google and Microsoft). Take, for instance, a small creative tool like a pizza cutter. Someone must have thought of it. Every time I use one, I marvel at how convenient and efficient it is to slice a pizza without disturbing the toppings with that rolling cutter. Always stay creative and avoid preconceived ideas and stereotypes.
Alright, back to the topic!
Deep Learning
Deep Learning (DL) is a subset of ML. "This technology attempts to mimic the activity of neurons in our brain using matrix mathematics," explains ColdFusion. I found an article that describes DL well. With better computers and big data, it is now possible to venture into DL: better computers provide the muscle, and big data provides the experience, for a neural network. Together, they help a machine think and execute tasks just as a human would. I would suggest reading the paper "Deep Learning" by LeCun, Bengio, and Hinton (2015) for a deeper perspective.
The abilities of DL make it a perfect companion for unsupervised learning. As big data is mostly unlabeled, DL processes it to identify patterns and make predictions. This not only saves a lot of time but also generates results that are completely new to a human brain. DL offers another benefit: it can work offline, meaning, for instance, that a self-driving car can make instantaneous decisions while on the road.
What next?
I think that the most important future development will be AI coding AI to perfection, all by itself.
Neural nets designing neural nets have already started; early signs of self-production are in sight. Google has already created programs that can produce their own code. This is called Automatic Machine Learning, or AutoML. Sundar Pichai, CEO of Google and Alphabet, shared the experiment on his blog: "Today, designing neural nets is extremely time intensive, and requires an expertise that limits its use to a smaller community of scientists and engineers. That's why we've created an approach called AutoML, showing that it's possible for neural nets to design neural nets," said Pichai (2017).
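The flavor of AutoML, a program searching over model configurations instead of a human choosing them, can be shown with a deliberately tiny stand-in: selecting a polynomial degree by validation error. Real AutoML systems search over neural architectures; everything below is an illustrative simplification with made-up data.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 3, 60)
y = np.sin(2 * x) + rng.normal(0, 0.1, x.size)  # noisy target function
x_tr, y_tr = x[::2], y[::2]     # train split (even points)
x_va, y_va = x[1::2], y[1::2]   # validation split (odd points)

def val_error(degree):
    """Fit on the train split, score on the held-out validation split."""
    coeffs = np.polyfit(x_tr, y_tr, degree)
    return float(np.mean((np.polyval(coeffs, x_va) - y_va) ** 2))

# The "search": try each candidate configuration, keep the best-scoring one
best = min(range(1, 10), key=val_error)
print("selected degree:", best)
```

The essential loop is the same at any scale: propose a model, evaluate it on held-out data, keep the winner, with no human picking the architecture by hand.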
Full AI capabilities will also trigger several other programs, like fully automated self-driving cars and full-service assistance in sectors like health care and hospitality.
Among the several useful programs of AI, ColdFusion has identified the five most impressive ones in terms of image outputs. These are: AI generating an image from text (Plug and Play Generative Networks: Conditional Iterative Generation of Images in Latent Space), AI reading lip movements from video with 95% accuracy (LipNet), AI creating new images from just a few inputs (Pix2Pix), AI improving the pixels of an image (Google Brain's Pixel Recursive Super Resolution), and AI adding color to black-and-white photos and videos (Let There Be Color). In the future, these technologies could be used for more advanced functions like law enforcement, et cetera.
AI can already generate images of non-existent humans and add sound and body movements to videos of individuals! In the coming years, these tools could be used for gaming, or perhaps for fully capable multi-dimensional assistants like the one we see in the movie Iron Man. Of course, all these developments would require new AI laws to avoid misuse; however, that is a topic for another discussion.
Humans are advanced AI
Artificial Intelligence is getting so good at mimicking humans that it seems humans themselves are some sort of AI. The way Artificial Intelligence learns from data, retains information, and then develops analytical, problem-solving, and judgment capabilities is no different from a parent nurturing their child with experience (data), and the child then remembering that knowledge and using their own judgment to make decisions.
We may want to remember here that there are a lot of things even humans have not figured out with all their technology. A lot of things are still hidden from us in plain sight. For instance, we still don't know all the living species in the Amazon rainforest. Astrology and astronomy are two other fields where, I think, very little is known. Air, water, land, and celestial bodies influence human behavior. All this hints that we as humans are not in total control of ourselves. This feels similar to AI, which so far requires external intervention, such as from humans, to develop.
I think that our past holds answers to a lot of questions that may unravel our future. Take, for example, the Great Pyramid at Giza, Egypt, which we still marvel at for its mathematical accuracy and alignment with the earth's equator as well as with the movements of celestial bodies. By the way, we could compare the measurements only because we have already reached a level where we know the numbers relating to the equator.
Also, think of India's knowledge of astrology. It has many diagrams of planetary movements that are believed to impact human behavior, and these sketches have survived several thousand years. One of India's ancient languages, Vedic Sanskrit, is considered more than 4,000 years old, perhaps one of the oldest in human history; in fact, a question about this language was put to IBM Watson during the 2011 Jeopardy! competition. Understanding the literature in this language might unlock a wealth of information.
I feel that, with the kind of technology we have in AI, we should put some of it to work unearthing the wisdom of our past. If we overlook it, we may well waste resources reinventing the wheel.
Posted in Singularity
Comments Off on The world of Artificial… – The American Bazaar
‘The World To Come’: Review | Reviews – Screen International
Posted at 2:26 am
Dir. Mona Fastvold. US. 2020. 98 mins.
It would be easy to sell The World to Come as the female Brokeback Mountain, but that would be to traduce the richness, singularity and command of Mona Fastvold's beautifully executed and acted drama. The story of a female friendship blossoming into passionate love in a severe 1850s American rural setting, this is an austere but lyrical piece underwritten by a complex grasp of emotional and psychological nuance, and a strikingly assured second feature by the Norwegian-born director, following up her 2014 debut The Sleepwalker (she has also collaborated as a writer on Brady Corbet's features).
Understatement and interiority are the watchwords for a film which uses suggestion and period language very subtly
Scripted with heightened literary cadences by Ron Hansen and Jim Shepard, the film is well crafted in every respect, and marks an acting career high for Katherine Waterston, as well as a fine showcase for the ever more impressive Vanessa Kirby. Fastvold's maturely satisfying piece, picked up internationally by Sony Pictures, should find acclaim on the festival circuit, and upmarket distributors will hopefully find a way to highlight its appeal to discerning audiences on the big screen, where its stark elegance will truly flourish.
The film is framed, with handwritten date captions, as a diary kept in the 1850s in rural upstate New York by Abigail (Waterston), the young wife of farmer Dyer (Casey Affleck). Their relationship lies under the shadow of the recent death of their young daughter, and grief, along with the normal rigours of life in the remote countryside, is keeping them emotionally apart, with the thoughtful Abigail and the gentle but taciturn Dyer unable to communicate their feelings, as seems par for the course in a rural marriage of this period. One day, however, Abigail exchanges glances with a new neighbour, Tallie (Kirby), in a subtle hint of what could be classified as love at first sight. When Tallie pays a neighbourly visit, the two instantly bond, exchanging confidences, with Abigail's reserve gradually conquered by Tallie's candour and ironic knowingness about women's domestic lot, something she is familiar with, being married to the possessive Finney (Christopher Abbott).
Working through the seasons, beginning with a descent into a harshly forbidding winter, Fastvold teases out the shifts in the characters' lives, at first establishing a tone of pensive reserve, then setting a note of heightened peril (mortality, after all, really means something in this environment), notably in an extraordinary blizzard sequence. As the action enters another year, warmth comes into the two women's lives; at last their slow-simmering romance catches fire in tentative declarations followed by a first kiss, and the fond words, “You smell like a biscuit.” There are flashes of overt sexual content, used extremely sparingly and telegraphically towards the end, while Fastvold conveys the meaning of Abigail's passion in subtle touches, like a moment where she lies back on a table, fully dressed, in a quiet swoon of rapture.
Acted with finely calibrated subtlety, the film uses close-ups sparingly but to resonant effect, contrasting the cautiousness with which Abigail reveals herself with the warmer, more openly expressive face of Tallie. Waterston and Kirby pull off something very finely balanced, conveying the enormity of their characters' emotions while speaking a stylised, formal, sometimes playful language: the script will be music to lovers of 19th-century American writing (Hawthorne, Emily Dickinson, Edith Wharton). As the two husbands, Affleck and Abbott contrast sharply, both playing deeply enclosed, solemn men, but of different emotional literacy, one with a capacity for moral generosity, the other shockingly without.
Understatement and interiority are the watchwords for a film which uses suggestion and period language very subtly. Poetry plays a part in the central relationship, but there's a poetic ring to the prose too, both in the dialogue and in Abigail's journal (both screenwriters are novelists, Ron Hansen having explored this period in The Assassination of Jesse James by the Coward Robert Ford, the film of which starred Casey Affleck as Ford). This is also very much a film about the power and necessity of writing, as suggested by a line that compares ink to fire: “a good servant and a hard master”.
Ink on paper is also sometimes suggested by the look of the winter sequences, colours bled to monochrome. Shot on 16mm by André Chemetoff, the film at once captures the look of period photography and establishes a feeling of contemporary realism, with no alienating sense of historical distance. The grainy texture of the images, combined with Jean Vincent Puzos's meticulous design, somewhat recalls the American period films (Meek's Cutoff, First Cow) of Kelly Reichardt, with something of the severe grace of Terence Davies's best work.
There is also a distinctive score by Daniel Blumberg, foregrounding woodwinds, notably in the blizzard sequence, which has a feel of free jazz without being incongruous for the period (improvising legend Peter Brötzmann is featured on bass clarinet). The closing song, featuring singer Josephine Foster, catches the period feel perfectly over manuscript-style end credits.
Production companies: Seachange Media, Killer Films, Hype Films
International sales: Charades, sales@charades.eu
Producers: Casey Affleck, Whitaker Lader, Pamela Koffler, David Hinojosa, Margarethe Baillou
Screenplay: Ron Hansen, Jim Shepard
Based on the story by Jim Shepard
Cinematography: André Chemetoff
Editor: David Jancso
Production design: Jean Vincent Puzos
Music: Daniel Blumberg
Main cast: Katherine Waterston, Vanessa Kirby, Casey Affleck, Christopher Abbott
Posted in Singularity
Comments Off on ‘The World To Come’: Review | Reviews – Screen International