Category Archives: Mind Uploading
Do We Have Minds of Our Own? – The New Yorker
Posted: December 5, 2019 at 1:46 pm
In order to do science, we've had to dismiss the mind. This was, in any case, the bargain that was made in the seventeenth century, when Descartes and Galileo deemed consciousness a subjective phenomenon unfit for empirical study. If the world was to be reducible to physical causation, then all mental experiences (intention, agency, purpose, meaning) must be secondary qualities, inexplicable within the framework of materialism. And so the world was divided in two: mind and matter. This dualistic solution helped to pave the way for the Enlightenment and the technological and scientific advances of the coming centuries. But an uneasiness has always hovered over the bargain, a suspicion that the problem was less solved than shelved. At the beginning of the eighteenth century, Leibniz struggled to accept that perception could be explained through mechanical causes: he proposed that if there were a machine that could produce thought and feeling, and if it were large enough that a person could walk inside of it, as he could walk inside a mill, the observer would find nothing but inert gears and levers. "He would find only pieces working upon one another, but never would he find anything to explain Perception," he wrote.
Today we tend to regard the mind not as a mill but as a computer, but, otherwise, the problem exists in much the same way that Leibniz formulated it three hundred years ago. In 1995, David Chalmers, a shaggy-haired Australian philosopher who has been called a rock star of the field, famously dubbed consciousness "the hard problem," as a way of distinguishing it from comparatively "easy" problems, such as how the brain integrates information, focusses attention, and stores memories. Neuroscientists have made significant progress on the easier problems, using fMRIs and other devices. Engineers, meanwhile, have created impressive simulations of the brain in artificial neural networks, though the abilities of these machines have only made the difference between intelligence and consciousness more stark. Artificial intelligence can now beat us in chess and Go; it can predict the onset of cancer as well as human oncologists and recognize financial fraud more accurately than professional auditors. But, if intelligence and reason can be performed without subjective awareness, then what is responsible for consciousness? Answering this question, Chalmers argued, was not simply a matter of locating a process in the brain that is responsible for producing consciousness or correlated with it. Such a discovery would still fail to explain why such correlations exist or why they lead to one kind of experience rather than another, or to nothing at all.
One line of reductionist thinking insists that the hard problem is not really so hard, or that it is, perhaps, simply unnecessary. In his new book, Rethinking Consciousness: A Scientific Theory of Subjective Experience, the neuroscientist and psychologist Michael Graziano writes that consciousness is simply a mental illusion, a simplified interface that humans evolved as a survival strategy in order to model the processes of the brain. He calls this the attention schema. According to Graziano's theory, the attention schema is an attribute of the brain that allows us to monitor mental activity (tracking where our focus is directed and helping us predict where it might be drawn in the future), much the way that other mental models oversee, for instance, the position of our arms and legs in space. Because the attention schema streamlines the complex noise of calculations and electrochemical signals of our brains into a caricature of mental activity, we falsely believe that our minds are amorphous and nonphysical. The body schema can delude a woman who has lost an arm into thinking that it's still there, and Graziano argues that the mind is like a phantom limb: one is the ghost in the body and the other is the ghost in the head.
I suspect that most people would find this proposition alarming. On the other hand, many of us already, on some level, distrust the reality of our own minds. The recent vogue for mindfulness implies that we are passive observers of an essentially mechanistic existence, and that consciousness can only be summoned fleetingly, through great effort. Plagued by a midday funk, we are often quicker to attribute it to bad gut flora or having consumed gluten than to the theatre of beliefs and ideas.
And what, really, are the alternatives for someone who wants to explain consciousness in strictly physical terms? Another option, perhaps the only other option, is to conclude that mind is one with the material world: that everything, in other words, is conscious. This may sound like New Age bunk, but a version of this concept, called integrated information theory, or I.I.T., is widely considered one of the field's most promising theories in recent years. One of its pioneers, the neuroscientist Christof Koch, has a new book, The Feeling of Life Itself: Why Consciousness Is Widespread but Can't Be Computed, in which he argues that consciousness is not unique to humans but exists throughout the animal kingdom and the insect world, and even at the microphysical level. Koch, an outspoken vegetarian, has long argued that animals share consciousness with humans; this new book extends consciousness further down the chain of being. Central to I.I.T. is the notion that consciousness is not an either/or state but a continuum: some systems, in other words, are more conscious than others. Koch proposes that all sorts of things we have long thought of as inert might have a tiny glow of experience, including honeybees, jellyfish, and cerebral organoids grown from stem cells. Even atoms and quarks may be forms of "enminded" matter.
Another term for this is panpsychism: the belief that consciousness is ubiquitous in nature. In the final chapters of the book, Koch commits himself to this philosophy, claiming his place in a lineage of thinkers, including Leibniz, William James, and Alfred North Whitehead, who similarly believed that matter and soul were one substance. This solution avoids the ungainliness of dualism: panpsychism, Koch argues, elegantly eliminates the need to explain how the mental emerges out of the physical, and vice versa. Both coexist. One might feel that aesthetic considerations, such as elegance, do not necessarily make for good science; more concerning, perhaps, is the fact that Koch, at times, appears motivated by something even more elemental: a longing to re-enchant the world. In the book's last chapter, he confesses to finding spiritual sustenance in the possibility that humans are not the lone form of consciousness in an otherwise dead cosmos. "I now know that I live in a universe in which the inner light of experience is far, far more widespread than assumed within standard Western canon," he writes. Koch admits that when he speaks publicly on these ideas, he often gets you've-got-to-be-kidding stares.
It is an irony of materialist theories that such efforts to sidestep ghostly or supernatural accounts of the mind often veer into surreal, metaphysical territory. Graziano, in a similarly transcendent passage in his book, proposes that the attention-schema theory allows for the possibility of uploading one's mind to a computer and living, digitally, forever; in the future, brain scans will digitally simulate the individual patterns and synapses of a person's brain, which Graziano believes will amount to subjective awareness. Like Koch, Graziano, when entertaining such seemingly fanciful ideas, shifts into a mode that oddly mixes lyricism and technical rigor. "The mind is a trillion-stranded sculpture made of information, constantly changing and beautifully complicated," he writes, and nothing in it is so mysterious that it can't in principle be copied to a different information-processing device, like a file copied from one computer to another.
The strangeness of all this does not mean that such speculations are invalid, or that they undermine the theories themselves. While reading Koch and Graziano, I recalled that the philosopher Eric Schwitzgebel, in 2013, coined the term "crazyism" to describe the postulate that any theory of consciousness, even if correct, will inevitably strike us as completely insane.
If the current science of consciousness frequently strikes us as counterintuitive, if not outright crazy, it's because even the most promising theories often fail to account for how we actually experience our interior lives. The result, Tim Parks writes in his new book, Out of My Head: On the Trail of Consciousness, is that "we regularly find ourselves signing up to explanations of reality that seem a million miles from our experience." In 2015, Parks, a British novelist and essayist, participated in a project funded by the German Federal Cultural Foundation which put writers in conversation with scientists. The initiative led Parks to meet with a number of neuroscientists and observe their research on consciousness. Parks finds that most of the reigning theories upend his intuitive understanding of his own mind. Truth, these experts tell him, lies not in our fallible senses but in the bewildering decrees of science. Our minds, after all, are unreliable gauges of the objective world.
Parks takes a different approach: mental experience lies at the core of Out of My Head, not only as subject but as method. For Parks, our subjective understanding of our minds is trustworthy, at least to a degree; he admonishes the reader to weigh every scientific theory against their knowledge of what it's really like being alive. Throughout his account of his travels, he dramatizes his inner life: he notices how time seems to slow down at certain moments and accelerate at others, and how the world disappears entirely when he practices meditation; he describes his fears about his girlfriend's health and his doubts about whether he can write the book that we are reading.
Most of the neuroscientists whom Parks meets believe that consciousness can be reduced to neuronal activity, but Parks begins to doubt this consensus view. As a novelist, attentive to the nuances of language, he notices that these theories rely a great deal on metaphor: the literature of consciousness often refers to the brain as a computer, chemical activity as information, and neuronal firing as computation. Parks finds it puzzling that our brains are made up of things (computers) that we ourselves only recently invented. He asks one neuroscientist how electrical impulses amount to information, and she insists that this is just figurative language, understood as such by everyone in the field. But Parks is unconvinced: these metaphors entail certain theoretical assumptions; that, for instance, consciousness is produced by, or is dependent upon, the brain, like software running on hardware. How are these metaphors coloring the parameters of the debate, and what other hypotheses do they prevent us from considering?
Parks's skepticism stems in part from his friendship with an Italian neuroscientist named Riccardo Manzotti, with whom he has been having, as he puts it, "one of the most intense and extended conversations of my life." Manzotti, who has become famous for appearing in panels and lecture halls with his favorite prop, an apple, counts himself among the externalists, a group of thinkers that includes David Chalmers and the English philosopher and neuroscientist Andy Clark. The externalists believe that consciousness does not exist solely in the brain or in the nervous system but depends, to various degrees, on objects outside the body, such as an apple. According to Manzotti's version of externalism, spread-mind theory, which Parks is rather taken with, consciousness resides in the interaction between the body of the perceiver and what that perceiver is perceiving: when we look at an apple, we do not merely experience a representation of the apple inside our mind; we are, in some sense, identical with the apple. As Parks puts it, "Simply, the world is what you see. That is conscious experience." Like Koch's panpsychism, spread-mind theory attempts to recuperate the centrality of consciousness within the restrictions of materialism. Manzotti contends that we got off to a bad start, scientifically, back in the seventeenth century, when all mental phenomena were relegated to the subjective realm. This introduced the false dichotomy of subject and object and imagined humans as the sole perceiving agents in a universe of inert matter.
Manzotti's brand of externalism is still a minority position in the world of consciousness studies. But there is a faction of contemporary thinkers who go even further, arguing that, if we wish to truly understand the mind, materialism must be discarded altogether. The philosopher Thomas Nagel has proposed that the mind is not an inexplicable accident of evolution but a basic aspect of nature. Such theories are bolstered, in part, by quantum physics, which has shown that perception does in some cases appear to have real causal power. Particles have no properties independent of how you measure them; in other words, they require a conscious observer. The cognitive scientist Donald Hoffman believes that these experimental observations prove that consciousness is fundamental to reality. In his recent book The Case Against Reality: Why Evolution Hid the Truth from Our Eyes, he argues that we must restart science on an entirely different footing, beginning with the brute fact that our minds exist, and determining, from there, what we can recover from evolutionary theory, quantum physics, and the rest. Theories such as Hoffman's amount to a return of idealism (the notion that physical reality cannot be strictly separated from the mind), a philosophy that has been out of fashion since the rise of analytic philosophy, in the early twentieth century. But if idealism keeps resurfacing in Western thought, it may be because we find Descartes's and Galileo's original dismissal of the mind deeply unsatisfying. Consciousness, after all, is the sole apparatus that connects us to the external world, the only way we know anything about what we have agreed to call reality.
A few years before Parks embarked on his neuro-philosophical tour, he and his wife divorced, and many of his friends insisted that he was having a midlife crisis. This led him to doubt the reality of his own intuitions. "It seems to me that these various life events might have predisposed me to be interested in a theory of consciousness and perception that tends to give credit to the senses, or rather to experience," he writes.
By the end of the book, it's difficult to see how spread mind offers a more intuitive understanding of reality than other theories do. In fact, Parks himself frequently struggles to accept the implications of Manzotti's ideas, particularly the notion that there is no objective world uncolored by consciousness. But perhaps the virtue of a book like Parks's is that it raises a meta-question that often goes unacknowledged in these debates: What leads us, as conscious agents, to prefer certain theories over others? Just as Parks was drawn to spread mind for personal reasons, he invites us to consider the human motivations that undergird consensus views. Does the mind-as-computer metaphor appeal to us because it allows for the possibility of mind-uploading, fulfilling an ancient, religious desire for transcendence and eternal life? Is the turn toward panpsychism a kind of neo-Romanticism, born of our yearning to re-enchant the world that materialism has rendered mute? If nothing else, these new and sometimes baffling theories of consciousness suggest that science, so long as it is performed by human subjects, will bear the fingerprints of our needs, our longings, and our hopes, false or otherwise.
Abandoning Earth: Personhood and the Techno-Fiction of Transhumanism – Patheos
Posted: at 1:46 pm
by Jens Zimmermann, Project Director, Human Flourishing; Canada Research Professor for Interpretation, Religion, and Culture at Trinity Western University; Visiting Professor for Philosophy, Literature, and Theology at Regent College; Visiting Fellow of the British Academy at the University of Oxford; Research Associate at the Center for Theology and Modern European Thought in Oxford.
One of the most important contemporary issues is our relation to technology. To be sure, technology is nothing new but has always been integral to human evolution; never before, however, has technology suffused every area of life or shaped human self-understanding to the extent it does today. Consequently, debates about the benefits and possible drawbacks of technology currently dominate all crucial, formative arenas of human existence: work, education, healthcare, social development, and even religion. Critical voices are not lacking in these discussions, but, on the whole, we increasingly place our hopes for society's future in technological enhancements. Transhumanism, with its pursuit of a humanly engineered evolution that will eventually leave the body behind by uploading our digitized brains to computing platforms (a vision that includes merging human with artificial machine intelligence), is merely the extreme edge of a techno-reasoning that increasingly forms our collective social imaginary.
How is one to assess this development? I suggest that the most effective assessment of techno-reasoning is to probe the range of its imagination. After all, how we perceive the world, others, and ourselves is principally a matter of the imagination. As the well-known Canadian literary critic Northrop Frye put it in The Educated Imagination:
we use our imagination all the time: it comes into all our conversation and practical life: it even produces dreams when we are asleep. Consequently we only have the choice between a badly trained imagination and a well trained one, whether we ever read a poem or not.[1]
Frye's reference to poetry indicates his view that literature best exemplifies the language of the imagination, of how we perceive the world in all its semantic complexity: our use of metaphors and choice of words in everyday speech reveals the vision of society, and indeed of reality, that underlies our thoughts and actions. Equally important, "the fundamental job of the imagination in ordinary life, then, is to produce, out of the society we have to live in, a society we want to live in."[2] We need fiction to envision reality differently. We often use the word fiction to refer to what is untrue or false, but the word actually means creative invention and describes our capacity for understanding and shaping reality meaningfully through narrative. Hence reimagining society differently depends in turn on the sources that train our imagination to produce narratives for our self-understanding.
What should concern us is that Transhumanism's imagination runs only along engineering and computational lines. Transhumanists like to call themselves critical rationalists,[3] but the fact is that this critical aspect is limited to a techno-reasoning that produces a narrative of techno-fiction. When we examine the current techno-reasoning of transhumanism, we find a strongly diminished view of human identity, one that reduces consciousness to the activity of neuronal networks that can supposedly be detached from the body and transferred to a computing platform.[4]
It is generally known that transhumanism denigrates the human body as a rather primitive biological form of existence that requires perfection through nano- and computing technologies. Ultimately, as Ray Kurzweil argued in his book How to Create a Mind: The Secret of Human Thought Revealed (2012), the brain is a complex biological machine in which human ideas, feelings, and intentions are ultimately tied to neuronal functions. Kurzweil imagines that the imminent completion of mapping this biological machine anatomically will allow us to digitize its functions and thus transpose human thinking into computational format, permitting in turn the uploading of one's mind (of consciousness, self, or personality) to cloud data storage. This transhumanist vision betrays a breathtaking ignorance of human cognition and of the dependence of human consciousness on biology. Aside from being technologically unfeasible, the computational model of the brain and its supposed detachability from the body is flatly contradicted by recent neuroscience and its insistence on embodied cognition.
For example, the well-known neuroscientist Antonio Damasio breaks with the traditional cognitivist view of human beings as rational minds inhabiting insentient bodies.[5] In his book Self Comes to Mind (2010), Damasio reintroduces the body as essential for structuring the brain, albeit still on the basis of a representational view of cognition: "Because of this curious arrangement, the representation of the world external to the body can come into the brain only via the body itself, namely via its surface. The body and the surrounding environment interact with each other, and the changes caused in the body by that interaction are mapped in the brain. It is certainly true that the mind learns of the world outside via the brain, but it is equally true that the brain can be informed only via the body."[6] You may not consider this concession very great, but eight years later Damasio rejects the Cartesian mind-body dualism behind traditional neuroscience, arguing that a new, biologically integrated position is now required.[7]
This new position leaves behind the computational model of the mind, rejecting the "dried-up mathematical description of the activity of the neurons" because it "disengaged neurons from the thermodynamics of life."[8] The new brain science acknowledges, according to Damasio, that the body as organism, for example through our nervous and immune systems, possesses a kind of perception conveyed through feelings, which are registered in turn as complex mental experiences that help us navigate life. Damasio concludes that "neural and non-neural structures and processes are not just contiguous [i.e., adjacent, sharing a common border] but continuous partners, interactively. They are not aloof entities, signaling each other like chips in a cell phone. In plain talk, brains and bodies are in the same mind-enabling soup."[9] On the basis of this new insight (new to brain scientists, at any rate), Damasio rejects the reductive, but sweepingly common, notion in the worlds of artificial intelligence, biology, and even neuroscience that natural organisms would somehow be reducible to algorithms.[10]
Damasio's new insights from neuroscience are a welcome antidote to the severely stunted imagination of the transhumanists. Even so, neuroscience in general, and transhumanism in particular, suffer from a striking lack of philosophical reflection on the historical origins of the naturalist and functionalist view of organic life that still forms the imaginative framework of cognitive science. Natural scientists, along with all those who pursue their research into human perception in the investigative mode of the natural sciences, still have a hard time admitting that metaphysics is always at play when we imagine what it means to be human. How many scientists (and indeed philosophers) are fully conscious of the historical developments that made a purely materialist view of reality possible?
The philosopher Hans Jonas offers a superb philosophical analysis of this development, and of its effects on the study of human nature, in The Phenomenon of Life: Toward a Philosophical Biology (1966). He describes how the duality of mind and spirit of the ancient world was reified into a mind-body dualism by Descartes's division of reality into two spheres: timeless mental ideas on the one hand, and spatio-temporal mechanisms of material stuff on the other. Leaving the sphere of mental ideas to religion and philosophy, Descartes reduced nature (including animals and the human body) to an inert machine running on functional, mathematical principles, wholly explorable through quantifiable data. The legacy of Cartesian dualism was the modern conception of nature without soul or spirit.[11] Encouraged by the enormous success of the scientific method, it was only a matter of time until a secularist science, eager to do away with Descartes's God, also claimed the mental sphere for its mechanistic understanding of reality.
This mechanistic monism was further aided by Darwin's theory of evolution. Naturalistic evolution exploded Cartesian dualism, with its separate mental realm, by integrating human beings into a general developmental process. Jonas argues that, even though evolution raised once again the problem of how the transcendent freedom and intentionality of consciousness could arise from such a process, the functionalist bias of naturalism closed the door to any arguments that might have led out of the reductionist dead end of materialist monism. Early evolutionary theory dogmatically adhered to a mechanistic view of causality that tried to explain organic life on the analogy of complex machines, declaring consciousness an epiphenomenon, a random side effect of an essentially material process. This view, Jonas argues, inverts how organic life forms, and in particular human beings, actually function. Human thought and action originate from an intentional center and exercise volitional freedom in striving to accomplish goals. While we are certainly able to automate strategies for accomplishing goals, this ability does not warrant reducing our humanity to the workings of a complex machine.
Jonas's own work has helped inspire profound changes in evolutionary theory, including the growing conviction in evolutionary psychology that an embodied intentionality or consciousness is intrinsic to organic life itself. The phenomenon of organic life is impossible to describe, let alone understand, without recognizing that a minimal form of intentionality, individuation, and indeed freedom is evident even in the most primitive living organisms' striving to survive.
Neither transhumanism, however, nor the AI research that fuels transhumanists' hopes of melding human and machine intelligence has followed this trend in evolutionary biology. Instead, transhumanists and AI researchers remain beholden to the basic premise of cybernetics: that human life and thought boil down to mechanisms controlled by the exchange of information, and are therefore amenable to transposition into algorithms, so that the essence of human thought and emotion can be digitized and replicated on computational platforms.
This brief historical sketch shows that transhumanism's abandonment of the earth, its leaving behind of the body, constitutes not a neutral fact based on scientific progress but a historically conditioned choice. This choice takes one particular aspect of human perception, namely our ability to abstract material from the rich flow of experience in order to objectify and quantify it for better understanding, and then re-imagines all of reality in these terms. This reductionist ontology ignores the organic and especially the personal aspects characteristic of human life.
It is worth reiterating that the materialist, functionalist premise of transhumanism (and much AI research) is neither empirically convincing nor in any way morally neutral. From a historical point of view, it is actually astonishing how beholden the field of techno-science still is to scientistic attitudes originating in the scientific revolution and the European Enlightenment.
For example, the well-known AI researcher Marvin Minsky (d. 2016) equated belief in consciousness with the kind of religious mumbo jumbo science is supposed to combat.[13] For Minsky, "there is no such thing as consciousness, there is no such thing as understanding."[14] Those who believe in such silly superstitions ignorantly hold to "this religious idea that there is magic understanding: there is a magic substance that is responsible for understanding and for consciousness, and that there is a deep secret here."[15] For Minsky, the problem of consciousness and understanding with regard to AI simply doesn't exist, because he has a thoroughly mechanical, functionalist view of the human mind. For this reason, he looks to Freud as an important figure because "he's the first one to consider that the mind is a big complicated kludge of different types of machinery which are specialized for different functions."[16] While most of psychology and the other sciences have moved on from Freud's naïve mechanical view of the psyche, transhumanism and much popular opinion have not.
One cannot blame transhumanists for wanting to improve human life, but a sober historical-philosophical analysis of transhumanism exposes it as delusive and naïve. The whole idea of engineering a post-human existence by abandoning the organic body is based on an untenable materialist metaphysics. As Hans Jonas perceptively put it, materialistic biology (its armory recently strengthened by cybernetics) is the attempt to understand life by eliminating what actually enables this attempt in the first place: the authentic nature of consciousness and purpose.[17] Only because they suppress the basic structure of organic life and reduce consciousness to an epiphenomenon of materialist functions can transhumanists propose their futuristic vision. Only because they have already reduced life to a machine, however complex, can they imagine a post-humanist future of immortality through technology. The transhumanist imagination concerning our humanity is deceived by "the strange proclivity of human reason to interpret human functions by the categories of the artifacts created to replace them, and to interpret artifacts by the categories of the human mind that created them."[18]
Given that transhumanism is driven by this historically conditioned reductionist view of human life, I am less worried about the question of whether transhumanism functions as Ersatzreligion, though the growing number of Christian transhumanists is somewhat alarming. Their belief in technology as a providential means for procuring god-likeness and immortality makes one wonder about the efficacy of the incarnation. Why did God bother to become a human being rather than a cyborg? Only an imagination already hooked on techno-fiction could suggest that the divine transformation of biological matter is inferior to, or even akin to, a man-made metamorphosis through technology.
From a traditional Christian perspective at least, techno-fiction that deems the body to be optional ranks among gnostic heresies. As the German theologian Dietrich Bonhoeffer explained, from an incarnational point of view, we don't merely have bodies, we are our bodies, and are thus rooted in the earth. Abandoning the earth, he declared, therefore means also losing touch with our fellow human beings and with God, who created us as embodied souls. Bonhoeffer concluded that "the man who would leave the earth, who would depart from the present distress, loses the power which still holds him by eternal, mysterious forces. The earth remains our mother, just as God remains our Father, and our mother will only lay in the Father's arms him who remains true to her."[19]
However, what is of greater concern than grouping transhumanism among gnostic heresies is that the movement perpetuates the pervasive techno-reasoning in our culture by glorifying the functionalist image of human existence that continues to enthral the public social imaginary by means of social media and AI research. Transhumanism is just one example, perhaps the most glamorous one, of our current collective cultural delusion that the human mind, human language, and human relations boil down to functions that computers will eventually master in far better ways.
We would do well to listen to the critical voices of those familiar with the computing industry, like Jaron Lanier. Lanier, credited with inventing virtual reality, exposes the false and dangerous presuppositions of techno-fictions. For example, he debunks the delusion that AI has anything to do with computers gaining intelligence, let alone sentience. AI, he reminds us, is nothing but "a story we tell about our code."[20] This story, he confesses, was originally invented by tech engineers to procure funding from government agencies. AI, in short, does not exist, if one implies that machines actually think or feel with even the lowest form of consciousness we know from organic life.
Lanier warns that current techno-fiction and our use of technology are deeply dehumanizing. Social media apps are designed to manipulate users into addiction in order to exploit their consumer habits. Moreover, the whole gamut of computing technology erodes our self-understanding of what it means to be truly human. Lanier worries that if you design a society to suppress belief in consciousness and experience, to reject any exceptional nature to personhood, then maybe people can become like machines. The greatest danger, he concludes, is the loss of what sets us apart from all other entities: the loss of our personhood. His warning echoes the prophetic voices of other critics, like the former software coder Steve Talbot, or the late philosopher Hubert Dreyfus, who also worried that instead of adapting technology to human intelligence, we slowly conform human consciousness to the functional logic of machines.
These thinkers show us that one does not have to be a Luddite or religious zealot to reject transhumanism or entertain a critical attitude towards the naïve embrace of current technologies. What is at stake in the discussion about technology and transhumanism is nothing less than our true humanity. Now, it is certainly the case, in my view, that the more holistic approach to human existence offered by religions, and in particular the Christian teaching that God became a human being, provides a better anthropological framework for approaching technology than secularist or naturalist approaches; however, the time may be ripe for all those concerned about losing our true humanity to come together in exposing the dehumanizing misconceptions put forward by transhumanists, no matter how much these are presented in the radiant, Luciferian promises of divinity. Sicut eritis deus . . . .
[1] 134-135.
[2] 140.
[3] Max More, "The Philosophy of Transhumanism," in The Transhumanist Reader (Oxford: Wiley-Blackwell, 2013), 1-17, at 6.
[4] Martine Rothblatt, "Mind Is Deeper than Matter," in The Transhumanist Reader, 317-326.
[5] Economist John Grey's endorsement of Damasio's recent book The Strange Order of Things (2018).
[6] The Self Comes to Mind, 97.
[7] The Strange Order of Things, 240.
[8] Ibid.
[9] Ibid.
[10] Ibid., 200. Damasio recognizes that the worlds of artificial intelligence, biology, and even neuroscience are inebriated with this notion: "It is acceptable to say, without qualification, that organisms are algorithms and that bodies and brains are algorithms. This is part of an alleged singularity enabled by the fact that we can write algorithms artificially and connect them with the natural variety, and mix them, so to speak. In this telling, the singularity is not just near: it is here." For Damasio, these common notions are not scientifically sound because they discount the essential role of the biological, organic substrate from which feelings arise through "the multidimensional and interactive imaging of our life operations with their chemical and visceral components" (201).
[11] Jonas, Phenomenon of Life, 140.
[12] Das Prinzip Leben, 219.
[13] "Why Freud Was the First Good AI Theorist," in The Transhumanist Reader, 169.
[14] Ibid., 172.
[15] Ibid., 170.
[16] Ibid., 169.
[17] Das Prinzip Leben, 230.
[18] Das Prinzip Leben, 199.
[19] Dietrich Bonhoeffer Works English, 10, 244-45.
[20] Ten Arguments for Deleting Your Social Media Accounts Right Now.
4 Ways to Amplify Your Job Search at the End of the Year – BioSpace
As many people prepare for the holiday season, they stop to reflect on their goals for the year. Would you like a new job in the new year? Landing a new job is often desirable for many life sciences professionals who are interested in making more money, working on innovative projects and/or finding a better company culture. At the same time, it can be difficult to focus on a job search because of a variety of competing priorities, including working to reach your end-of-year goals and planning for the holidays. Here are four ways to amplify your job search at the end of the year!
Tweak your social media profiles
Most professionals create their social media profiles once and rarely update or edit them. Often the information you see on social media for individuals is outdated and incomplete. While you're going through the job search and interviewing process, the majority of recruiters and hiring managers will search for you on social media to find additional information. You want to make sure that you're putting your best foot forward based on your current situation. Tweaking and reviewing all of your profiles can help you see the image you're projecting and ensure that it lines up with the positions you're targeting.
Update your resume
The majority of job seekers do not have an updated resume that they feel confident in. They tend to think that content should be added, highlighted, and/or removed. Submitting yourself for positions with a subpar resume can do more harm than good. Prior to applying, read over the job posting you're interested in, and then immediately read your resume. Do you sound like a good fit for the role? Are the words and phrases in your resume speaking directly to what the job posting is asking for? If you don't see a clear correlation with relevant contributions and accomplishments, then updating your resume is vital.
Create profiles on different job sites
There are countless job sites online. Some are specific to a certain industry, while others are more general and feature positions in all industries and at all levels. Many of these job boards will allow you to create a profile that is hidden from public search. These profiles are exclusively for recruiters and hiring managers who use that job site. Uploading your resume to the profiles on specific job boards is another way to get more visibility and speed up your search. The key is to find the job sites that are most relevant to the positions you are looking for. You don't want to waste time trying to show up everywhere.
Network at professional end-of-year and holiday events
Despite all of the technology we have today, networking in person is still highly effective. In fact, meeting the right person and cultivating a relationship can help you bypass the traditional application process. Attending professional conferences, meetings and association events can be instrumental in positioning you as a serious candidate for a job. Towards the end of the year, many groups have holiday events or other celebrations. These gatherings can be great for meeting new people who could potentially help with your job search.
If you're trying to find a new job, you don't have to put the process on hold during the holidays. These are a few ways to amplify your job search that can help you make significant progress. What could you do towards the end of the year to stay active in your job search?
Porschia Parker is a Certified Coach, Professional Resume Writer, and Founder of Fly High Coaching. (https://www.fly-highcoaching.com) She empowers ambitious professionals and motivated executives to add $10K on average to their salaries.
Match your broadband usage with the perfect broadband type – Flux Magazine
words Al Woods
Choosing a broadband package is a great responsibility. There are so many variables to consider. While most people will focus on price, data allowance limits, connection speeds and uptimes, very few take the time to match their package to their usage style.
Matching your package to your usage style ensures that you get the best value for your money. You will neither pay for more than you need nor end up with a slow connection that irritates you every time you fire up your computer to browse or try to watch videos on your smart TV.
The first step in getting the match right is understanding what type of user you are.
Very few people can describe their internet use. This makes it hard for them to categorize themselves and estimate the right package for their home. Here are the main categories users fall into.
A light user will spend most of their online time reading news, corresponding on email, catching up on social media and occasionally watching videos.
If you are not heavy on the videos, you can make do with a data-capped package, which is cheaper. However, you will need an unlimited plan if you stream online videos more often or frequently host other people, as this could easily max out a capped plan.
As for speeds, you can make do with 5 Mbps or less.
Moderate users are either single- or multiple-occupancy homes that channel their entertainment over the internet in addition to social media, general browsing and email correspondence.
Medium users will be more comfortable with a plan that gives them between 15 and 100 Mbps depending on how many people use the connections simultaneously.
Heavy users download and upload huge chunks of data. You could be working from home, uploading videos to YouTube, watching entire TV series on Netflix and so forth.
Complete families with both parents and a couple of kids are also classified as heavy-user homes. You will find it impossible to keep within the limits of a capped plan, while a lower bandwidth will render your network painfully slow when different devices connect simultaneously.
In this case, you will need a fibre connection that gives you speeds of 40/10 Mbps and above. The faster the connection, the better your streaming quality and the less time your household will spend uploading or downloading huge chunks of data.
This is an emergent breed of users whose needs are almost identical to those of small businesses. You belong here if you work from home at something more bandwidth-intensive than writing. If you run a YouTube, Twitch or similar channel, are a designer who uploads bulky concepts, a professional or hobbyist gamer, or a person who often deploys peer-to-peer software, then you will need a very fast connection.
Go for fibre plans that give you up to 200 Mbps download and at least 100 Mbps upload speeds. You cannot survive with anything slower than that.
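The tiers above can be summarized in a small lookup table. Here is a minimal, illustrative Python sketch using the download/upload figures quoted in the text; the category names and the upload figures for the light and moderate tiers are my own placeholder assumptions, not numbers from the article:

```python
# Illustrative mapping of the article's user categories to minimum
# recommended speeds in Mbps. The 40/10 and 200/100 figures come from
# the text; the light/moderate upload values are placeholder guesses.
RECOMMENDED_SPEEDS = {
    "light":    {"down": 5,   "up": 1},    # browsing, email, occasional video
    "moderate": {"down": 15,  "up": 5},    # streamed entertainment (up to 100 down for busy homes)
    "heavy":    {"down": 40,  "up": 10},   # whole-family streaming, big uploads/downloads
    "power":    {"down": 200, "up": 100},  # creators, gamers, peer-to-peer users
}

def recommend(category: str) -> str:
    """Return a human-readable speed recommendation for a usage category."""
    speeds = RECOMMENDED_SPEEDS.get(category.lower())
    if speeds is None:
        raise ValueError(f"unknown category: {category!r}")
    return f"at least {speeds['down']}/{speeds['up']} Mbps (down/up)"
```

A household would pick the category that best matches its habits, e.g. `recommend("heavy")`, and then shop for packages at or above that figure.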
Once you understand your user classification, it will be easier to find the perfect broadband type for you. The good news is that you rarely have to worry about connection types like fibre, VDSL and ADSL, since most providers will reveal their delivery technology through the possible speeds of each package. The only time you should care is when you are a gamer worried about ping times and TTLs.
Ensure that you get the right download and upload bandwidth. Don't forget to audit the company's uptimes and customer service to guarantee your peace of mind.
Hazards of sharenting – Daily Times
Uploading snaps of our kids on social media is commonplace in our society. We frequently find parents "sharenting" photos as well as personal data about their children on Facebook without making any effort to learn the possible consequences of such sharing for the lives of their loved ones. There is an ongoing debate as to how parents can balance their right to share with their child's interest in privacy.
Pediatricians have now started to consider how sharenting affects childhood well-being and family life. It is commonly observed that, in extreme forms, parental sharing of their children's information has led to a phenomenon labeled "digital kidnapping," whereby children's photos and details have been appropriated by others who promote such kids as their own children. Research has shown that millions of innocent photographs end up on pedophilic and hebephilic websites.
This piece is exclusively aimed at highlighting the hazards of reckless sharenting for our kids' safety and security.
Sharenting is a term that denotes the overuse of social media by parents to share content based on their children, such as baby pictures or details of their children's activities.
The Wall Street Journal, writing on the perils attached to sharenting, quoted psychiatry professor Elias Aboujaoude, who said that sharenting can turn parenthood into a competition for attention. The practice has also been linked to online predators, who could use the information for child grooming. Children's self-esteem can also be affected by negative online reactions, and they may have trouble forming a self-identity separate from the online persona created by their parents.
Inter alia, there are myriad perils attached to sharenting which every parent must be cautious of before sharing the images and personal information of their loved ones on the internet. Some of these are highlighted hereunder.
Bullying: sharing your child's photos on social media can expose your children to bullying. You must be concerned about how others will react to the material that you share about your kids. Whether or not your child cares about old photos and stories about them on social media, others may be able to use that information to make fun of, insult, and even bully your child as he or she grows older.
What's to stop a peer from sharing a photo that your child finds embarrassing with his or her own networks? What if that share catches on? It doesn't take much for a photo to go from an inside family joke to gossip fodder for an entire school. Moreover, while posting embarrassing photos of your children on Facebook might seem like harmless fun, it can expose them to bullying and intimidation. If someone distributes these photos to online forums and websites as a joke, it can cause a lot of emotional trauma for your child. In some severe cases, teens have committed suicide after threats and bullying online.
Digital kidnapping: Digital kidnapping is a type of identity theft. It occurs when someone takes photos of a child from social media and repurposes them with new names and identities, often claiming the child as their own. There have been numerous examples of this in recent years, including a 2015 incident in which a stranger took a photo of an 18-month-old baby from a mommy blogger's Facebook page and posted it on her own Facebook profile, acting as if he were her son.
Your child's photos can also be kidnapped for "baby role-playing." If you're unfamiliar with baby role-playing, search for #BabyRP, #AdoptionRP, and #KidRP on social media sites. Baby role-players create accounts on social media sites to post stolen photos along with captions that give false details about the child in the photos. Sometimes the stranger impersonates the child by responding to comments as the child or from the child's point of view. These comments can be disturbing, though not all are malicious. Baby role-playing accounts appear to be created by people who want to be a parent or a child. They are, however, another example of how easily you can lose control over your child's identity when you publish information about them online.
Sets a bad example: Young children should be taught from an early age about the dangers of revealing too much information to strangers. Smartphones and other electronic devices make it easy to post photos online, but it is also important that children understand the dangers of uploading the wrong kind of pictures. If you upload lots of photos of your children to Facebook, they may draw the conclusion that there is nothing wrong with sharing images online. For example, many parents post photos of their children in the bath or in their swimwear. Unless children are taught boundaries about sharing personal photos such as these, it can have a negative effect on them later in life.
Real-life stalking: Sexual predators can use social media sites to physically stalk their intended victims. When users post information about themselves and their activities, stalkers can use that information to learn about their targets' interests and schedules. Predators can then use this information to locate and stalk their victims in the real world, not just online. According to an article in the Journal of Adolescent Health quoted by Enough Is Enough, 65 percent of online sex offenders used social media sites to collect home and school information about their victims. Social posts that specify a user's precise location make it particularly easy for online predators to locate and stalk victims.
Sextortion: Sexual predators don't have to find users in real life to victimize them. If a predator hacks into a victim's account and finds intimate or compromising pictures or other sensitive information, he can use that material to blackmail the user. For example, in 2010, the FBI arrested a man on charges of hacking and blackmailing over 200 victims. After stealing private pictures to use as blackmail material, he coerced his victims into providing him with inappropriate pictures and videos of themselves. This practice of stealing material to blackmail victims into providing the predator with such material is known as "sextortion."
Impact on a child's future: It's difficult, if not impossible, to control information once it's posted online. You can't prevent anyone from taking a screenshot of your post and disseminating it beyond your reach. Your deleted posts, while apparently gone from your social media profile, may still live on in internet archive websites and on the social media servers themselves. With that in mind, you should consider how your photos and stories may impact your child when he's much older, even an adult.
The reality is that the data shared by parents could be surfaced by Google search algorithms for years to come. And we don't know what our children's goals might be when they get older.
To cap it all, sharenting should be a matter of great deliberation for parents, for it may jeopardize the safety and security of their kids. They must ask their children what they're comfortable with and take some precautions. They also must pay close attention to the privacy settings on their social media pages. They should choose photos carefully and watermark the ones that they post publicly. They should ask friends and family to refrain from posting photos or videos of their children. They should also start involving their children in deciding what is appropriate to share with others. Such conversations can help ward off bad feelings in the future and are useful for preparing children for living in a digital age.
The writer is a legal practitioner-cum-columnist based in Quetta.
Three New Books on Human Consciousness to Blow Your Mind – The Wire
Posted: November 17, 2019 at 2:01 pm
At the moment, you're reading these words and, presumably, thinking about what the words and sentences mean. Or perhaps your mind has wandered, and you're thinking about dinner, or looking forward to bingeing the latest season of The Good Place. But you're definitely experiencing something.
How is that possible? Every part of you, including your brain, is made of atoms, and each atom is as lifeless as the next. Your atoms certainly don't know or feel or experience anything, and yet you, a conglomeration of such atoms, have a rich mental life in which a parade of experiences unfolds one after another.
The puzzle of consciousness has, of course, occupied the greatest minds for millennia. The philosopher David Chalmers has called the central mystery the "hard problem" of consciousness. Why, he asks, does looking at a red apple produce the experience of seeing red? And more generally: why do certain arrangements of matter experience anything?
Anyone who has followed the recent debates over the nature of consciousness will have been struck by the sheer variety of explanations on offer. Many prominent neuroscientists, cognitive scientists, philosophers, and physicists have put forward solutions to the puzzle, all of them wildly different from, and frequently contradicting, each other.
Let's begin with what might be called the standard view: the brain is extraordinarily complex, containing some 100 billion neurons, each of them capable of forming connections with (and exchanging signals with) 10,000 other neuronal units. Though the details are far from clear, it is presumed that neuronal activity gives rise to the mind. This is what Francis Crick famously called the astonishing hypothesis (in his 1994 book of the same name): "You, your joys and your sorrows, your memories and ambitions, your sense of personal identity and free will, are in fact no more than the behavior of a vast assembly of nerve cells and their associated molecules."
Much else is open to debate. Does the brain function like a computer, by processing information, and if so, does it mean that machines could one day be conscious? Depends on who you ask. How widespread is consciousness within the animal kingdom, and when did it evolve in our own lineage? Depends on who you ask.
There isn't even unanimous agreement that the hard problem is the stumper Chalmers makes it out to be; cognitive scientist Daniel Dennett and philosopher Patricia Churchland, for example, have argued that the neuronal ebb and flow inside a healthy human brain simply is consciousness. (Churchland offers an analogy from physics: though it took centuries to understand light, we now realise that light simply is an oscillating electromagnetic field.) Contrast that with philosopher Colin McGinn's claim that humans might not have the cognitive wherewithal to comprehend their own minds; the puzzle of consciousness, he believes, is here to stay.
OK, let's dive in. Christof Koch is one of today's leading thinkers on the problem of consciousness. He was a long-time collaborator of Francis Crick, taught for many years at the California Institute of Technology, and is now president and chief scientist at the Allen Institute for Brain Science in Seattle. In his new book, The Feeling of Life Itself, Koch advocates for integrated information theory, or IIT, developed by Giulio Tononi, a neuroscientist at the University of Wisconsin-Madison. IIT doesn't ask how matter gives rise to consciousness; rather, it takes as a given certain attributes of consciousness, and asks what kinds of physical systems would be needed to support them. And it's quantitative: the theory purports to measure the amount of consciousness in a physical system (denoted by the Greek letter phi, Φ) by linking specific physical states to specific conscious experiences.
There's some degree of experimental support for this: Tononi has devised a sort of "consciousness meter" that attempts to measure Φ in humans. (Or does it? Koch confesses that it actually measures something called the perturbational complexity index, which is related to traditional electroencephalograms, which track electrical activity in the brain, and which Koch says is correlated with Φ.) The device gives a low reading for those who are in a deep sleep, or under anesthetic, and a higher value for those who are wide awake.
More sophisticated versions of this device may be of great value, Koch suggests (in dealing with patients with various kinds of brain damage, for example), by distinguishing those in minimally conscious states from those in so-called vegetative states, or in a coma.
While this is laudable, it's not immediately clear that it addresses the hard problem. As Koch is well aware, a critic would naturally ask why this integrated information should feel like anything; couldn't you have the same flow of information but without consciousness? His answer is that the axioms at the heart of IIT fully delimit any experience so that nothing is left out; any system that obeys the axioms of IIT, he says, must be conscious. I didn't find this fully convincing, and I suspect Chalmers wouldn't, either. But at least it attempts to study consciousness quantitatively, which is a start.
And what of intelligent machines? A computer, at least anything that functions like today's digital computers, could, at best, mimic consciousness; it wouldn't actually be conscious, Koch argues, because it would lack the brain's intrinsic causal powers; he argues that the "brain as hardware, mind as software" analogy has been wildly oversold.
And then we come to the whopper: Koch argues that everything is a little bit conscious, a view known to philosophers as panpsychism. This, in Koch's view, gets rid of the puzzle of how consciousness emerges from non-conscious neurons (or atoms); if he's right, consciousness was there all along.
As Koch is aware, panpsychism by itself leaves many questions unanswered. Why, for example, is this arrangement of matter more conscious than that arrangement of matter? But he believes that panpsychism and IIT, taken together, are the most promising path toward an answer.
If Koch's book had me occasionally wearing my skeptical-emoji face, Donald D. Hoffman's latest, The Case Against Reality, had me doing the head-exploding emoji. Hoffman, a cognitive scientist at the University of California, Irvine, starts with perception rather than consciousness, but he's clearly hunting the same prey as Koch. The main thing he wants you to know about your perceptions is that they're wrong; they're not "veridical," in his preferred language.
It's not that everything is an illusion; he believes there is such a thing as objective reality, but he says our perceptions can't lead us toward that reality. His argument is rooted in a combination of Darwinian natural selection and game theory known as the interface theory of perception.
He offers an analogy with a computer screen: we can move an icon shaped like a file folder into the trash, but we don't really believe the two-dimensional pixel arrays actually contain files or trash. Instead, they're conveniences; they're representations that are useful in achieving goals. Similarly, we perceive the world around us through the interface of our senses. (This is not a brand-new idea; Kant suggested something similar almost 250 years ago, as did Plato in his allegory of the cave some two millennia earlier.)
But surely our perceptions map in a mostly true way onto the real world, right? No, Hoffman says: he argues that Darwinian evolution would favour an organism with less-accurate perceptions over one that perceived the world as it really is. He calls this wildly counter-intuitive proposition, on which the rest of the book rests, the "fitness-beats-truth" (FBT) theorem; he says it can be proven through computer simulations.
And he goes further, arguing that neither objects nor the spacetime that they appear to inhabit is real. The same goes for neurons, brains, and bodies: "Our bodies are messages about fitness that are coded as icons in a format specific to our species," Hoffman writes. When you perceive yourself sitting inside space and enduring through time, you're actually seeing yourself as an icon inside your own data structure. No wonder he frequently refers to The Matrix. "This book offers you the red pill," he writes.
I have a number of problems with this. Let's start with the most obvious objection: if nothing is real, why not go play on the freeway? After all, imaginary vehicles can't hurt imaginary-you. Hoffman's reply is that he takes his perceptions seriously but not literally. But this, I think, is having it both ways: if you admit that speeding cars can harm you, that's pretty much admitting they're real.
And what about spacetime? He says that eminent physicists admit that space, time, and objects are not fundamental; they're rubbing their chins red trying to divine what might replace them.
I think he's at most half right. Yes, many of today's leading physicists believe that space and time aren't fundamental, but so what? We've known for some 200 years that matter is made of atoms (and the ancient Greeks had guessed as much), but that doesn't make matter less real. It just means that, depending on the problem at hand, sometimes describing the world in terms of atoms is helpful, and sometimes it's not. But it would be bizarre to discount cars and tables and people just because we know they're made of smaller stuff. And if space and time turn out to be some sort of approximation to a more fundamental entity, that will be a fascinating step forward for physics, but even that won't render the stuff of everyday life less real.
OK, so if space and time and objects aren't fundamental, what is? Toward the end of the book, Hoffman lays out the case that conscious minds are the fundamental entities that the rest of reality is made from; it's minds all the way down. He calls this the conscious agent thesis. Objects don't exist, he says, unless they're perceived by minds.
This sounds a bit like Koch's panpsychism, but Hoffman says it's different; he calls his philosophical outlook "conscious realism." Unlike old-school panpsychism, conscious realism offers hope for "a mathematical theory of conscious experiences, conscious agents, their networks, and their dynamics." From such a theory, he hopes, all of physics, including quantum theory and general relativity, will eventually be derived.
I suspect it may be a long wait. I also think it's a bit of a stretch to imagine that physicists, having given up on space and time, are ready to subscribe to this minds-first world-view. Physicist Sean Carroll, for example, has made it clear that he doesn't see this as a fruitful approach. On the other hand, physicist Lee Smolin, in his most recent book, puts forward what he calls his "causal theory of views," in which the universe is described in terms of how it appears from the point of view of each individual event; he hopes to derive space and time and the rest of physics from these views. Maybe some lucky convergence of thought will illuminate a link between Smolin's views and Hoffman's conscious agents. I'm not holding my breath, but it's not the craziest idea out there.
Meanwhile, Hoffman hints at other payoffs for those who venture down the rabbit hole with him, like a new view of God, for example. (This did not come as a complete shock, given that one of the book's endorsements is from Deepak Chopra.) The research program that Hoffman envisions can foster what might be called "a scientific theology, in which mathematically precise theories of God can be evolved, sharpened, and tested with scientific experiments."
As an alternative to the red pill, I picked up Michael S.A. Graziano's Rethinking Consciousness. His approach is different from that of both Koch and Hoffman, and at least superficially more in line with Dennett and Churchland. Graziano, a psychologist and neuroscientist at Princeton, spent much of his career developing something called the attention schema theory, which attempts to show how consciousness arises from attention and from the brain's ability to keep track of what it's attending to. Attention schema theory doesn't pretend to be a solution to Chalmers's hard problem, "but it explains why people might mistakenly think that there is a hard problem to begin with," Graziano writes.
The idea is that the brains of certain creatures are able to model the world around them, an ability that Graziano believes evolved around 350 million years ago. This is a purely physical phenomenon, corresponding to specific brain activity that can be fully explained (at least in principle) at the level of neurons and neural connections. But the brain also performs a kind of meta-processing of this information, keeping tabs on what the lower levels are doing, not in detail but in broad brush-strokes.
As Graziano sees it, this meta-level tally of what our brains are paying attention to simply is consciousness; it explains why looking at a red apple also feels like having such an experience. This extra layer of processing, the attention schema, "seems like such a small addition," Graziano writes, "and yet only then does the system have the requisite information to lay claim to a subjective experience."
There is no ghost in the machine, but attention schema theory offers an explanation for why we imagine that there is.
Such a system need not be biological. Unlike Koch, Graziano believes that conscious machines ought to be possible, and, more provocatively, that uploading of minds onto machines may one day be a reality as well. (He figures we'll achieve uploading before we achieve interstellar travel; many scientists, I suspect, believe the reverse.)
There's more, of course; Graziano spells out the many ways in which truly intelligent artificial intelligence will change our lives (mostly for the better, he believes). And there's a great deal about evolution, and the evolution of brains in particular. But the real achievement here (assuming we buy into it) is that it takes the wind out of Chalmers's hard problem by reducing it to a kind of meta-problem. (Graziano points out that Chalmers himself has considered this approach.)
Attention schema theory doesn't live in a vacuum; Graziano notes that it has some elements in common with Tononi's integrated information theory, and Dennett's own preferred model, known as the global workspace theory. These should all be investigated in parallel, Graziano suggests, in the hope that our final theory of consciousness will draw on each of them.
I have no idea if or when a consensus will emerge. But it is one of the compelling scientific problems of our time, and one that demands continuing inquiry. Crick put it eloquently in the last sentence of The Astonishing Hypothesis, a quarter century ago: "We must hammer away until we have forged a clear and valid picture not only of this vast universe in which we live but also of our very selves."
Dan Falk (@danfalk) is a science journalist based in Toronto. His books include The Science of Shakespeare and In Search of Time.
This article was originally published on Undark. Read the original article.
Three New Books on Human Consciousness to Blow Your Mind - The Wire
Friends reunion special with cast ‘in the works at HBO Max’ – Metro.co.uk
Posted: November 14, 2019 at 2:45 pm
A Friends reboot is officially back on the table with cast members Jennifer Aniston, Lisa Kudrow, Courteney Cox, David Schwimmer, Matt LeBlanc and Matthew Perry.
According to reports, the creators and stars of the hit comedy are in talks to reunite on HBO Max.
Yep, Christmas has well and truly come early!
"Talks are currently underway for an unscripted reunion special," an insider told The Hollywood Reporter. However, let's not all get carried away just yet.
The publication also made it clear that a deal is far from done and agreements with cast and creatives still need to be hammered out.
After breaking the internet by uploading a picture of herself and her co-stars back together, Jennifer got us all excited again when she told Ellen DeGeneres that the crew is working on "something."
While it seemed to be a joke at first, and Jennifer completely ruled out a full reboot, she later admitted that on the show's 25th anniversary, there might be something in the making.
Jennifer said: "We would love for there to be something, but we don't know what that something is. So we're just trying."
"We're working on something," she then teased.
However, in September, Friends co-creator and executive producer Marta Kauffman ruled out any chance of a reunion or reboot entirely.
"We will not be doing a reunion show, we will not be doing a reboot," Kauffman said.
"The show was about that time in life when friends are your family." On the subject of a reboot, Kauffman added: "It's not going to beat what we did."
Looks like she might have changed her mind, though.
Metro.co.uk has contacted Warner Bros and HBO for comment.
Friends is available to stream on Netflix in the UK.
Is the International Marketplace Network the best way to expand? – Tamebay
Last week, Real, CDiscount, EPrice and EMAG announced that their "one account to rule them all" model is now ready for users. Whilst it hasn't been a secret that they were planning to work together, they have pressed the magic button which allows online sellers to connect all their accounts through the International Marketplace Network.
Today, Jesse Wragg, Managing Director at eCommeleon, takes a look at the pros and cons of the International Marketplace Network. On the one hand, if you already sell on Real, CDiscount, EPrice or EMAG, it enables you to easily list on the other three marketplaces, extending your reach; but equally, it's unlikely to be as effective as translating and localising your listings for each marketplace on an individual basis.
If you want to really do justice in each country, you'll want to tweak your feed for each marketplace. (eCommeleon enables you to upload your product data and can then help you map, convert and optimise it for each of these channels and validate it against each channel's rules and requirements.)
My interpretation of the IMN is that it's the cross-channel equivalent of Amazon's Build International Listings tool. That is, sign up in one locale/channel, press the right button and pow; you're listing and selling across multiple marketplaces in multiple countries.
In theory, it sounds great. The practice remains to be seen, particularly as Amazon has plenty of issues between its own marketplaces (attributes getting confused, EANs not working properly, etc.), so it will be interesting to see how this cross-platform technology works when having to deal with very different systems.
In general, the IMN seems to be a bit of a no-brainer, as long as you don't mind exporting the CSV with your orders and uploading it into your system. What they don't say is how this can be managed alternatively. For example, Real and Cdiscount both integrate well with Linnworks for order management, ePrice and Cdiscount with ChannelAdvisor. Does that mean that you could now view your orders from Real in ChannelAdvisor via the Cdiscount connection? If so, that's something to write home about. If not, sellers using a system such as this will just need to make sure that they've got a decent buffer on their inventory in order to avoid over-selling.
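The buffer idea above can be sketched in a few lines. This is a minimal illustration, not any marketplace's actual API: the function name and numbers are invented, and the point is simply that each channel should be told about slightly less stock than you really hold, so orders arriving between manual CSV syncs cannot push you into overselling.

```python
# Hypothetical sketch: advertise actual stock minus a safety buffer on each
# channel, so orders that land between manual CSV order syncs can't oversell.
def advertised_stock(actual_stock: int, buffer: int) -> int:
    """Quantity to publish to a marketplace: on-hand stock less a buffer."""
    return max(actual_stock - buffer, 0)

# With 10 units on hand and a buffer of 3, each channel sees 7 for sale,
# so up to 3 stray orders can arrive before the next sync without overselling.
print(advertised_stock(10, 3))  # 7
print(advertised_stock(2, 3))   # 0 -- never advertise negative stock
```

The trade-off is that a larger buffer reduces oversell risk but also hides sellable stock, so the right size depends on how often you import the order CSVs.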
Machine translation is getting better; there's no argument about that. But when you look at the trouble Amazon has converting listings internationally via the Build International Listings tool, it's easy to see that machines can't yet be trusted to automate the process of converting data from one marketplace to another, even between marketplaces under the same hat. Whilst it's nice to see that Real, Cdiscount, EMAG and ePrice are working together to take on Amazon, sellers should still remember that these are different beasts unto themselves, and whilst the technical integration is now alive and well, we're working with multiple channels which each have their own requirements.
Let's imagine that Cdiscount allows you to use 250 characters in titles in your category, but Real only allows 200. Sure, the system can machine-translate from French to German, but what will your German title look like on Real after it's been machine-translated and had 50 characters sliced off the end?
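The failure mode described above is easy to demonstrate. In this sketch the 250/200 limits are the article's own invented example, and the function names are hypothetical; real feed tools would need smarter handling, but naive truncation looks like this:

```python
# Hypothetical per-marketplace title limits (the article's example figures).
TITLE_LIMITS = {"cdiscount": 250, "real": 200}

def fit_title(translated_title: str, marketplace: str) -> str:
    """Naively truncate a (machine-)translated title to a channel's limit."""
    return translated_title[:TITLE_LIMITS[marketplace]]

# A title that fills Cdiscount's 250 characters in French may still run
# ~250 characters after machine translation to German; blind truncation
# to Real's 200-character limit then cuts it off, possibly mid-word.
german_title = "Hochwertige " + "x" * 240   # stand-in for a long translation
print(len(fit_title(german_title, "real")))  # 200
```

A safer variant would at least cut on the last word boundary before the limit, but nothing automated can decide which 50 characters of meaning your German buyers can live without.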
If you've got an account with any of the four International Marketplace Network partner marketplaces, you're now able to sell on any of the rest, quickly, simply and at the click of a button. It's a quick and easy way to expand your international reach, and ideal if you want to test the waters on a new marketplace in a new country.
But if you are serious about listing on all four channels, you might as well do it properly. Just as you tailor your feed for each international Amazon or eBay marketplace, why wouldn't you spend the time to do the same for Real, CDiscount, EPrice and EMAG?
What Do Teenagers Learn Online Today? That Identity Is a Work in Progress – The New York Times
On April 26, 2018, a 15-year-old named Antonio Garza sat in front of her mirror in her bedroom in Austin, Tex., turned on her phone and started talking to herself as she did her makeup. She had long, curly brown hair that she wished were straight, almost no nails at all and a red, irritated upper lip (she didn't really know why her lip was red; it just was), and she displayed all this for the camera.
"I'm dying finally!" Antonio said, and for the next 20 minutes she oscillated like the teenager that she is, a kind of excited electron, bouncing between one shell of identity and the next: pimply high school freshman, femme glam diva, vulnerable diarist, camera-ready ham. Throughout, she flicked across her face dozens of makeup brushes and powders, creating cut-creases on her eyelids and contours on her cheekbones and nose, then applying fake eyelashes fluffy as kittens. As she did all this, she monologued about how she thought she was going to do a makeup channel but decided she didn't want to do a makeup channel. "I need to not be annoying, and I'm not doing a good job at that," Antonio said, cheerfully. "I also look ugly, and I'm really depressed!"
Forty-five minutes later, tender, glittering and shellacked in cosmetics, she was ready for school.
A year earlier, in eighth grade, Antonio spent most of her time playing oboe and Fortnite, hanging out alone in her room and making mistakes with her eyebrows. This sounds as if it would be lame, and it was, in fact, lame. But being a depressed kid alone in your room is not what it used to be. It's one thing to be depressed and listen to the Smiths in your oversize Champion sweatshirt and write in your journal and then hide that journal away and come out and pretend you have your act together. It's another thing to be a depressed kid alone in your room in your oversize Champion sweatshirt and then make some videos that toggle back and forth, back and forth, between "I'm not thriving at all right now" and "Actually I just slayed"; between using Final Cut Pro to distort your face into the shape of a waterlogged Mr. Potato Head and creating a military-grade defense shield of foundation and bronzer. That is to say, to be a gender-bending kid alone in your room making videos that capture exactly what it feels like to be a teenager right now, the whole multipolar mess of humanity deep inside your own brain, and then post those videos to YouTube, even though what you've just expressed to your smartphone you probably would not say to your mother in the kitchen and definitely would not say to your classmates, all of whom (you believe, wrongly) think you're really weird.
Later that afternoon, her hair now up in a ponytail, Antonio returned to her room and apologized to her 30 YouTube subscribers for being such a failure because she'd forgotten to vlog at school. Or really, she said, she hadn't forgotten to vlog at school. She just hadn't wanted to vlog at school, because in every situation in her seven-hour school day, she felt uncomfortable. Also, she said, staring at the floor, she looked terrible. Which maybe you couldn't tell in this lighting. But if you could tell, and you were thinking of writing "You're ugly" in the comments below
Antonio paused and stared into the camera, open as a wound. "Don't," she said. "Please."
She titled the video "Ninth Grade Makeup Transformation" and uploaded it to YouTube.
Then nothing happened. Or at least nothing happened for a few days. Then, right before finals week of her freshman year, Antonio's phone started blowing up with notifications. On Sunday, May 13, 2018, Antonio reached 10,000 YouTube subscribers. By May 22, she had more than 100,000, and by May 31, 300,000. Kids started sending her screenshots of her face on their YouTube recommendation pages, saying: "What the heck! I had no idea! Congrats!"
By then, thank God, school was out for summer vacation. Antonio hung out in her room making even more videos: digitally widening her brow and narrowing her chin until she looked like the alien E.T.; installing over her dark irises a pair of light blue contacts; slowing down her voice to lower it three octaves; taking mortifying trips to Target, where everybody stared at her; sharing "how to get un-ugly" tutorials that include so many steps you might want to die; saying: "I'm funny, I'm not funny at all. So that's funny. It's actually I'm funny though." By the time school reopened for Antonio's sophomore year, she had a million YouTube subscribers, and it was basically impossible for her to attend. Because you can't be a YouTuber and go to a big public school. You just can't. Emma Chamberlain, who is 18 and who some say created the jumpy editing style Antonio now does better, dropped out. The Dolan twins, who are now 19 and who, after five years of filming every aspect of their lives, including wisdom-teeth extractions, just announced that they were going to stop uploading weekly for the sake of their mental health, dropped out before her. As Antonio explained to me, to high schoolers YouTubers are the equivalent of mainstream celebrities. Each morning when she walked on campus, the same kids who ignored her for years suddenly wanted to take selfies with her in the courtyard. And the same kids she'd known forever, kids who had been in her gym class or played with her in middle-school band, were like, "Do you remember me? You probably don't remember me."
And Antonio was like, "Um, I've known you since we were 6."
One day this past spring, I entered my home office to find that my 14-year-old had written ANTONIO GARZA on my whiteboard in small red letters. I texted her at school to ask why.
"I thought it would be funny for u to watch one of her videos," she wrote back. "Cuz ur old." (She added, "jk, I love you." Nice.)
My daughter's entertainment philosophy (not incorrect) was, and still is, that TV is primarily made by old people for old people and thus is irrelevant to her. YouTube, on the other hand, at least the part of it she sees, is made by teenagers for teenagers. And not just that: it's made by teenagers talking to themselves in private, broadcasting their boredom-laced secret diaries, promising (and at moments, delivering) remote intimacy. This made watching those videos low-key compelling, good background for texting your friends or doing homework (if you were not inclined just to cue up a four-hour video of a person studying, which is now a thing). My daughter also found Antonio "relatable," a huge buzzword among teenage YouTubers these days.
"Is this what your brain feels like?" I asked.
She said yes. Then she added, more knowingly than I might have liked: "You don't think in one constant line of thought." She meant both me specifically and humans in general. "My brain could never."
Meme accounts have taken over Instagram. According to Instagram, meme content is shared seven times more than nonmeme content on the service. The pages have grown so widespread that Instagram is hiring a meme-account liaison to work with such accounts. In 2016, Samir Mezrahi founded @KaleSalad, a tremendously popular meme account with more than 3.4 million followers. If you're looking to start an account of your own, he has a few tips.
1. No interest is too niche.
You'll be more successful if you make memes that are related to your interests. That could be your high school, your high school teams, cars, travel; you can focus on a movie, or show, or character like Shrek.
2. Straightforward handles are best.
You can have your interest and add "memes" at the end of the handle; try to keep it as clear as possible. A lot of accounts have the words zar, plug, memes: You can add variations if your original name isn't available. Kale Salad was a meme in itself back when I got it. Now it's an old meme and kind of a dead meme, but it still works.
3. Fake tweets and viral images are eternal.
A good genre of meme is when you tap into whatever the viral image of the moment is and make it your thing. One genre of meme uses the tweet format. That's a caption on the top and one or two images below; sometimes there's text on the images. People digest that format well.
4. Be shameless.
After you post your first meme, you have to promote it. Every time you post, you should share it to your Instagram Stories. You need to tell people to follow you and check out your account. You can use some hashtags to start. Try to get anyone in your existing network to check out your account. And be consistent. A lot of people don't post enough; you can post a few times a day.
5. Go private.
Going private is a fun growth hack. You set your account to private because D.M.s are a popular thing on Instagram. Friends D.M. each other memes, and when your account is set to private, the person you send it to can't see them, so it forces them to follow you to see the meme.
Later in the summer, this same child introduced me to JoJo Siwa, another 16-year-old denizen of the internet. JoJo achieved some fame as a star on the reality-TV show Dance Moms but became a megastar online, just as herself. Or really an avatar of herself. Her persona is deliriously, dementedly plastic and flat, a 6-year-old beauty-pageant princess rolled out in a pasta machine. While Antonio circumscribes vast swaths of the human experience, pinballing around her six-dimensional universe (guileless/sophisticated; earnest/glib; confident/troubled; boyish/girlish; accomplished/bumbling; thriving/morose), JoJo seems to emerge from a deep youth-internet mind-set that we're all Bitmoji-ed, filtered and served up on tiny screens anyway, so why even bother pretending you're warmblooded and whole? The result is horrifying to just about everybody who is not a child or a teenager, including Justin Bieber (who in the not-so-distant past committed his own teenage culture crimes). Last December, JoJo posted on Instagram an image of herself with a new car: a white BMW with a custom paint job that featured rainbows, sparkles, hearts and a gigantic airbrushed photo of her face. Bieber commented, "Burn it." (Later, he apologized, and JoJo asked him to play at her 16th-birthday party; he didn't.)
My 17-year-old daughter, in contrast, watched Jeffree Star, a sort of adult male goth drag version of JoJo who joined YouTube in 2006 and currently sells $100 million worth of his cosmetics annually, and whose own identity is such a deliberately plastic work of self-creation that he has had injections, surgery and other procedures on his lips, forehead, nose, teeth and hair. She even watched videos of a fellow YouTube megacelebrity, Shane Dawson, interviewing Jeffree Star while made up and dressed up as Jeffree Star, the two of them in platinum blond wigs and matching $1,850 Gucci tracksuits.
I found this vertigo-inducing unless I viewed it as campy, Escher-inspired video art. But to my daughter it was a relief: Dawson and Star offered their fans the knowledge that you could put on just about any persona you wanted with little or no risk, as with a costume change and some cosmetics-removing wipes, you could simply make that persona go away. You could test out look after look after look, possibility after possibility. One minute your inner diva; the next, your inner nerd. You could treat the whole exercise as ironic. Or experiment with radical sincerity, knowing you could retreat into the safe waters of cynicism if the embarrassment grew too hot. My daughter found this comforting, even counterintuitively stabilizing. "Every morning I go in the bathroom at school, and everybody is putting on makeup, and everybody is changing out of the clothes they wore for their mom," she told me. "Like literally every morning. And everyone is saying, 'This is the ugliest I've ever been,' or, 'I look like I'm in a punk-rock band,' or, 'I look like I'm an e-boy.' It's just like cognitive dissonance."
It is also just like the human experience. Especially at age 16.
Antonio always knew a few things about herself. She knew she wanted to have "long, straight hair," and she knew she wanted to "wear so much makeup and just be like, essentially, perfect." She didn't want to look like her mom (she loves her mom, and her mom is beautiful, but no, absolutely not, she told me). But neither did she want to be like her dad or her older brother. When she was younger, she was into princess dresses, art and eye shadow, and she knew she expressed herself differently from "really, like, everybody I went to school with." She didn't cohere in one tidy prefab identity package. And she didn't really care.
Antonio also always knew she wanted to be a YouTuber and do makeup tutorials. But when she actually started doing them, she also realized she wanted to do makeup tutorials and not talk about makeup. By 2018, this was a viable, scrutable social enterprise, in the same way that it became a viable enterprise for Jon Stewart to make comedy shows that were no longer focused on comedy. "People will look at this and say, 'Oh, this is beauty content,' like, the purpose of this content is for people to learn beauty tips," Kevin Allocca, YouTube's head of culture and trends, told me. But the truth is, once people who grew up watching makeup tutorials began creating their own makeup tutorials, the form started to morph. You peel back a layer, and the beauty stuff becomes this convention that allows you to have another set of interactions and discussions.
In response to her sudden fame, Antonio gave up a little "on looks, on life" and started wearing almost no makeup to school except fake eyelashes (and yes, she knows some people think it's a big deal to wear lashes to school, but they take her only three minutes and they make her look more presentable, so). Her grades also fell into the dumpster. At that point, her mother relented and let her enroll in online school.
Holed up in her room for even more hours a day, Antonio kept making videos. In them, her mother occasionally reaches an arm into Antonio's bedroom and deposits a mug of tea on her bureau. Or Antonio orders Taco Bell. Or Antonio rotates through Wagner's entire Ring Cycle of emotions while dying her hair pink. The dramas are grand, banal, earnest and unapologetically boring. Antonio then edits them with frantic cuts, subtitles introducing bald sincerity ("I am not O.K."), X-ray filters, horror-genre music and satirical product placement that is also product placement. "I'm like, 'Oh, this is what happens when someone is raised on the internet,'" one 23-year-old friend said to me when I asked him to explain, "like this person grew up thinking in GIFs."
The result is sui generis: relatable and a relief, yes, to the young, but also panic-inducing. We all know we're disjointed schizoid selves deep inside, yet aren't we supposed to leave these selves in the bathroom mirror?
Antonio insists that she's the same in person as she is online, and it's true. On a Thursday morning this fall, when she opened the front door to her very decorated-for-Halloween house, she was dressed in an oversize T-shirt that covered her shorts, and we sat and talked in her bedroom while she did her makeup.
Antonio was feeling a little overwhelmed. She was behind in physics, and behind in calculus, and she knew she needed to grind so she did not fail her classes and get kicked out of online school. But at the same time she needed to post to Instagram. This was a professional, not emotional, need: That day she was releasing merch, big black cotton sweatshirts with a single small orange pumpkin embroidered on the front. But posting on Instagram was stressful, like really so stressful, because if a post didn't receive that many likes, which for Antonio was between zero and 600,000, then Antonio felt bad about it and wanted to delete it. But deleting a post also looked bad, because it revealed that you cared a pathetic amount about what other people thought. (If a post receives between 650,000 and 900,000 likes, Antonio will keep it up; she can always tell within 10 minutes, honestly five minutes, how a post is going to perform.)
So after Antonio finished her look (pink half cut-creases around her eyes, glinting contours on her cheeks, nose and upper lip) and misted her face with setting spray, she moved over to her unmade bed to Facetune her photos. "I know some YouTubers who will post pictures of themselves taken straight from their phones, and they're doing fine," Antonio said. "They're probably mentally doing amazing." But she has a process: First, she gets rid of her acne. Then she color-corrects her neck, removes flyaway hairs, narrows her jawline, straightens her nose, plumps her lips: digital, as opposed to analog, cosmetics, essentially. Then she moves the image over to the VSCO app to use its filters. "I'm not a VSCO girl," she said. "Well, maybe I am."
Part of the terror of the internet, for the olds, is that this technology exploits flaws in our thinking. Pre-internet, the prevailing belief was that we had real selves and fake selves, and we cast judgment on the fakes. We took for granted that we should at least try to present ourselves to the world as coherent people with unified personalities. An avatar could only mean trouble (and often did): an alter ego, an outlet, for the excised bits; a convenient, nearly irresistible portal for the parts of ourselves we had repressed.
This foundational (maybe Puritan?) belief in the integrated self has been helpful, even necessary, in real life, because in real life we need to deal with one another in time and space. Thus it's nice if our fellow humans are predictable, and you have some idea of what you'll be dealing with when a person shows up. There are whole branches of psychology dedicated to trying to help us keep ourselves together. And, of course, rafts of diagnoses (bipolar, schizophrenia, multiple personality, borderline personality) for those of us who fail to do this well.
And yet, at the same time, we know it's a ruse. We are, all of us, deeply, inalienably contradictory and chaotic. In the practical world, we pretend it's not true. But in art, if people capture this multidimensionality beautifully enough ("Do I contradict myself? Very well then, I contradict myself"), we herald their genius and praise them for it.
This chaos, this cubism, this unleashing of our multiple selves, is a feature, not a bug, of the online world. It's arguably its defining characteristic for those who grew up there. You could attribute all the jump cuts, all the endlessly iterating memes, to a destroyed attention span. But it's also evidence of something deeper, a mind-set people are just trying to name. The Dutch cultural theorists Timotheus Vermeulen and Robin van den Akker settled on the term "metamodernism," a cultural position they claim is defined by neither modernism's essential optimism nor postmodernism's irony and mistrust, but as an oscillation, an unsuccessful negotiation, between two opposite poles. The sensibility is, as Luke Turner, a British artist and the author of The Metamodernist Manifesto, puts it, "a kind of informed naivety, a pragmatic idealism, a moderate fanaticism, oscillating between sincerity and irony, deconstruction and construction, apathy and affect, attempting to attain some sort of transcendent position, as if such a thing were within our grasp." It is nostalgic and cynical, knowing and naive; manipulative, manipulated and spontaneous. Arguably it is the dominant postapocalyptic vision of our digital times, the internet's McLuhan moment, brought to us by teenagers who, as such, spend their days feeling like 10 different people at once and believe they can, and should, express them all. We all contain multitudes. The kids seem to know that's all right.
Elizabeth Weil is a writer at large for the magazine. She last wrote a profile of Venus Williams. Maurizio Cattelan is an Italian artist whose work has been the subject of numerous solo exhibitions, including at the Guggenheim Museum in New York and the Centre Georges Pompidou in Paris. Pierpaolo Ferrari is an Italian photographer and, along with Cattelan, is a founder of the magazine Toiletpaper, known for its surreal and humorous imagery.
Additional design and development by Jacky Myint.
Read the original here:
What Do Teenagers Learn Online Today? That Identity Is a Work in Progress - The New York Times
Simplify Your Application Process & Improve Time-to-Fill – Built In
A candidate's experience with and impression of your company are formed very early in the recruitment process. And if a candidate has a negative experience at any step along the way, nearly half will terminate their relationship with the company entirely. Not only that, but if you're not constantly optimizing your recruitment process, your recruitment metrics, like applicants per opening, application completion rate and time-to-fill, will only decline.
While there are a number of tech recruiter resources out there to help your team improve the efficiency of your recruitment strategy, simplifying your application process is critical to getting candidates in your pipeline early on.
This article walks you through the different factors that contribute to a successful application process, with tips on how you can improve yours and reduce time-to-fill.
To get a sense of where your application process needs improvement, ask several members of your team to apply for an open role. Their feedback will give you a good sense of where in the process applicants become frustrated or confused, causing them to leave the application uncompleted. From there, consider the following tips for ways to improve your application process and reduce time-to-fill.
Job descriptions are a critical part of your application process because they ultimately convince candidates to apply for an open role (or not). Creating a candidate persona before you write your job description will help you determine exactly what is required for a candidate to excel in the role. Additionally, the requirements on your job post, and how they are written, play a significant role in who will apply.
Believe it or not, the language you use in job descriptions attracts or deters candidates based on their gender. A study found that job descriptions with more masculine language attract fewer female job applicants. There are simple tools available to help reduce biased language in job descriptions, like this Gender Decoder or Textio, which offers a service that makes job description language gender neutral. You can also remove gendered pronouns when describing the prospective candidate.
In addition to language, job requirements also affect which candidates apply. More specifically, men are significantly more likely to apply for any given role because men apply to roles when they meet 60% of the requirements listed in a job description. Women, on the other hand, only apply to jobs for which they meet 100% of the requirements. In general, research has found that men and women interact with the recruiting lifecycle differently, so consider this when creating your application process.
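To give a rough sense of how decoders like the ones mentioned above work, here is a minimal sketch in Python. The word lists are short illustrative samples chosen for this example, not the actual lexicons used by Gender Decoder or Textio:

```python
# Minimal sketch of a gendered-language audit for job descriptions.
# The word lists below are illustrative samples only, not the lexicons
# used by tools like Gender Decoder or Textio.
import re

MASCULINE_CODED = {"competitive", "dominant", "ambitious", "assertive", "ninja"}
FEMININE_CODED = {"collaborative", "supportive", "nurturing", "interpersonal"}

def audit_language(text: str) -> dict:
    """Return the masculine- and feminine-coded words found in the text."""
    words = set(re.findall(r"[a-z]+", text.lower()))
    return {
        "masculine": sorted(words & MASCULINE_CODED),
        "feminine": sorted(words & FEMININE_CODED),
    }

report = audit_language(
    "We want an ambitious, competitive engineer who is also collaborative."
)
print(report)
```

Real tools score hundreds of coded terms and suggest neutral replacements; the point here is just that the underlying check is a simple lexicon match.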
If you haven't already, invest in an Applicant Tracking System (ATS). An ATS will help your team track candidates throughout the hiring process from start to finish. Utilizing an ATS will not only save your team a wealth of time throughout the process, but it can also help improve the process for job applicants.
48% of candidates drop out of the application process due to a confusing or complicated user experience on the front end of an ATS. These systems are often customizable, so your team can create an application form that includes all of the information you need with a user-friendly experience. They also have automation options, so your team can spend less time sending follow-up emails and scheduling phone interviews and more time creating close connections with top candidates.
Once you've perfected your job description and started gathering applications, ensure your application process takes less than 15 minutes total. Why? Because 60% of Gen Z job seekers won't spend longer than that on a job application. Additionally, if a job application is confusing, complicated or takes too long, more than half of all candidates will abandon the application altogether.
Out of all of the complaints candidates have about the recruitment process, the second most common is a lengthy application process. Alright, you get the point: a shorter application process is key to moving candidates further through your recruitment process. But how are you supposed to actually shorten it?
For starters, strip out any questions and information that are not absolutely necessary. At such an early stage in the process, limit the candidate information you collect to the following:
Note: Anything with an asterisk (*) is required for the candidate to submit the application.
If your team isn't planning on reading a resume or cover letter for every application you receive, don't require it. If you need that information from candidates who make it past the initial review stage, you can ask for a cover letter, resume or recommendation letters at a later point.
The application process is a tricky balance: asking for all of the critical information your team needs to determine a candidate's potential at your company without creating an unnecessarily extensive process. From our example in the last section, you can see that we ask for minimal information from candidates.
Before creating your application, read up on laws regarding personal information requests, such as asking about disability status, race, ethnicity, citizenship status, age, gender, sex, education, income, geographical location, marital status, parental status, military experience or criminal background. These may be discriminatory inquiries, and legal stipulations on the matter vary by state and county.
Education is a common requirement, but in most cases it's more of an arbitrary checkbox than necessary experience for a job. It also eliminates perfectly qualified candidates who may not have had the same resources and opportunities as others. In fact, some of the most in-demand roles out there are filled by people who did not complete a bachelor's degree. In 2015, only 37.7% of software developers had completed a bachelor's degree. Additionally, 17.3% of IT technicians, 13.4% of technical support specialists, 10.8% of customer service representatives, 10.7% of network technological coordinators, 9.7% of IT coordinators and 9.4% of marketing representatives did not complete a bachelor's degree.
Instead, companies should care more about experience and soft skills, in addition to a few necessary hard skills, when qualifying candidates. Plus, relaxing education requirements helps companies attract more diverse candidates with non-traditional backgrounds.
If there is a certain hard or soft skill that is necessary for a candidate to even be considered for the role, implement a pre-screen survey or assessment tool that tests an applicant's skills. Such surveys can test for skills in coding, aptitude and personality, as well as a number of industry-specific skill sets.
No matter how simple and clear your application process is, there are bound to be applicants who have questions. In order to prevent candidates from leaving the application, provide them with the information they need while they are applying. You can do this in the form of a simple FAQ on your career page that answers common inquiries like:
Another option is to incorporate a recruiting chatbot that interacts with candidates in real time to answer their questions. And if the bot is unable to answer certain questions, it can gather their name and email address so a recruiter at your company can respond to them during work hours the following day. This also prevents your team from having to work off the clock 24/7 to provide a great experience for candidates.
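The fallback behavior described above, answer what you can, capture contact details for the rest, can be sketched in a few lines of Python. The FAQ entries and the exact-match lookup are deliberately toy; a real bot would use fuzzier matching and a proper data store:

```python
# Toy sketch of a recruiting FAQ bot with a fallback that captures
# contact details for a recruiter to follow up on. The FAQ entries
# are hypothetical examples.
FAQ = {
    "when will i hear back": "We review applications within five business days.",
    "is the role remote": "This role is hybrid, with two days a week on site.",
}

# Questions the bot could not answer, queued for recruiters.
unanswered: list[tuple[str, str, str]] = []

def answer(question: str, name: str, email: str) -> str:
    """Answer a known question, or queue it and promise a follow-up."""
    key = question.lower().strip("?! .")
    if key in FAQ:
        return FAQ[key]
    unanswered.append((name, email, question))
    return "I don't know that one yet; a recruiter will email you tomorrow."
```

The `unanswered` queue is what keeps your team off the clock: the bot handles the routine questions instantly, and everything else becomes a next-morning task instead of a lost candidate.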
While candidates really want a short and simple application process, they also care about having a personalized experience. Some critics are concerned that removing too much human contact from the application process may leave candidates dissatisfied and frustrated.
However, now that you've invested in an applicant tracking system, shortened your application process and automated at least some of your workload, your team will have more time to provide personal connections by focusing on candidates who have high potential and reaching out to them individually.
User experience is critical to ensuring a candidate's experience is nothing short of spectacular. However, in order to improve your user experience, you may need the help of an ATS or of your internal developers and designers, all of which are difficult resources to access and afford. With that in mind, and depending on your budget and resources, here are a few areas of your application that could be improved:
Set critical fields as required so candidates can't accidentally submit their application without filling out every field.
Improve your application page load time. This will require developer assistance, but if a page takes too long to load, candidates are more likely to leave.
Allow candidates to save the application and return to finish it later. It can be difficult to complete an application in a single sitting, and candidates want to submit their best work. However, do send a reminder email so candidates don't forget about the application if they leave it unfinished.
Utilize multiple-selection or drop-down fields to reduce the information candidates have to type out.
Cut down the number of clicks a job seeker has to make to submit the application. Try getting your application onto one page to reduce clicks.
Give progress cues, like icons for attaching files, uploading content and submitting the application, so candidates know the system is in the process of completing a task.
Indicate when the system has completed a task, with icons that say "download complete" or a landing page that says "application submitted," so candidates know with confidence that their application was submitted in full.
Send a follow-up email to inform candidates that their application was submitted successfully and provide them with information on what to expect moving forward.
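Several of the items above (required fields, completion checks, a clear confirmation) come down to validating the application before accepting it. A minimal server-side sketch, using hypothetical field names:

```python
# Minimal sketch of server-side required-field validation for an
# application form. The field names are hypothetical examples; an
# ATS would define its own schema.
REQUIRED_FIELDS = ("name", "email", "resume_url")

def validate_application(form: dict) -> list:
    """Return the required fields that are missing or blank.

    An empty list means the application is complete and can be
    accepted, after which a confirmation page and follow-up email
    should be triggered.
    """
    return [f for f in REQUIRED_FIELDS if not form.get(f, "").strip()]

submission = {"name": "Ada Lovelace", "email": "", "resume_url": "https://example.com/cv.pdf"}
missing = validate_application(submission)
if missing:
    # Re-render the form with these fields highlighted instead of
    # silently accepting an incomplete application.
    print("Please complete:", ", ".join(missing))
```

Client-side `required` attributes give candidates the immediate cue; a server-side check like this is what guarantees no incomplete application slips into your pipeline.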
Traditionally, candidates look for and submit job applications on a computer. However, nowadays, 58% of job seekers find open roles on mobile devices and 35% actually prefer submitting their application on their phones.
Knowing this, you can't just simplify your application process for desktop users. You also need to simplify and optimize your application for the one-third of job seekers who are applying on their phones.
For mobile applications, reduce the amount of information that needs to be typed or entered directly, because that is difficult to do on a phone's tiny keyboard. Having drop-down options, or allowing applicants to link to their LinkedIn profiles, websites and portfolios, makes it significantly easier for people to apply on their phones.
Instead of sending follow-up emails or making phone calls, you can send follow-up texts. Text recruiting is becoming more popular as people are constantly on their phones. A lot of people spend their commute looking for jobs, and it's much easier to respond to messages via text than it is over the phone or even email.
If you're planning on hosting any recruitment events, utilize them as an opportunity for candidates to apply to roles in person. Bring printed applications, or computers or tablets, so candidates can apply directly online and are immediately entered into your ATS. This is a great opportunity for your team to answer any questions about your team or the application process in person and ensure that applications are submitted in full.
Not only that, but it's much easier to make the application process more personal when you're able to chat with candidates in person. Then, when you send a follow-up email, the candidate will know exactly who is behind it and trust that their application is in good hands.
If you really want to get ahead of the recruitment game, test out alternative application processes. But wait, isn't mobile the future of recruiting? While mobile recruiting and applications will continue to grow in popularity, other outlets for applications are already being explored.
Exhibit A: voice assistants. So far, there is only one example of a company utilizing a voice assistant to encourage candidates to apply for open roles, and it is none other than McDonald's. Earlier this year, McDonald's implemented its new "Apply Thru" integration, which allows job seekers to apply to jobs through Amazon Alexa by simply saying, "Alexa, help me get a job at McDonald's," after which the device asks them a few questions to begin the application process.
No matter the state of your application process, there are a number of avenues for your team to consider improving or expanding upon in order to provide candidates with a sweet and simple experience. Making these improvements will not only get candidates into your pipeline early on, but will simultaneously improve your applicants per opening, application completion rate and, most importantly, time-to-fill. And if you're looking for more ways to improve your recruitment process, check out these related articles.
View original post here:
Simplify Your Application Process & Improve Time-to-Fill - Built In