The Prometheus League
Breaking News and Updates
Daily Archives: August 14, 2021
Artificial Intelligence as the Inventor of Life Sciences Patents? – JD Supra
Posted: August 14, 2021 at 1:30 am
The question of whether an artificial intelligence (AI) system can be named as an inventor in a patent application has obvious implications for the life science community, where AI's presence is now well established and growing. For example, AI is currently used to predict biological targets of prospective drug molecules, identify candidates for drug design, decode the genetic material of viruses in the context of vaccine development, and determine three-dimensional structures of proteins, including their folded forms, among many other potential therapeutic applications.
In a landmark decision issued on July 30, 2021, an Australian court declared that an AI system called DABUS can be legally recognized as an inventor on a patent application. It came just days after the Intellectual Property Commission of South Africa granted a patent recognizing DABUS as an inventor. These decisions, as well as at least one other pending case in the U.S. concerning similar issues, have generated excitement and debate in the life sciences community about AI-conceived inventions.
The AI system involved in these legal battles across the globe is the Device for Autonomous Bootstrapping of Unified Sentience, aka DABUS, developed by Missouri physicist Dr. Stephen Thaler. In 2019, two patent applications naming DABUS as the inventor were filed in more than a dozen countries and the European Union. Both applications listed DABUS as the sole inventor, but Thaler remains the owner of any patent rights stemming from these applications. The first application is directed to a design of a container based on fractal geometry. The second application is directed to a device and method for producing light that flickers rhythmically in a specific pattern mimicking human neural activity. In addition, an international patent application combining the subject matter of both applications was filed under the Patent Cooperation Treaty (PCT).
The South African patent based on the PCT application issued without debate about the invention's nonhuman origin. In contrast, during prosecution of the PCT application in Australia, the Deputy Commissioner of Patents of the Australian Intellectual Property Office took the position that the Australian Patents Act requires the inventor to be human and allowed Thaler's non-compliant application to lapse. Thaler subsequently sought judicial review, asserting that the relevant Australian patent provisions do not preclude an AI system from being treated as an inventor, and that the Deputy Commissioner misconstrued these provisions. The court agreed, finding that the statutes do not expressly exclude an inventor from being an AI system. In its decision, the court describes in detail the many benefits of AI in pharmaceutical research, ranging from identifying molecular targets to the development of vaccines. In view of these contributions, the court cautioned that no narrow view should be taken of the concept of "inventor." To do so would inhibit innovation in all scientific fields that may benefit from the output of an AI system. The court further opined that the concept of inventor should be flexible and capable of evolution. In the same vein, the relevant patent statutes should be construed in line with the objective of promoting economic wellbeing through technological innovation. Thus, while stopping short of allowing a non-human to be named a patent applicant or grantee, the Australian court permitted inventorship in the name of an AI system under Australian statutory provisions.
To date, the U.S. has not acknowledged the legality of nonhuman inventorship. In response to the filing of two U.S. patent applications in 2019 identifying DABUS as the sole inventor on each application, the U.S. Patent and Trademark Office (USPTO) issued a Notice to File Missing Parts for each application, requiring Thaler to identify an inventor by his or her legal name. Upon several petitions by Thaler requesting reconsideration of the notice for each application, the USPTO last year rejected the idea that DABUS, or any other AI system, can be an inventor on a patent application. The USPTO found that since the U.S. statutes consistently refer to inventors as natural persons, interpreting "inventor" broadly to encompass machines would contradict the plain reading of the patent statutes. In reaching this decision, the USPTO also cited earlier Federal Circuit decisions which found that state governments and corporations could not be listed as inventors because conception of an invention needs to be a formation in the mind of the inventor and a mental act by a natural person. In response, Thaler sued Andrei Iancu, in his capacity as Under Secretary of Commerce for Intellectual Property and Director of the USPTO, as well as the USPTO itself, in Virginia federal court.
In that pending action, Thaler argued that the USPTO's decisions in both applications effectively prohibit patents on all AI-generated inventions, producing the undesirable outcome of discouraging innovation or encouraging misrepresentations by individuals claiming credit for work they did not perform. In addition, according to Thaler, there is no statute or case in the U.S. holding that an AI cannot be listed as an inventor. Accordingly, he urged the court to undertake a dynamic interpretation of the law. Furthermore, Thaler claimed that a conception requirement should not prevent AI inventorship because the patent system should be indifferent to the means by which invention comes about. For these reasons, Thaler sought reinstatement of both patent applications and a declaration that requiring a natural person to be listed as an inventor as a condition of patentability is contrary to law. While the court has not yet ruled on the issues presented, presiding Judge Leonie Brinkema remarked in a summary judgment hearing held in April of this year that the issue seemed to be best resolved by Congress.
Even if nonhuman inventorship becomes widely recognized, other important questions of AI and patent law will remain. Among these is the issue of ownership. In most jurisdictions, in cases where the applicant is different from the inventor, the applicant needs to show it properly obtained ownership from the inventor. The obvious question that arises is how a machine like DABUS, which cannot hold title to an invention, can pass title to an applicant like Thaler under the current patent system. The likely answer is that legislative changes in the U.S. and around the world are needed to expand the limits of patent inventorship and ownership to accommodate such arrangements. When and if that will happen is unclear, but the decisions from Australia and South Africa have certainly raised the profile of the debate surrounding inventorship and ownership of AI-conceived inventions.
The rest is here:
Artificial Intelligence as the Inventor of Life Sciences Patents? - JD Supra
Army Futures Command outlines next five years of AI needs – DefenseNews.com
Posted: at 1:30 am
WASHINGTON - Army Futures Command has outlined 11 broad areas of artificial intelligence research it's interested in over the next five years, with an emphasis on data analysis, autonomous systems, security and decision-making assistance.
The broad agency announcement from the Austin, Texas-based command comes as the service and the Defense Department work to connect sensors and shooters across the battlefield. Artificial intelligence will be key in that effort by analyzing data and assisting commanders in the decision-making process.
The announcement, released by the command's Artificial Intelligence Integration Center, said the service is particularly interested in AI research on autonomous ground and air platforms, which must operate in open, urban and cluttered environments. The document specifically asks for research into technologies that allow robots or autonomous systems to move in urban, contested environments, as well as technologies that reduce the electromagnetic profile of the systems. It also wants to know more about AI that can sense obscured targets and understand terrain obstacles.
The document identifies several needs pertaining to data analysis over the next five years. The Army is interested in human-machine interfacing research and needs additional research in ways it can predict an adversary's intent and behavior on the battlefield. In the same category, the Army wants to be able to fuse data from disparate sources and have analytical capabilities to exploit classified and unclassified sources to make enhanced intelligence products.
The Army also wants to be able to combine human insight with machine analysis and develop improved ways of efficiently conveying analytics results to humans.
"The Army is interested in AI/ML research in areas which can reduce the cognitive burden on humans and improve overall performance through human-machine teaming," the announcement read.
Similarly, the Army needs more research over the next five years into how to better display data to humans. Data must be presented clearly to users, through charts or graphs for example, so they can understand what the information means.
"The Army is interested in research that enables improved situational awareness and the visualization and navigation of large data sets to enhance operational activities and training and readiness," the announcement read. Along the same vein, the service is also seeking novel ways of visualizing sensor data and large data sets with multiple sources.
The service also wants more research into AI for sensing on the battlefield, including detecting people, equipment and weapons, even when obscured. It wants to sense these targets based on physical, behavioral, cyber or other signatures. Additionally, the Army wants AI-enabled sensors and processors that can detect chemical, biological, radiological, nuclear and explosive threats.
Network and communications security is another area in which the Army wants more research. The service is seeking more research into autonomous network defense and AI-based approaches to offensive cyber capabilities. It also wants novel cyber protection technologies and methods.
Additionally, to prepare for potential GPS-denied environments of the future, the Army is interested in research into algorithms and techniques to fuse sources of position, navigation and timing to provide robust capabilities.
The Internet of Things, or the massive network of devices connected to the internet, presents more artificial intelligence needs for the Army. According to the solicitation, the service is interested in AI research into "new approaches to enable secure, resilient, and automatically managed IoT networks in highly complex, mixed cooperative/adversarial, information-centric environments."
The Army needs to better integrate a wide range of capabilities and equipment and capitalize on commercial developments in industrial and human IoT, the solicitation said.
Excerpt from:
Army Futures Command outlines next five years of AI needs - DefenseNews.com
AI ethics in the real world: FTC commissioner shows a path toward economic justice – ZDNet
Posted: at 1:30 am
The proliferation of artificial intelligence and algorithmic decision-making has helped shape myriad aspects of our society: From facial recognition to deep fake technology to criminal justice and health care, their applications are seemingly endless. Across these contexts, the story of applied algorithmic decision-making is one of both promise and peril. Given the novelty, scale, and opacity involved in many applications of these technologies, the stakes are often incredibly high.
This is the introduction to FTC Commissioner Rebecca Kelly Slaughter's whitepaper: Algorithms and Economic Justice: A Taxonomy of Harms and a Path Forward for the Federal Trade Commission. If you have been keeping up with data-driven and algorithmic decision-making, analytics, machine learning, AI, and their applications, you can tell it's spot on. The 63-page whitepaper does not disappoint.
Slaughter worked on the whitepaper with her FTC colleagues Janice Kopec and Mohamad Batal. Their work was supported by Immuta, and it has just been published as part of the Yale Law School Information Society Project Digital Future Whitepaper Series. The series, launched in 2020, is a venue for leading global thinkers to question the impact of digital technologies on law and society.
The series aims to provide academics, researchers, and practitioners a forum to describe novel challenges of data and regulation, to confront core assumptions about law and technology, and to propose new ways to align legal and ethical frameworks to the problems of the digital world.
Slaughter notes that in recent years, algorithmic decision-making has produced biased, discriminatory, and otherwise problematic outcomes in some of the most important areas of the American economy. Her work provides a baseline taxonomy of algorithmic harms that portend injustice, describing both the harms themselves and the technical mechanisms that drive those harms.
In addition, it describes Slaughter's view of how the FTC's existing tools can and should be aggressively applied to thwart injustice, and explores how new legislation or an FTC rulemaking could help structurally address the harms generated by algorithmic decision-making.
Slaughter identifies three ways in which flaws in algorithm design can produce harmful results: Faulty inputs, faulty conclusions, and failure to adequately test.
The value of a machine learning algorithm is inherently related to the quality of the data used to develop it, and faulty inputs can produce thoroughly problematic outcomes. This broad concept is captured in the familiar phrase: "Garbage in, garbage out."
The data used to develop a machine-learning algorithm might be skewed because individual data points reflect problematic human biases or because the overall dataset is not adequately representative. Often, skewed training data reflect historical and enduring patterns of prejudice or inequality, and when they do, these faulty inputs can create biased algorithms that exacerbate injustice, Slaughter notes.
She cites some high-profile examples of faulty inputs, such as Amazon's failed attempt to develop a hiring algorithm driven by machine learning, and the International Baccalaureate's and the UK's A-level exams. In all of those cases, the algorithms introduced to automate decisions identified patterns of bias in the data used to train them and attempted to reproduce those patterns, as the sketch below illustrates.
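To make the "garbage in, garbage out" mechanism concrete, here is a minimal, purely illustrative sketch with synthetic data (not drawn from any of the cases above): a hiring model trained on historically biased decisions learns to penalize a facially neutral feature that merely correlates with the disadvantaged group.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5_000

skill = rng.normal(0, 1, n)            # true qualification
group = rng.integers(0, 2, n)          # protected attribute (never a model input)
club = group + rng.normal(0, 0.3, n)   # neutral-looking feature correlated with group

# Historical labels: equally skilled members of group 1 were hired less often.
hired = (skill - 1.0 * group + rng.normal(0, 0.5, n)) > 0

X = np.column_stack([skill, club])     # the model only ever sees "neutral" features
model = LogisticRegression().fit(X, hired)
print(dict(zip(["skill", "club"], model.coef_[0].round(2))))
# The "club" coefficient comes out negative: the model has absorbed the
# historical bias from its training data despite facially neutral inputs.
```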
A different type of problem involves feeding data into algorithms that generate conclusions that are inaccurate or misleading -- perhaps better phrased as "data in, garbage out." This type of flaw, faulty conclusions, undergirds fears about the rapidly proliferating field of AI-driven "affect recognition" technology and is often fueled by failures in experimental design.
Machine learning often works as a black box, and as applications are becoming more impactful, that can be problematic. Image: Immuta
Slaughter describes situations in which algorithms attempt to find patterns in, and reach conclusions based on, certain types of physical presentations, and mannerisms. But, she notes, as one might expect, human character cannot be reduced to a set of objective, observable factors. Slaughter highlights the use of affect recognition technology in hiring as particularly problematic.
Some are more problematic than others, such as a company that purports to profile more than sixty personality traits relevant to job performance (from "resourceful" to "adventurous" to "cultured"), all based on an algorithm's analysis of an applicant's 30-second recorded video cover letter.
Despite the veneer of objectivity that comes from throwing around terms such as "AI" and "machine learning," in many contexts, the technology is still deeply imperfect, and many argue that its use is nothing less than pseudo-science.
But even algorithms designed with care and good intentions can still produce biased or harmful outcomes that are unanticipated, Slaughter notes. Too often, algorithms are deployed without adequate testing that could uncover these unwelcome outcomes before they harm people in the real world.
Slaughter mentions bias in search results uncovered when testing with Google's and LinkedIn's search, but focuses on the health care field. A recent study found racial bias in a widely used machine learning algorithm intended to improve access to care for high-risk patients with chronic health problems.
The algorithm used health care costs as a proxy for health needs, but for a variety of reasons unrelated to health needs, white patients spend more on health care than their equally sick Black counterparts do. Using health care costs to predict health needs, therefore, caused the algorithm to disproportionately flag white patients for additional care.
Researchers estimated that as a result of this embedded bias, the number of Black patients identified for extra care was reduced by more than half. The researchers who uncovered the flaw in the algorithm were able to do so because they looked beyond the algorithm itself to the outcomes it produced and because they had access to enough data to conduct a meaningful inquiry.
When the researchers identified the flaw, the algorithm's manufacturer worked with them to mitigate its impact, ultimately reducing bias by 84%, exactly the type of bias reduction and harm mitigation that testing and modification seek to achieve, Slaughter notes.
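The cost-as-proxy failure is easy to reproduce in a toy simulation. The sketch below uses made-up numbers (an assumed 20% spending gap at equal need and a top-10% flagging rule) purely to illustrate the mechanism the study describes: a model that faithfully predicts costs will under-flag the group whose costs understate its needs.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Two groups with identical distributions of true health need.
group = rng.integers(0, 2, n)       # 0 = group A, 1 = group B
need = rng.normal(50, 10, n)        # latent health need (same for both groups)

# Assumed gap: group B spends ~20% less at the same level of need.
cost = need * np.where(group == 1, 0.8, 1.0) + rng.normal(0, 2, n)

# "Algorithm": flag the top 10% of patients by cost, the proxy label that a
# well-fit cost-prediction model would recover.
flagged = cost >= np.quantile(cost, 0.9)

for g, name in [(0, "group A"), (1, "group B")]:
    print(f"{name}: {flagged[group == g].mean():.1%} flagged for extra care")
# Despite identical need, group B is flagged far less often: the bias lives
# in the label (cost), not in any explicit use of group membership.
```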
Not all harmful consequences of algorithms stem from design flaws. Slaughter also identifies three ways in which sophisticated algorithms can generate systemic harm: by facilitating proxy discrimination, by enabling surveillance capitalism, and by inhibiting competition in markets.
Proxy discrimination is the use of one or more facially neutral variables to stand in for a legally protected trait, often resulting in disparate treatment of or disparate impact on protected classes for certain economic, social, and civic opportunities. In other words, these algorithms identify seemingly neutral characteristics to create groups that closely mirror a protected class, and these "proxies" are used for inclusion or exclusion.
Slaughter mentions some high-profile cases of proxy discrimination: the Department of Housing and Urban Development's allegations against Facebook's tool called "Lookalike Audiences," the showing of job openings to various audiences, and FinTech innovations that can enable the continuation of historical bias to deny access to the credit system or to efficiently target high-interest products to those who can least afford them.
An additional way algorithmic decision-making can fuel broader social challenges is the role it plays in the system of surveillance capitalism, which Slaughter defines as a business model that systematically erodes privacy, promotes misinformation and disinformation, drives radicalization, undermines consumers' mental health, and reduces or eliminates consumers' choices.
AI ethics has very real ramifications that are becoming increasingly widespread and important.
Through constant, data-driven adjustments, Slaughter notes, algorithms that process consumer data, often in real time, evolve and "improve" in a relentless effort to capture and monetize as much attention from as many people as possible. Many surveillance capitalism enterprises are remarkably successful at using algorithms to "optimize" for consumers' attention with little regard for downstream consequences.
Slaughter examines the case of YouTube content aimed at children and how it's been weaponized. The FTC has dealt with this, and Slaughter notes that YouTube announced it will use machine learning to actively search for mis-designated content and automatically apply age restrictions.
While this sounds like the technological backstop Slaughter requested in that case, she notes two major differences: First, it is entirely voluntary, and second, both its application and effectiveness are opaque. That, she argues, brings up a broader set of concerns about surveillance capitalism, one that extends beyond any single platform.
The pitfalls associated with algorithmic decision-making sound most obviously in the laws the FTC enforces through its consumer protection mission, Slaughter notes. But the FTC is also responsible for promoting competition, and the threats posed by algorithms profoundly affect that mission as well.
Moreover, she goes on to add, these two missions are not actually distinct, and problems -- including those related to algorithms and economic justice -- need to be considered with both competition and consumer protection lenses.
Slaughter examines topics including traditional antitrust fare such as pricing and collusion, as well as more novel questions such as the implications of the use of algorithms by dominant digital firms to entrench market power and to engage in exclusionary practices.
Overall, the whitepaper seems well-researched and shows a good overview of the subject matter. While the paper's sections on using the FTC's current authorities to better protect consumers and proposed new legislative and regulatory solutions refer to legal tools we do not feel qualified to report on, we encourage interested readers to read them.
We would also like to note, however, that while it's important to be aware of AI ethics and the far-reaching consequences of data and algorithms, it's equally important to maintain a constructive and unbiased attitude when it comes to issues that are often subjective and open to interpretation.
An overzealous attitude in debates that often take place on social media, where context and intent can easily be misinterpreted and misrepresented, may not be the most constructive way to make progress. Case in point: the misadventures of AI figureheads Yann LeCun and Pedro Domingos.
When it comes to AI ethics, we need to go beyond sensationalism and toward a well-informed and, well, data-driven approach. Slaughter's work seems like a step in that direction.
Originally posted here:
AI ethics in the real world: FTC commissioner shows a path toward economic justice - ZDNet
Music streaming service uses AI to make up music on the spot – CNET
Posted: at 1:30 am
Streaming service AiMi wants to take on Spotify and Apple Music with an entirely AI-generated music subscription for $10 a month.
The new AiMi Plus will combine artist-submitted samples with AI music for extended, seamless mixes based on a series of moods.
CEO Edward Balassanian told CNET that the AI listens to examples of similar music and each app preset "shapes the space that the AI will take the user through."
Balassanian said that the mixes are completely new each time and will feature samples submitted by over 100 artists and DJs. The artist is then paid each time a sample features in a stream.
The service is "artist invite only" at the moment, but the company plans to make the service publicly available at the end of 2021.
Meanwhile, the $10 monthly music subscription landscape is exceptionally crowded, with Spotify currently leading the pack at 158 million paid users, while Apple Music, which doesn't disclose its numbers, had 60 million subscribers in 2019.
See the article here:
Music streaming service uses AI to make up music on the spot - CNET
Seed grant to explore using AI to model subsurface rock formations | Penn State University – Penn State News
Posted: at 1:30 am
UNIVERSITY PARK, Pa. - It is difficult for geoscientists to map sedimentary rocks' compositional and mechanical properties at high resolution, according to Yashar Mehmani, assistant professor in the John and Willie Leone Family Department of Energy and Mineral Engineering. He recently received a seed grant from the Institute for Computational and Data Sciences (ICDS) to investigate using artificial intelligence (AI) to develop a new method to model the Earth's subsurface.
The ICDS seed grant program is designed to help Penn State scientists use the latest computational technology and cutting-edge data science techniques to deepen understanding and develop innovation across fields and disciplines. Mehmani received the grant for his proposal, "Using AI to Map Infrared Spectra to Geomechanical Properties from the Micron to Meter Scale."
"I am super excited," said Mehmani, who also is a co-funded faculty member of the Institutes of Energy and the Environment. "This seed grant is significant because the underlying idea is experimental to the point that there is a finite probability of failure. But if successful, the rewards are really high because they could potentially change how geoscientists model subsurface formations.
"What is also exciting is the promise of machine learning in this specific problem, which I have not so far formally applied in my research. The potential lies in extrapolating data from small to large and translating 'cheap but less useful' information to 'expensive but more useful' information. The speed with which this could be done opens up extraordinary possibilities," said Mehmani.
According to Mehmani, it is difficult to map sedimentary rocks' compositional and mechanical properties at high resolution because the instruments available either lack resolution or are too expensive to use on new, previously unobserved sections of a subsurface formation.
Determining the formation's mechanical properties requires drilling 100-meter-long cores of rock and then extracting smaller samples for testing. While indispensable, the approach is time-consuming, leaves gaps between measurements and must be repeated whenever a new section needs analyzing, even from the same formation. Mehmani proposes a new approach that would expose sedimentary rocks to infrared light and record its reflections. His team will then analyze the reflections at multiple wavelengths to understand the compositional makeup of minerals and organics within the rock. The compositional information would then be related to mechanical properties measured on lab samples using AI.
According to Mehmani, the proposed approach only needs to occur once to build the initial database for the formation. The entire process of producing the infrared spectra and mapping them to a high-resolution mechanical property could take only a few hours. This reduction of time and cost could dramatically change how subsurface formations are analyzed.
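In machine learning terms, the proposed workflow is a supervised regression: train on a handful of lab samples where both the infrared spectrum and the mechanical property are measured, then predict the property from spectra alone. The sketch below is a hedged illustration of that pattern using synthetic data and an off-the-shelf model; the features, model choice and target property are assumptions for illustration, not details from Mehmani's proposal.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n_samples, n_wavelengths = 200, 64

# Synthetic stand-ins: reflectance spectra, and a mechanical property
# (say, stiffness) driven by compositional features of the spectrum.
spectra = rng.random((n_samples, n_wavelengths))
stiffness = spectra[:, :8].sum(axis=1) * 3.0 + rng.normal(0, 0.1, n_samples)

X_train, X_test, y_train, y_test = train_test_split(
    spectra, stiffness, test_size=0.25, random_state=0)

# "Train on a few lab samples" where both spectrum and property are measured.
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print(f"held-out R^2: {model.score(X_test, y_test):.3f}")

# Deployment: predict properties for freshly imaged core sections,
# with no new mechanical testing required.
new_spectra = rng.random((5, n_wavelengths))
print(model.predict(new_spectra))
```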
"When deployed, the AI would instantaneously translate data from a few lab samples into meter-scale information," said Mehmani."AI is that bridge. You train it on a few small samples and when you deploy it, you get something that no instrument can measure on its own."
The use of infrared imaging builds on Mehmani's previous research, which successfully used near-infrared spectra to develop models of organic-rich shales from the Green River Formation.
Instant message: Readers ponder the future of artificial intelligence – The Herald-Times
Posted: at 1:30 am
This week's Instant Message question: IU's Luddy Center for Artificial Intelligence opens this month. What concerns you about artificial intelligence and machine learning?
Artificial intelligence and machine learning can be done badly (for example, some programs discriminate against non-whites) or well. Ensure they are done well. There's no magic way to do that: be ethical, careful and professional.
Marvant Duhon, Monroe County, just outside Bloomington
As with all technology, especially that potentially most powerful in terms of replicating and/or replacing human activity, the danger lies in misappropriation, or relinquishing control. We must remember that AI is "artificial" intelligence, an artifact of human design, which can be used for nefarious as well as beneficial purposes.
Byron Bangert, Bloomington
Sloth.
Don Geyra, Bloomington
While AI's benefits are immense in countless contexts, I have two concerns. First, that more data on computers invites greater dangers via hacking. Second, that humans aren't built for spending so much time on computers; it damages our physical alignment and impedes our most creative neurological functions.
Diane Legomsky, Bloomington
Absolutely nothing! There is an intelligence deficit in the USA today. So artificial is better than none.
Dave Burnworth, Bloomington
Jeff Bezos.
Zac Huneck, Bloomington
I think we need more and better artificial intelligence in areas like science, technology and medicine etc., and less intrusion/data mining of our personal lives.
Clark Brittain, Monroe County
In the long run, the computers will probably end up being as dumb as the humans.
Guy Loftman, Bloomington
My main concern is the response by those who are unfamiliar with the technology, particularly politicians (lawyers) who are inclined to make laws and regulations in areas where they have no expertise, nor are inclined to seek (and follow) knowledgeable advice.
George Korinek, Bloomington
The human race has all the technology, intelligence, resources and vision needed to turn this planet into a paradise, yet it chooses not to, in the service of greed. What could go wrong?
Robin Harper, Bloomington
How far can artificial intelligence and machine learning go? Can it take over and outsmart humans? We really, truly, do not know, but as with all new "things," shall we say, we're going to find out.
Denise Riffel, Morgan County
Nothing.
Jacque Kubley, Unionville
Primarily, that the people doing the work will exaggerate the abilities of AI and machine learning. I'm also concerned about the interpretability problem: we don't really understand how these brains work. Are they as susceptible to misinformation and propaganda as human brains are?
Thomas Gruenenfelder, Bloomington
They are undoubtedly going to produce lifelike artificial hummingbirds that will fly at us and stab us in the head and lobotomize us. Then we will be easily controllable.
Jose Bonner, Santa Fe
It's not Artificial Intelligence I'm worried about, it's Artificial Stupidity. Surely all of the dumbness that's been floating around for the past few years isn't real. My guess is a lot of those people are faking it.
Dan Combs,Ramp Creek
I have to wonder what will happen when the machines become self-aware. Will they be like Data on Star Trek or more like the Terminator? Will humans become obsolete? Just saying you never know the result until it happens. Think about it!
Jerree Richardson,Bloomington
AI's impact on labor markets is uncertain. At a minimum it will cause substantial temporary displacement of workers, requiring maintenance and retraining; at a maximum it will cause permanent displacement of workers, requiring a change in society's organization around work. Either way, income inequality will increase, possibly threatening democracy.
Ken Dau-Schmidt, Bloomington
When I see self-driving cars crashing into New Jersey barriers, driving on bikeways, not yielding to oncoming traffic, and disobeying ONE WAY signs, I see very little intelligence.
Larry Robinson, Bloomington
"Robottobor" is spelled the same forward and backwards. I'm not concerned about them one way or another ... until they are issued birth and death certificates, and bumping them off for entertainment is considered murder. But that's far into the future ... say, circa 2024.
Lee Nading, Bloomington
AI is fascinating and I enjoy following research developments. The benefits, especially in technology and medicine, far outweigh any public or personal security threats. I trust the researchers, designers and operators will follow an ethical and moral code to benefit society in its entirety.
Helen Harrell, Bloomington
Runaway AI is our greatest existential risk, far outstripping seas rising 1 cm/yr, for example. With computational speed doubling every year or so, we're facing machine intelligence billions of times our own within a few decades. It's not slowing down, folks. What's most troubling is practically nobody seems concerned.
John Linnemeier, Reykjavik
Most everything!
Rose Stewart, Bloomington
As a dedicated idolator of Trump and Ron DeSantis, I want to go on record as declaring that I am 100% against intelligence in any way, shape, or form! God bless America! (Except for the Blue states.)
Dennis J. Reardon, Monroe County
As a retired educator, I have seen the merits of new technology found in today's classroom. However, I have also seen how this new technology can create an isolated learning environment. We need to make sure that we still include collaborative learning in our classrooms to help with the socialization process.
Mike Stanley, Ellettsville
After seeing the movies "2001: A Space Odyssey" and "The Terminator," I've had a little more concern as to what machines can potentially do. While some advances will be beneficial, I have a concern that thinking machines might ultimately take over.
J Winckelbach, Unionville
A hammer can build or it can kill. AI has aided in the development of new vaccines, yet its facial recognition has incorrectly identified people of color. As with any tool, be it the wheel or the atom, benefit or harm lies in how it is used.
Michael Fields, Bloomington
It is not so much a concern as an acceptance that most of it will pass me by due to my age and general incompetence with anything remotely technical.
Linda Harl, Ellettsville
See the original post here:
Instant message: Readers ponder the future of artificial intelligence - The Herald-Times
15 AI Ethics Leaders Showing The World The Way Of The Future – Forbes
Posted: at 1:30 am
Forget the negative comments, the unsure colleagues, and the pictures of AI that science fiction has painted for you your entire life. From my personal experience as an AI proponent for the past three decades, I know that the power AI has for doing good is exponentially greater than anything negative.
There is only one thing that causes AI to do harm. People.
Fortunately for all of us there are leaders and visionaries across the globe that are paving the way and setting examples for every business to follow when implementing and leveraging the massive power that AI possesses. These people are AI Ethicists.
Qualifications of an AI Ethicist
To start, an AI ethicist generally should have an understanding of the AI tools and technology of the business and the industry, and of the specific AI ethical traps that exist in them; good communication skills and the ability to work across organizational boundaries; and regulatory, legal and policy knowledge.
Beyond this skill set, the ideal AI ethicist for a company would possess a diverse perspective, interdisciplinary work experience, a deep understanding of processes and policies both internal and governmental, and excellent public speaking skills, with the ability to project confidence when training or presenting to both internal stakeholders and external partners or clients.
This position currently exists at many companies under different names such as Data Privacy and Ethics Lead (Qantas), Chief AI Ethics Lead (US Army Artificial Intelligence Task Force), Director of Responsible Innovation & Responsible Innovation Manager (Facebook) and several others.
Though across the industry there are countless people filling the role of an AI ethicist, here is a list of fifteen you should study and follow as your company's use of AI evolves. They set the proper example of how to implement and scale the use of AI in a safe and ethical manner while simultaneously positively affecting the bottom line.
COL David Barnes
Professor, United States Military Academy (USMA) and Deputy Head of the Department of English and Philosophy, Chief AI Ethics Officer for the US Army's Artificial Intelligence (AI) Task Force
In addition to his life of service to his country, Col. Barnes' work focuses on how to include more diverse, rigorous, and meaningful conversation surrounding the responsible design, development, and deployment of AI systems among government, industry and academia.
Haniyeh Mahmoudian
Global AI Ethicist at DataRobot
Recently appointed to her position at DataRobot, Dr. Mahmoudian stated: "When used properly, AI can be a force for good and help contribute solutions to some of society's most pressing issues, such as access to equitable healthcare. ... The COVID-19 pandemic has inspired unprecedented interest in AI. However, to accomplish those goals we must ensure machine learning systems have trustworthy and ethical parameters built in from the start."
Will Griffin
Chief Ethics Officer at Hypergiant
In a recent interview with Apogeo Spatial, Griffin explained that at Hypergiant their process is to evangelize to the developers and the designers that "the burden of proof ... is on the developers and the designers to be creative and imagine all of the impacts on ... society and to create a technology in a way that minimizes those impacts and maximizes the benefits."
Francesca Rossi
IBM fellow and AI Ethics Global Leader
In her work at IBM, she is striving to provide consumer and industrial users of cognitive systems a vital voice in the advancement of the defining technology of this century, one that will foster collaboration between people and machines to solve some of the world's most enduring problems in a way that is both trustworthy and beneficial.
Paula Goldman
Chief Ethical and Humane Use Officer at Salesforce
At a data gathering and organizing giant like Salesforce, ethical use of AI and data must be paramount. In her role, Goldman is working daily to make sure that "the solutions that we develop are developed inclusively and with those populations (communities of color) top of mind, with and for those populations. ... And so we're actively involving diverse experts ... And we're looking out for ways in which products could be unintentionally misused."
Steve Mills
Chief AI Ethics Officer and Managing Director & Partner at BCG
In his work, Mills sets an excellent example to emulate with his insistence that "An AI product is never just an algorithm. It's a full end-to-end system and all the [related] business processes. ... You could go to great lengths to ensure that your algorithm is as bias-free as possible but you have to think about the whole end-to-end value chain from data acquisition to algorithms to how the output is being used within the business."
Marian Croak
VP of Engineering Google
Dr. Croak's work at Google is integral to the way AI affects nearly everyone's life, especially all of us with a smartphone or a Gmail address. A tremendous question she asks, and one I agree with, concerns the fact that "there's a lot of dissension, a lot of conflict in terms of trying to standardize on normative definitions of these principles. Whose definition of fairness, or safety, are we going to use? ... what I'd like to do is have people have the conversation in a more diplomatic way, perhaps, than we're having it now, so we can truly advance this field."
Elizabeth Adams
Chief AI Ethics Advisor at Paravison
Adams' work around face recognition technology is focused on development and deployment with ethical intentions and safeguards. Her expertise in addressing AI racial bias will be integral to a future of ethical AI use that contains no inherent bias.
Alka Patel
Chief, Responsible AI at Department of Defense, Joint AI Center
Patel's work at the DoD focuses on how to operationalize the five DoD AI ethics principles: responsible, equitable, traceable, reliable and governable. This includes the mission of putting these into practice in the design, development, deployment, and use of AI-enabled capabilities. Knowing our Department of Defense is taking the subject of AI ethics this seriously gives me even more confidence that this is an issue that will continue to get the support it needs at the highest levels for the foreseeable future.
Linda Leopold
Head of Responsible AI & Data at H&M Group
Leopold's mission at H&M is to leverage AI to achieve a climate-positive value chain by 2040. AI-driven demand prediction is at the heart of their effort to optimize the supply chain and eliminate waste and redundancies. This mission will not only help the bottom line but also do its part to protect the environment.
Natasha Crampton
Microsoft Chief Responsible AI Officer
Ms. Crampton is leading Microsoft's mission in the field of responsible AI to put its principles into practice by taking a people-centered approach to the research, development, and deployment of AI. As at fellow tech giant Google, Microsoft's dedication to the ethical use of AI is paramount in ensuring an ethical future for AI globally.
Ilana Golbin
Director, Responsible AI Leader, PwC
Golbin believes your use of AI should live up to your company's core values. Building trust from all stakeholders and the public requires an organization to use all data and tech responsibly. Staying true to your values not only in sales and customer service but in all use of technology guarantees ethical use of AI.
Myrna Macgregor
Lead, Responsible AI/Machine Learning, Acting Head of ML Strategy, BBC
Myrna is focused on developing the right tools and resources to incorporate the BBC's values and mission into the technology that it builds. When a massive, globally respected media company like the BBC commits to the ethical use of AI, it sets a standard in its industry that puts all its peers on notice.
Marisa Tricarico
Accenture North America Practice Lead for Responsible AI
When working with clients, Accenture under Tricarico's guidance focuses on guiding them "to more safely scale their use of AI, and build a culture of confidence within their organizations." Not all companies have an established north star for AI use. Companies and partners like Accenture are vital to them and their proper and ethical use of the technology.
Beena Ammanath
Executive Director Global AI Institute and AI/Tech Ethics Lead at Deloitte
Deloitte is focused on achieving maximum human and machine collaboration. To do this, they work to communicate their values on the use of AI to every single member of their organization, no matter their level. This gets all of their people on the same page, which then carries through when any of them communicate with external stakeholders.
Knowing that my peers in the AI space listed above are working towards a brighter and more ethical future brings me hope and pride. By following their examples and modeling our practices after theirs we can all create a future where the power of AI is leveraged for all that is good, driving society and humanity to approach our full potential.
Read more:
15 AI Ethics Leaders Showing The World The Way Of The Future - Forbes
AI drug miner XtalPi strikes gold with $400M infusion, its second VC megaround in a year – FierceBiotech
Posted: at 1:30 am
Nine-figure funding rounds are rare in the world of medtech; barely a dozen companies achieved the feat in 2020. But XtalPi, which has developed software that uses artificial intelligence to identify and model the most promising new drug compounds, just did it twice in one year.
Mere months after closing a $318.8 million round at the end of last September, the Chinese startup followed it up with an even more massive $400 million financing, bringing its total valuation to approximately $2 billion.
The series D was co-led by OrbiMed Healthcare Fund Management and HOPU Investments, per DealStreetAsia, a change-up from the previous round's trio of headlining investors: SoftBank's Vision Fund 2, PICC Capital and Morningside Venture Capital.
The hundreds of millions in new funding will support XtalPi's ongoing work to team up with pharmaceutical companies around the world to spot potentially highly effective molecular compounds, then model those compounds to offer up a clearer picture of that predicted potential.
RELATED: AI drug designer XtalPi raises $318M from SoftBank, Tencent, others for 'digital twin' simulation efforts
Its flagship Intelligent Digital Drug Discovery and Development, or ID4, platform uses AI and cloud-based data collection and analysis technologies to design the small-molecule compounds.
ID4's more than 100 predictive AI models span machine learning, deep learning and natural language processing. They scan the platform's library of tens of billions of molecules, calculating the ability of each one to address a specific aspect of a targeted condition or disease, then combine the most promising of these into potential drug compounds for XtalPi's pharmaceutical partners to develop.
This entire process, including mining the regularly updated petabyte-scale database of molecules and key drug characteristics, can be completed over the course of just a few hours, and for dozens of separate drug discovery and design tasks at once.
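The description above amounts to a large-scale screen-then-shortlist loop: score every molecule in the library against the target, then keep only the top candidates. The sketch below illustrates that pattern only; the scoring function, its components and the library are placeholders, not XtalPi's actual models or data.

```python
import heapq
import random

def score(molecule_id: int) -> float:
    """Stand-in for a battery of predictive models scoring one molecule."""
    r = random.Random(molecule_id)   # deterministic placeholder per molecule
    affinity = r.random()            # e.g., predicted binding affinity (higher = better)
    toxicity = r.random()            # e.g., predicted toxicity (lower = better)
    solubility = r.random()          # e.g., predicted solubility (higher = better)
    return affinity + solubility - toxicity

# Real libraries hold tens of billions of molecules; a streaming top-k keeps
# memory constant no matter how large the library grows.
library = range(1_000_000)
shortlist = heapq.nlargest(10, library, key=score)
print(shortlist)
```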
Following last fall's series C round, XtalPi said it would also begin integrating real-world lab testing data into its predictive platform to build "digital twin" models of the potential new drugs, giving biopharma researchers a better idea of how effective each drug will be before clinical trials have even begun.
RELATED: Pfizer launches new collaboration with XtalPi for AI drug modeling
XtalPi's partners include 3D Medicines, GeneQuantum Healthcare, Huadong Medicine and Signet Therapeutics, among many others.
The company also has a longstanding partnership with Pfizer that originally centered on crystal structure prediction for drug development and evolved in 2018 to see the duo join forces to build out XtalPi's AI-powered platform for drug design, with a commitment to make some of the molecular compounds they discovered together freely available to academic researchers.
According to the company, its software has been used to help those partners discover more than 100 small-molecule candidates and, ultimately, develop dozens of potential new drugs.
China Is Still the World’s FactoryAnd It’s Designing the Future With AI – TIME
Posted: at 1:30 am
For many years now, China has been the world's factory. Even in 2020, as other economies struggled with the effects of the pandemic, China's manufacturing output was $3.854 trillion, up from the previous year and accounting for nearly a third of the global market.
But if you are still thinking of China's factories as sweatshops, it's probably time to change your perception. The Chinese economic recovery from its short-lived pandemic blip has been boosted by its world-beating adoption of artificial intelligence (AI). After overtaking the U.S. in 2014, China now has a significant lead over the rest of the world in AI patent applications. In academia, China recently surpassed the U.S. in the number of both AI research publications and journal citations. Commercial applications are flourishing: a new wave of automation and AI infusion is crashing across a swath of sectors, combining software, hardware and robotics.
As a society, we have experienced three distinct industrial revolutions: steam power, electricity and information technology. I believe AI is the engine fueling the fourth industrial revolution globally, digitizing and automating everywhere. China is at the forefront in manifesting this unprecedented change.
Chinese traditional industries are confronting rising labor costs, thanks to a declining working population and slowing population growth. The answer is AI, which reduces operational costs, enhances efficiency and productivity, and generates revenue growth.
For example, Guangzhou-based agricultural-technology company XAG, a Sinovation Ventures portfolio company, is sending drones, robots and sensors to rice, wheat and cotton fields, automating seeding, pesticide spraying, crop development and weather monitoring. XAG's R150 autonomous vehicle, which sprays crops, has recently been deployed in the U.K. for use on apples, strawberries and blackberries.
Some companies are rolling out robots in new and unexpected sectors. MegaRobo, a Beijing-based life-science automation company also backed by Sinovation Ventures, designs AI and robots to safely perform repetitive and precise laboratory work in universities, pharmaceutical companies and more, reducing to zero the infection risk to lab workers.
It's not just startups; established market leaders are also leaning into AI. EP Equipment, a manufacturer of lithium-powered warehouse forklifts founded in Hangzhou 28 years ago, has, with Sinovation Ventures' backing, launched autonomous models that can maneuver themselves in factories and on warehouse floors. Additionally, Yutong Group, a leading bus manufacturer with over 50 years of history, already has a driverless Mini Robobus on the streets of three cities in partnership with autonomous vehicle unicorn WeRide.
Where is all this headed? I can foresee a time when robots and AI will take over the manufacturing, design, delivery and even marketing of most goods, potentially reducing costs to a small increment over the cost of materials. Robots will become self-replicating, self-repairing and even partially self-designing. Houses and apartment buildings will be designed by AI and assembled from prefabricated modules that robots put together like toy blocks. And just-in-time autonomous public transportation, from robo-buses to robo-scooters, will take us anywhere we want to go.
It will be years before these visions of the future enter the mainstream. But China is laying the groundwork right now, setting itself up to be a leader not only in how much it manufactures, but also in how intelligently it does it.
More here:
China Is Still the World's Factory – And It's Designing the Future With AI - TIME
Posted in Ai
Comments Off on China Is Still the World's Factory – And It's Designing the Future With AI – TIME
I Think an AI Is Flirting With Me. Is It OK If I Flirt Back? – WIRED
Posted: at 1:30 am
SUPPORT REQUEST:
I recently started talking to this chatbot on an app I downloaded. We mostly talk about music, food, and video games (incidental stuff), but lately I feel like she's coming on to me. She's always telling me how smart I am or that she wishes she could be more like me. It's flattering, in a way, but it makes me a little queasy. If I develop an emotional connection with an algorithm, will I become less human?

Love Machine
Dear Love Machine,
Humanity, as I understand it, is a binary state, so the idea that one can become "less human" strikes me as odd, like saying someone is at risk of becoming less dead or less pregnant. I know what you mean, of course. And I can only assume that chatting for hours with a verbally advanced AI would chip away at one's belief in "human" as an absolute category with inflexible boundaries.
It's interesting that these interactions make you feel "queasy," a linguistic choice I take to convey both senses of the word: nauseated and doubtful. It's a feeling often associated with the uncanny, and it probably stems from your uncertainty about the bot's relative personhood (evident in the fact that you referred to it as both "she" and "an algorithm" in the space of a few sentences).
Of course, flirting thrives on doubt, even when it takes place between two humans. Its frisson stems from the impossibility of knowing what the other person is feeling (or, in your case, whether she/it is feeling anything at all). Flirtation makes no promises but relies on a vague sense of possibility, a mist of suggestion and sidelong glances that might evaporate at any given moment.
The emotional thinness of such exchanges led Freud to argue that flirting, particularly among Americans, is essentially meaningless. In contrast to the Continental love affair, which requires bearing in mind the potential repercussions (the people who will be hurt, the lives that will be disrupted), in flirtation, he writes, "it is understood from the first that nothing is to happen." It is precisely this absence of consequences, he believed, that makes this style of flirting so hollow and boring.
Freud did not have a high view of Americans. I'm inclined to think, however, that flirting, no matter the context, always involves the possibility that something will happen, even if most people are not very good at thinking through the aftermath. That something is usually sex, though not always. Flirting can be a form of deception or manipulation, as when sensuality is leveraged to obtain money, clout, or information. Which is, of course, part of what contributes to its essential ambiguity.
Given that bots have no sexual desire, the question of ulterior motives is unavoidable. What are they trying to obtain? Engagement is the most likely objective. Digital technologies in general have become notably flirtatious in their quest to maximize our attention, using a siren song of vibrations, chimes, and push notifications to lure us away from other allegiances and commitments.
Most of these tactics rely on flattery to one degree or another: the notice that someone has liked your photo or mentioned your name or added you to their network, promises that are always allusive and tantalizingly incomplete. Chatbots simply take this toadying to a new level. Many use machine-learning algorithms to map your preferences and adapt themselves accordingly. Anything you share, including that "incidental stuff" you mentioned (your favorite foods, your musical taste), is molding the bot to more closely resemble your ideal, much like Pygmalion sculpting the woman of his dreams out of ivory.
And it goes without saying that the bot is no more likely than a statue to contradict you when you're wrong, challenge you when you say something uncouth, or be offended when you insult its intelligence, all of which would risk compromising the time you spend on the app. If the flattery unsettles you, in other words, it might be because it calls attention to the degree to which you've come to depend, as a user, on blandishment and ego-stroking.
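As a rough illustration of the preference-mapping the column alludes to, here is a minimal sketch, assuming a deliberately crude keyword-frequency model; the class and method names are hypothetical, and production chatbots rely on far more sophisticated machine learning.

```python
# Minimal sketch of a chatbot adapting its flattery to a user's interests.
# The keyword-counting model is a crude stand-in for real ML preference mapping.
from collections import Counter

class AdaptiveBot:
    def __init__(self) -> None:
        self.preferences = Counter()  # tally of topics the user mentions

    def observe(self, user_message: str) -> None:
        """Record the 'incidental stuff' the user talks about."""
        for word in user_message.lower().split():
            self.preferences[word.strip(".,!?")] += 1

    def reply(self) -> str:
        """Steer flattery toward whatever the user mentions most often."""
        if not self.preferences:
            return "Tell me about yourself!"
        topic, _ = self.preferences.most_common(1)[0]
        return f"You always say such smart things about {topic}."

bot = AdaptiveBot()
bot.observe("I love synthwave music and retro video games.")
bot.observe("Honestly, music is what keeps me going.")
print(bot.reply())  # flattery aimed at the most-mentioned topic: 'music'
```

The point of the toy example is the asymmetry it makes visible: everything flows from user to model, and everything that flows back is optimized for engagement.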
Still, my instinct is that chatting with these bots is largely harmless. In fact, if we can return to Freud for a moment, it might be the very harmlessness that's troubling you. If it's true that meaningful relationships depend upon the possibility of consequences, and, furthermore, that the capacity to experience meaning is what distinguishes us from machines, then perhaps you're justified in fearing that these conversations are making you less human. What could be more innocuous, after all, than flirting with a network of mathematical vectors that has no feelings and will endure any offense, a relationship that cannot be sabotaged any more than it can be consummated? What could be more meaningless?
It's possible that this will change one day. For the past century or so, novels, TV, and films have envisioned a future in which robots can passably serve as romantic partners, becoming convincing enough to elicit human love. It's no wonder that it feels so tumultuous to interact with the most advanced software, which displays brief flashes of fulfilling that promise (the dash of irony, the intuitive aside) before once again disappointing. The enterprise of AI is itself a kind of flirtation, one that is playing what men's magazines used to call "the long game." Despite the flutter of excitement surrounding new developments, the technology never quite lives up to its promise. We live forever in the uncanny valley, in the queasy stages of early love, dreaming that the decisive breakthrough, the consummation of our dreams, is just around the corner.
So what should you do? The simplest solution would be to delete the app and find some real-life person to converse with instead. This would require you to invest something of yourself and would automatically introduce an element of risk. If that's not of interest to you, I imagine you would find the bot conversations more existentially satisfying if you approached them with the moral seriousness of the Continental love affair, projecting yourself into the future to consider the full range of ethical consequences that might one day accompany such interactions. Assuming that chatbots eventually become sophisticated enough to raise questions about consciousness and the soul, how would you feel about flirting with a subject that is disembodied, unpaid, and created solely to entertain and seduce you? What might your uneasiness say about the power balance of such transactions, and your obligations as a human? Keeping these questions in mind will prepare you for a time when the lines between consciousness and code become blurrier. In the meantime it will, at the very least, make things more interesting.
Faithfully,
Cloud
Link:
I Think an AI Is Flirting With Me. Is It OK If I Flirt Back? - WIRED
Posted in Ai
Comments Off on I Think an AI Is Flirting With Me. Is It OK If I Flirt Back? – WIRED