The Prometheus League
Breaking News and Updates
- Abolition Of Work
- Ai
- Alt-right
- Alternative Medicine
- Antifa
- Artificial General Intelligence
- Artificial Intelligence
- Artificial Super Intelligence
- Ascension
- Astronomy
- Atheism
- Atheist
- Atlas Shrugged
- Automation
- Ayn Rand
- Bahamas
- Bankruptcy
- Basic Income Guarantee
- Big Tech
- Bitcoin
- Black Lives Matter
- Blackjack
- Boca Chica Texas
- Brexit
- Caribbean
- Casino
- Casino Affiliate
- Cbd Oil
- Censorship
- Cf
- Chess Engines
- Childfree
- Cloning
- Cloud Computing
- Conscious Evolution
- Corona Virus
- Cosmic Heaven
- Covid-19
- Cryonics
- Cryptocurrency
- Cyberpunk
- Darwinism
- Democrat
- Designer Babies
- DNA
- Donald Trump
- Eczema
- Elon Musk
- Entheogens
- Ethical Egoism
- Eugenic Concepts
- Eugenics
- Euthanasia
- Evolution
- Extropian
- Extropianism
- Extropy
- Fake News
- Federalism
- Federalist
- Fifth Amendment
- Financial Independence
- First Amendment
- Fiscal Freedom
- Food Supplements
- Fourth Amendment
- Free Speech
- Freedom
- Freedom of Speech
- Futurism
- Futurist
- Gambling
- Gene Medicine
- Genetic Engineering
- Genome
- Germ Warfare
- Golden Rule
- Government Oppression
- Hedonism
- High Seas
- History
- Hubble Telescope
- Human Genetic Engineering
- Human Genetics
- Human Immortality
- Human Longevity
- Illuminati
- Immortality
- Immortality Medicine
- Intentional Communities
- Jacinda Ardern
- Jitsi
- Jordan Peterson
- Las Vegas
- Liberal
- Libertarian
- Libertarianism
- Liberty
- Life Extension
- Macau
- Marie Byrd Land
- Mars
- Mars Colonization
- Mars Colony
- Memetics
- Micronations
- Mind Uploading
- Minerva Reefs
- Modern Satanism
- Moon Colonization
- Nanotech
- National Vanguard
- NATO
- Neo-eugenics
- Neurohacking
- Neurotechnology
- New Utopia
- New Zealand
- Nihilism
- Nootropics
- NSA
- Oceania
- Offshore
- Olympics
- Online Casino
- Online Gambling
- Pantheism
- Personal Empowerment
- Poker
- Political Correctness
- Politically Incorrect
- Polygamy
- Populism
- Post Human
- Post Humanism
- Posthuman
- Posthumanism
- Private Islands
- Progress
- Proud Boys
- Psoriasis
- Psychedelics
- Putin
- Quantum Computing
- Quantum Physics
- Rationalism
- Republican
- Resource Based Economy
- Robotics
- Rockall
- Ron Paul
- Roulette
- Russia
- Sealand
- Seasteading
- Second Amendment
- Seychelles
- Singularitarianism
- Singularity
- Socio-economic Collapse
- Space Exploration
- Space Station
- Space Travel
- Spacex
- Sports Betting
- Sportsbook
- Superintelligence
- Survivalism
- Talmud
- Technology
- Teilhard de Chardin
- Terraforming Mars
- The Singularity
- Tms
- Tor Browser
- Trance
- Transhuman
- Transhuman News
- Transhumanism
- Transhumanist
- Transtopian
- Transtopianism
- Ukraine
- Uncategorized
- Vaping
- Victimless Crimes
- Virtual Reality
- Wage Slavery
- War On Drugs
- Waveland
- Ww3
- Yahoo
- Zeitgeist Movement
- Prometheism
- Forbidden Fruit
- The Evolutionary Perspective
Category Archives: Ai
U.S. urged to invest more in AI; ex-Google CEO warns of China’s progress – Reuters
Posted: November 9, 2019 at 8:42 am
WASHINGTON (Reuters) - U.S. government funding in artificial intelligence has fallen short and the country needs to invest in research, train an AI-ready workforce and apply the technology to national security missions, a government-commissioned panel led by Google's former CEO said in an interim report on Monday.
The National Security Commission on Artificial Intelligence (NSCAI), created by Congress last year, raised concerns about the progress China has made in this area. It also said the U.S. government still faces enormous work before it can transition AI from a promising technological novelty into a mature technology integrated into core national security missions.
"The commission thinks an allied effort on AI in the realm of national security is important," Robert Work, vice chairman of the NSCAI and a former deputy secretary of defense, told reporters. The NSCAI has spoken with Japan, Canada, the United Kingdom, Australia and the European Union, Work said.
China is investing more than the United States in AI, said the report, which referred to the Asian nation more than 50 times.
China takes advantage of the openness of U.S. society in numerous ways - some legal, some not - to transfer AI know-how, the report said, at a time of heightened tensions between the countries.
A spokeswoman for China's embassy in Washington did not immediately return a request for comment.
"China is ahead in two areas. One is in the face recognition surveillance area. And another one is in financial technology. This does not mean that they're ahead (in) AI overall," Eric Schmidt, the panel's chairman and one of several U.S. tech industry executives on the commission, told reporters.
A poll the commission conducted of researchers found that China is a "fast follower" but that "the best and the most original papers are still occurring in the West," Schmidt said. There is still a way forward for the United States to win the high-stakes technological race, he said.
Schmidt is a technical advisor to Google's parent Alphabet Inc (GOOGL.O). He was previously the company's executive chairman and before that was Google's chief executive officer. Others on the commission include Andrew Jassy, CEO of Amazon Web Services (AMZN.O), and Safra Catz, CEO of Oracle Corp (ORCL.N).
Part of the commission's report addressed whether the United States should restrict American cooperation with Chinese AI researchers, including through visa and export controls. The challenge U.S. officials face is that American industry and academic leaders have said that any such restrictions would harm the U.S. economy, the report said.
The commissioners did not specify solutions, saying instead that the choice need not be a binary one between cooperating and disentangling.
It said, however, the United States should be open to cooperating with China on promoting the responsible use of AI, including for example jointly banning use of AI to authorize the launch of nuclear weapons.
The commission also expressed concern that China, by allegedly using AI to violate human rights, will set a bad example for authoritarian regimes. It noted that besides China, at least 74 other countries are also engaging in AI-powered surveillance, including half of advanced liberal democracies.
Calling attention to activism by tech workers who have protested industry partnerships with the U.S. military, the report said "ethics and strategic necessity are compatible with one another." There is widespread support for making the technology unbiased and safe, the report said, but "the Commission is concerned that debate will paralyze AI development."
Last month, the Pentagon awarded a controversial contract to Microsoft Corp (MSFT.O), protested by some of the company's workers, that will help the military better access data and cloud computing services from battlefields and other remote locations.
The final report from the commission will be ready in about a year, the commission said.
Reporting by Jeffrey Dastin and Nandita Bose; Additional reporting by Paresh Dave and Mike Stone; Editing by Paul Simao
AI Can Help You – And Your Boss – Maximize Your Potential. Will You Trust It? – Forbes
Posted: October 20, 2019 at 10:32 pm
Hands of robot and human touching on global virtual network connection future interface. Artificial intelligence technology concept.
Would you trust an Artificial Intelligence (AI) to tell you how to become more effective and successful at your job? How would you feel if you knew your HR department uses AI to determine whether you are leadership material? Or that an AI just suggested to your boss that she should treat you better or else you might soon quit and join a competitor, well before the thought of jumping ship entered your mind?
Meet Yva, introduced by her creator David Yang in this fascinating podcast discussion.
David Yang is an impressive serial entrepreneur: he has launched twelve companies, beginning when he was in fourth grade. David started training as a physicist, to follow in his parents' footsteps. He won math and physics Olympiads; then his first entrepreneurial detour distracted him from his studies for a while and sparked his passion for computer science and AI. It's really worth hearing the story in David's own words, especially his concern about possibly disappointing his parents even as he was launching a hugely successful entrepreneurial and scientific career.
Yva, David's latest creation, is an AI-powered people analytics platform, a remarkable example of the powerful role that AI is starting to play in the workplace, with the ethical implications that quickly come to the fore.
Yva's neural network can mine and analyze workers' activities across a range of work applications: email, Slack, G-Suite, GitHub. With these data, the AI can pick up a treasure trove of nuanced insights about employee behaviors: how quickly an employee responds to certain types of emails; or the tree structure of her communications: how many to subordinates, how many to peers or superiors, how many outside the company; and much more.
These insights can provide value to an organization in two main ways:
First, in identifying which employees have high potential to be great performers or strong leaders. The company tells Yva which individuals it currently considers its best performers; Yva's neural network identifies which behaviors are characteristic of these top performers, and then finds other employees who exhibit some if not all of the same traits. It can tell you who has the potential to become a top salesperson, or an extremely effective leader; and it can tell you which characteristics they already possess and which ones they need to develop.
Second, Yva helps minimize regrettable attrition by identifying employees who are a high resignation risk. A decision to resign never comes out of the blue. First the employee will feel increasingly frustrated or burnt out; then she will become more open to considering other opportunities; then she will actively seek another job. Each stage carries subtle changes in our behavior: maybe how early we send out our first email in the morning, or how quickly we respond, or something in the tone of our messages. We can't detect these changes, but Yva can.
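Yva's actual models are proprietary and not described in the podcast; purely as an illustration of the first capability, one could score employees by how similar their behavioral statistics are to an averaged profile of labeled top performers. Every name, feature, and number below is invented.

```python
import math

# Hypothetical behavioral features per employee (illustrative values):
# (avg emails/day, median response time in minutes, share of external messages)
employees = {
    "alice": (42.0, 18.0, 0.35),
    "bob":   (15.0, 95.0, 0.05),
    "carol": (39.0, 22.0, 0.30),
}
top_performers = ["alice"]  # labels supplied by the company

def centroid(vectors):
    """Average the feature vectors of the known top performers."""
    n = len(vectors)
    return tuple(sum(v[i] for v in vectors) / n for i in range(len(vectors[0])))

def cosine(a, b):
    """Cosine similarity between two feature vectors.
    (A real system would normalize features to comparable scales first.)"""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

profile = centroid([employees[name] for name in top_performers])
for name, feats in employees.items():
    if name not in top_performers:
        print(f"{name}: similarity to top-performer profile = {cosine(feats, profile):.2f}")
```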
For large companies, reducing regrettable attrition is Yva's top contribution: losing and having to replace valuable employees represents a substantial cost. This, notes David Yang, makes the Return On Investment from deploying Yva very easy to identify. For smaller companies, especially in their growth stage, attrition is less of a concern and the greater value comes from the way Yva helps them build talent and leadership from within their ranks.
Given the ubiquitous concerns that technology will eliminate jobs, it's refreshing and reassuring to hear that Yva instead proves its value by boosting employee retention.
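The attrition signal described above can also be sketched in a few lines: compare an employee's recent behavior against her own long-run baseline and flag a sustained shift. Again, the metric, numbers, and threshold are hypothetical, not Yva's.

```python
import statistics

def drift_score(baseline, recent):
    """How far the recent average sits from the baseline, in standard deviations."""
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    return abs(statistics.mean(recent) - mean) / stdev

# Hypothetical metric: minutes after 8:00 at which the first email is sent each day.
baseline_weeks = [5, 12, 8, 10, 7, 9, 11, 6, 8, 10]   # long-run behavior
recent_week = [35, 40, 28, 45, 33]                     # last five workdays

score = drift_score(baseline_weeks, recent_week)
if score > 2.0:  # the threshold is an illustrative choice
    print(f"behavioral drift detected (z = {score:.1f}): flag for attrition review")
```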
Yva can also help the individual worker; it can create your personal dashboard with insights and suggestions on how you can change your behavior to become more effective and successful.
There is a trade-off. By default, Yva will respect your privacy, working on anonymized data. But the more individual data you are willing to share, the more Yva can help. The choice is yours.
David Yang notes some interesting geographic differences in the share of employees who opt in; he also notes that across the board, close to one employee in five remains adamantly opposed to disclosing her individual data.
Privacy concerns are fully understandable when faced with an AI that can drive important HR decisions. But is it smart to trust humans more than AI? David Yang notes that AI can help eliminate the human biases that often influence hiring and promotion decisions. Provided, he stresses, that the AI gets trained in the right way: only on final outcomes, on objective performance criteria, without feeding into it intermediate variables such as race, gender or age, which could create a built-in bias in the AI itself.
David Yang, unsurprisingly, is very bullish on the role that AI can play in people analytics and in our lives. Bullish, but very realistic and thoughtful, and willing to put himself on the line: at the end of the podcast discussion he talks of the role that Morpheus, another AI, plays in his personal life.
David thinks that in the future smaller companies (500 employees or less) will rely completely on AI-powered people analytics platforms; he believes that AI will play a major role in leveraging the creativity and efficiency of individuals, while human HR professionals will focus on business-specific HR-partner roles. He has a horse in the race: Yva. But there seems to be little doubt that whatever role AI takes in HR and people analytics, it will be one of its most powerful influences in our professional and personal lives.
Every HR Leader Needs AI On Their Career Roadmap Part II – Forbes
Posted: at 10:32 pm
Bottom Line: The stronger a CHRO's and company's AI digital dexterity skills, the stronger they are at overcoming the talent management challenges they're facing today.
For HR leaders to excel at improving AI digital dexterity skills across their organizations, they need to learn from the shortcomings of existing learning management systems, which continue to see dismal adoption. First, all employees, especially Millennials and Gen Z, need to understand why they should invest the time to take certain classes and courses, and what's in it for them after they have learned those skills. Second, employees want to learn about AI in a self-service mode, where they have greater control over the pace, review, and mastery of each lesson. Third, self-service learning tailored to employees' learning preferences has proven to be more effective than participating in classes led by HR team members who themselves may not be aware of all the available options.
Easily discovering different career paths, validating them with peers, and finding mentors who can serve as North Stars, all in a self-service manner, is what the modern employee experience needs to include. This is only feasible with a machine-driven AI platform, because the options can number in the millions if not billions. Manual attempts or a patchwork of point solutions will continue to be sub-par and rejected by employees who want greater control over their learning experiences.
HR needs to have an urgency about providing a broad base of AI digital dexterity training if they are going to close talent gaps, improve recruiting, retain valuable employees, and offer meaningful career paths. Job candidates are prioritizing AI training and professional development as a must-have because they know it's essential for their career growth. When HR leaders and their teams know which AI techniques and technologies to use when, they're better equipped to battle biases, close talent gaps, and make a greater contribution.
How AI Digital Dexterity Helps Tame The Talent Crisis
Attracting high-potential candidates, reskilling and upskilling workforces, and equipping HR teams with the knowledge they need to manage talent at scale are the catalysts driving organizations to add AI training. High-potential candidates are attracted to employers who create an intellectually rich and vibrant culture they can learn and grow within. HR leaders are seeing that the greater their commitment to upskilling and reskilling their workforce, the greater the energy they're unleashing. Taking an interest in an employee's growth and career path is rocket fuel for any business. And it's the catalyst that CEOs are relying on to propel innovation too. PwC's 22nd Annual Global CEO Survey found that 55% of CEOs say upskilling and reskilling are necessary for their organizations to innovate and scale.
Forward-thinking HR leaders see AI expertise and knowledge as a competitive advantage. They're able to overcome many of talent management's most difficult challenges today by taking a pragmatic approach to using AI techniques and technologies. Findings from the recent Harris Interactive survey conducted in collaboration with Eightfold, titled "Talent Intelligence And Management Report 2019-2020," reflect the challenges HR leaders face and why AI training with a strong focus on how to use matching technologies is a must-have today:
(The survey findings appeared here as a series of four charts from the Harris Interactive/Eightfold "Talent Intelligence And Management Report 2019-2020.")
Conclusion
HR leaders need to set the pace when it comes to adopting AI digital dexterity skills and scaling them across their organizations. Upskilling and reskilling are essential for all organizations to stay competitively strong from an innovation standpoint. Taking an active interest in employees' career direction and growth unleashes high energy levels and improves morale while increasing an entire organization's AI expertise and mastery. With AI skills increasing, HR can be more discerning and focused on taming the bias beast as well. Being strong with AI skills from an organizational standpoint helps solve the talent crisis every organization faces, too, starting with improving qualified talent pipelines.
Adopting AI in Health Care Will Be Slow and Difficult – Harvard Business Review
Posted: at 10:32 pm
Executive Summary
Artificial intelligence, including machine learning, presents exciting opportunities to transform the health and life sciences spaces. It offers tantalizing prospects for swifter, more accurate clinical decision making and amplified R&D capabilities. However, open issues around regulation and clinical relevance remain, causing both technology developers and potential investors to grapple with how to overcome today's barriers to adoption, compliance, and implementation. This article explains the key obstacles and offers ways to overcome them.
Here are key obstacles to consider and how to handle them:
Developing regulatory frameworks. Over the past few years, the U.S. Food and Drug Administration (FDA) has been taking incremental steps to update its regulatory framework to keep up with the rapidly advancing digital health market. In 2017, the FDA released its Digital Health Innovation Action Plan to offer clarity about the agency's role in advancing safe and effective digital health technologies, and addressing key provisions of the 21st Century Cures Act.
The FDA has also been enrolling select software-as-a-medical-device (SaMD) developers in its Digital Health Software Precertification (Pre-Cert) Pilot Program. The goal of the Pre-Cert pilot is to help the FDA determine the key metrics and performance indicators required for product precertification, while also identifying ways to make the approval process easier for developers and help advance healthcare innovation.
Most recently, the FDA released in September its Policy for Device Software Functions and Mobile Medical Applications, a series of guidance documents that describe how the agency plans to regulate software that aids in clinical decision support (CDS), including software that utilizes machine-learning-based algorithms.
In a related statement from the FDA, Amy Abernethy, its principal deputy commissioner, explained that the agency plans to focus regulatory oversight on higher-risk software functions, such as those used for more serious or critical health circumstances. This also includes software that utilizes machine-learning-based algorithms, where users might not readily understand the program's logic and inputs without further explanation.
An example of CDS software that would fall under the FDA's higher-risk oversight category would be one that identifies a patient at risk for a potentially serious medical condition, such as a postoperative cardiovascular event, but does not explain why the software made that identification.
Achieving FDA approval. To account for the shifting FDA oversight and approval processes, software developers must carefully think through how to best design and roll out their product so it's well positioned for FDA approval, especially if the software falls under the agency's higher-risk category.
One factor that must be considered is the fact that AI-powered therapeutic or diagnostic tools, by nature, will continue to evolve. For example, it is reasonable to expect that a software product will be updated and change over time (e.g., security updates, adding new features or functionalities, updating an algorithm, etc.). But given the product has technically changed, its FDA approval status could be put at risk after each update or new iteration.
In this case, planning to take a version-based approach to the FDA approval process might be in the developer's best interest. In this approach, a new version of the software is created each time the software's internal ML algorithm(s) is trained on a new set of data, with each new version being subjected to independent FDA approval.
Although cumbersome, this approach sidesteps FDA concerns about approving software products that functionally change post-FDA approval. These strategic development considerations are crucial for solutions providers to consider.
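To make the version-based approach concrete, here is a minimal sketch of the bookkeeping it implies: every retraining run freezes a new version tied to a hash of its training data, and each version carries its own approval status. The class names and fields are hypothetical, not part of any FDA process.

```python
import hashlib
from dataclasses import dataclass, field

@dataclass
class ModelVersion:
    version: int
    training_data_hash: str
    fda_status: str = "pending"  # each version needs its own clearance

@dataclass
class ModelRegistry:
    versions: list = field(default_factory=list)

    def retrain(self, training_data: bytes) -> ModelVersion:
        """Every retraining run freezes a new, separately reviewable version."""
        digest = hashlib.sha256(training_data).hexdigest()
        mv = ModelVersion(version=len(self.versions) + 1, training_data_hash=digest)
        self.versions.append(mv)
        return mv

registry = ModelRegistry()
v1 = registry.retrain(b"2019-Q3 imaging dataset")  # placeholder payloads
v2 = registry.retrain(b"2019-Q4 imaging dataset")
print(v1, v2, sep="\n")
```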
Similarly, investors must also have a clear understanding of a company's product development plans and intended approach for continual FDA approval, as this can provide clear differentiation from other competitors in the same space. Clinicians will be hard pressed to adopt technologies that haven't been validated by the FDA, so investors need to be sure the companies they are considering supporting have a clear product development roadmap, including an approach to FDA approvals, as software products themselves and regulatory guidelines continue to develop.
AI is a black box. Besides current regulatory ambiguity, another key issue that poses challenges to the adoption of AI applications in the clinical setting is their black-box nature and the resulting trust issues.
One challenge is tracking: If a negative outcome occurs, can an AI application's decision-making process be tracked and assessed? For example, can users identify the training data and/or machine learning (ML) paradigm that led to the AI application's specific action? To put it more simply, can the root cause of the negative outcome be identified within the technology so that it can be prevented in the future?
From reclassifying the training data to redesigning the ML algorithms that learn from the training data, the discovery process is complex and could even result in the application being removed from the marketplace.
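One plausible, and here entirely illustrative, building block for that kind of traceability is an append-only audit log that ties every prediction back to the model version and training-data hash that produced it. Nothing below reflects any particular vendor's implementation; the fields are invented.

```python
import json
import time

def log_prediction(model_version, training_data_hash, inputs, output, path="audit.jsonl"):
    """Append an audit record so any outcome can be traced back to the
    exact model version and training set that produced it."""
    record = {
        "timestamp": time.time(),
        "model_version": model_version,
        "training_data_hash": training_data_hash,
        "inputs": inputs,
        "output": output,
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

log_prediction(
    model_version="v2",
    training_data_hash="sha256:ab12...",          # from the model registry above
    inputs={"age": 67, "procedure": "CABG"},       # illustrative fields
    output={"risk": "high", "score": 0.83},
)
```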
Another concern raised about the black-box aspect of AI systems is that someone, either on purpose or by mistake, could feed incorrect data into the system, causing erroneous conclusions (e.g., misdiagnosis, incorrect treatment recommendations). Luckily, detection algorithms designed to identify doctored or incorrect inputs could reduce, if not eliminate, this risk.
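Those detection algorithms can take many forms; a very simple illustrative one is an out-of-distribution check that rejects inputs whose features sit implausibly far from the training data's statistics. The features, statistics, and threshold below are all invented.

```python
def out_of_distribution(sample, training_stats, z_limit=3.0):
    """Flag any feature that sits more than z_limit standard deviations
    from its mean in the training data."""
    flags = []
    for name, value in sample.items():
        mean, stdev = training_stats[name]
        if abs(value - mean) / stdev > z_limit:
            flags.append(name)
    return flags

# Illustrative training-set statistics: feature -> (mean, standard deviation)
training_stats = {"heart_rate": (75.0, 12.0), "systolic_bp": (120.0, 15.0)}

suspect = {"heart_rate": 300.0, "systolic_bp": 118.0}  # likely a doctored or garbled input
bad_fields = out_of_distribution(suspect, training_stats)
if bad_fields:
    print("rejecting input; implausible fields:", bad_fields)
```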
A bigger challenge posed by AI systems' black-box nature is that physicians are reluctant to trust (due in part to malpractice-liability risk), and therefore adopt, something that they don't fully understand. For example, there are a number of emerging AI imaging diagnostic companies with FDA-approved AI software tools that can assist clinicians in diagnosing and treating conditions such as strokes, diabetic retinopathy, intracranial hemorrhaging, and cancer.
However, clinical adoption of these AI tools has been slow. One reason is that physician certification bodies such as the American College of Radiology (ACR) have only recently started releasing formalized use cases for how AI software tools can be reliably used. Patients are also likely to have trust issues with AI-powered technologies. While they may accept the reality that human errors can occur, they have very little tolerance for machine error.
While efforts to help open up the black box are underway, AI's most useful role in the clinical setting during this early period of adoption may be to help providers make better decisions rather than replacing them in the decision-making process. Most physicians may not trust a black box, but they will use it as a support system if they remain the final arbiter.
To gain physicians' trust, AI-software developers will have to clearly demonstrate that when the solutions are integrated into the clinical decision-making process, they help the clinical team do a better job. The tools must also be simple and easy to use. Applying AI initially to lower-stakes tasks, such as billing and coding, before higher-stakes ones (e.g., diagnostics, AI-assisted treatments) should also help increase trust over time.
At the industry level, there needs to be a concerted effort to publish more formalized use cases that support AI's benefits. Software developers and investors should be working with professional associations such as the ACR to publish more use cases and develop more frameworks to spur industry adoption and gain more credibility.
Lower hurdles in life sciences. While AI's application in the clinical care setting still faces many challenges, the barriers to adoption are lower for specific life sciences use cases. For instance, ML is an exceptional tool for matching patients to clinical trials and for drug discovery and identifying effective therapies.
But whether it's in a life sciences capacity or the clinical care setting, the fact remains that many stakeholders stand to be impacted by AI's proliferation in health care and life sciences. Obstacles certainly exist for AI's wider adoption, from regulatory uncertainties to the lack of trust to the dearth of validated use cases. But the opportunities the technology presents to change the standard of care, improve efficiencies, and help clinicians make more informed decisions are worth the effort to overcome them.
AI Is Hard But Worth the Investment – PCMag.com
Posted: at 10:32 pm
In 1955, scientists behind the first AI research project believed it would take a 10-man team two months to develop thinking machines that could replicate the problem-solving capabilities of the human mind. But six decades, thousands of projects, and billions of dollars later, human-level artificial intelligence remains an elusive goal.
The difficulty of achieving human-level AI has split the field into two subdomains: artificial general intelligence (AGI), the original vision of "thinking" machines; and artificial narrow intelligence, a limited but easier-to-achieve application now found in many industries.
The more we make advances in AI, the more we come to appreciate the complexity of the human brain. But does that mean we should abandon the pursuit of artificial general intelligence?
Many scientists have become disillusioned about cracking the code of AGI. In his latest book, Architects of Intelligence, futurist and author Martin Ford asked 23 prominent AI scientists and thought leaders how long it would take to achieve AGI. Five refrained from giving an estimate, and most of the remaining 18 preferred to guess anonymously. Their mean estimate for AGI was 2099, 80 years from now.
"We have been working on AI problems for over 60 years. And if the founders of the field were able to see what we tout as great advances today, they would be very disappointed, because it appears we have not made much progress. I don't think that AGI is in the near future for us at all," said Daniela Rus, Director of the MIT Computer Science and AI Lab (CSAIL), one of the scientists Ford interviewed.
Other scientists argue that pursuing AGI is pointless. "We don't need to duplicate humans. That's why I focus on having tools to help us rather than duplicate what we already know how to do. We want humans and machines to partner and do something that they cannot do on their own," Peter Norvig, Director of Research at Google and the co-author of the leading AI textbook, said in a 2016 interview with Forbes.
Deep-learning algorithms fail at simple, general problem-solving: tasks that humans learn at a very early age, such as understanding the meaning of text and navigating open environments.
But deep learning is efficient in narrow applications such as computer vision, cancer detection, and speech recognition. In many cases, it surpasses human performance considerably. Most of the current research and funding in AI is focused on these narrow AI or intelligence augmentation applications, the kind Norvig suggests.
While narrow AI makes inroads into new fields every day, the few AI labs still focused on artificial general intelligence continue to burn through mounds of cash and seem to make very little progress toward human-level AI (if any).
Alphabet-owned AGI lab DeepMind incurred $570 million in losses in 2018 alone, according to documents it filed with the UK's Companies House registry in August. OpenAI, another AI lab that aims to create AGI, recently had to shed its nonprofit structure to find investors for its expensive research. Both labs have accomplished remarkable feats, including creating bots that play complex board and video games. But they're still nowhere near creating artificial general intelligence.
So, should we abandon the pursuit of AGI? Or should we focus on finding practical (and profitable) applications for current narrow AI technologies and stop funding AGI research?
Often overlooked in the failure to create AGI are the big rewards we've reaped in six decades of AI research. We owe many scientific advances and tools that we use every day to failed efforts to replicate the human brain.
One of my favorite quotes in this regard comes from Artificial Intelligence: A Modern Approach, the famous AI book Norvig co-authored with distinguished scientist Stuart Russell. "[W]ork in AI has pioneered many ideas that have made their way back to mainstream computer science, including time-sharing, interactive interpreters, personal computers with windows and mice, rapid development environments, the linked list data type, automatic storage management, and key concepts of symbolic, functional, declarative, and object-oriented programming," Norvig and Russell wrote.
We would have had none of those things (nor smartphones, smart speakers, and smart assistants) had it not been for scientists chasing the wild dream of creating human-level AI.
Artificial neural networks (ANN), the main component of deep-learning algorithms, drew inspiration from the human brain and were meant to replicate its functions. Today, ANNs are not nearly as efficient and versatile as their biological counterparts. Nonetheless, they've yielded many important applications in fields such as computer vision, natural language processing, machine translation, and voice synthesis. And many scientific fields, including neuroscience, cognitive science, and other areas that have to do with the study of the human brain have benefited from the research in artificial general intelligence.
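For readers who have never seen one, the forward pass of a small feedforward network is only a few lines of code. The toy below, with made-up weights and no training loop, just shows the "layers of weighted sums plus nonlinearities" structure the article refers to.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    """One fully connected layer: a weighted sum per neuron, passed through
    a nonlinearity -- a loose analogue of neurons firing."""
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

x = [0.5, -1.2]                                              # two input features
hidden = layer(x, [[0.4, 0.9], [-0.7, 0.2]], [0.1, -0.3])    # two hidden neurons
output = layer(hidden, [[1.5, -2.0]], [0.05])                # one output neuron
print(f"network output: {output[0]:.3f}")
```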
So if history is any guide, the pursuit of artificial general intelligence will yield many benefits for humanity. Undoubtedly, we'll encounter many more hurdles, and we might never get to the finish line. But even if we never reach the stars, the journey will be rewarding.
AI And CRM: Will Customer Management Get Easier? – Forbes
Posted: at 10:32 pm
If customer experience is the center of digital transformation, customer relationship management (CRM) must be central to managing that experience. But mentioning the term CRM in your meeting room often leads to groans of disgust rather than coos of excitement. Indeed, most companies have a love/hate relationship with their customer management software. It allows them to keep in touch with the people keeping them in business. But in many cases, it's sluggish, time-sucking, and confusing: not words you'd want used to describe the tech most central to your company's success.
Enter AI. Yes, AI has obviously played a role in past CRM iterations. But new developments in natural language processing and machine learning could (and will) help make customer management easier than ever before. The following are a few ways companies are using AI with their CRM platforms to improve customer management, and how software companies like Salesforce are creating solutions to meet the needs of their customers.
It Can Help Save Time
None of us has the patience to click through multiple screens to do mind-numbing work. That includes your sales team. In a recent survey on the top challenges of CRM tools, the most common complaint was the time it takes to enter data and keep it up to date. In fact, 46.5 percent of those surveyed named this as a problem, higher than CRM platforms being expensive (30 percent), hard to learn (28 percent), or difficult to configure (15 percent). Why is that important? Because when software is too clunky, time-consuming, and difficult to use, it (wait for it) won't get used. This leads to outdated data, incomplete data, and unusable data, which is, ultimately, pointless.
Salesforce must have read the survey. Its newest iteration of AI, Einstein Search for Sales and Service, claims to reduce clicks and page loads by 50 to 80 percent for frequently used tasks. That's the type of change that turns CRM from a necessary investment into a profitable one.
However, Salesforce isn't the only one trying to tackle this problem. Microsoft Dynamics 365 has built a dashboard that helps companies understand how much time users, as well as customers, are able to save through the use of AI-powered chatbots. This type of data will be critical for companies to optimize customer experience and free resources to be more efficient at work.
Taking a Cue from Everyday People
Google had a search satisfaction level of 82 percent in 2018. Customer management platforms? Not so much. Though CRM Magazine says more than 90 percent of companies with 10+ employees utilize CRM platforms, the jury is still out on how effective they are at surfacing the right lead or even simply accurate information. AI could help keep data clean, centralized, and easy to find.
Again, there are companies making this easier. Platforms like Oracle's Digital Assistant (for CRM) hope to improve this by using AI to make customer searches fast and accurate. Using NLP, for instance, users can search phrases like "open opportunities in Los Angeles" rather than using challenging search terms like "+Lead +Open +Nonconverted +Los Angeles +California +Myname". Imagine how many more employees will be willing to use the software just because it's easy to use.
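Production assistants like Oracle's use trained language models whose internals aren't public; the keyword sketch below only illustrates the input/output shape of the idea, turning a natural-language phrase into a structured CRM filter. The field names are invented.

```python
import re

def parse_query(text):
    """Map a natural-language request onto a structured CRM filter.
    Real NLP systems use trained models; this keyword matcher just
    demonstrates the shape of the transformation."""
    filters = {}
    if re.search(r"\bopen\b", text, re.I):
        filters["status"] = "Open"
    if re.search(r"\bopportunit", text, re.I):
        filters["record_type"] = "Opportunity"
    match = re.search(r"\bin ([A-Z][\w ]+)$", text)
    if match:
        filters["city"] = match.group(1).strip()
    return filters

print(parse_query("open opportunities in Los Angeles"))
# {'status': 'Open', 'record_type': 'Opportunity', 'city': 'Los Angeles'}
```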
Get Personal
We all know personalization is driving sales in the marketing world, but how about sales and customer service? Customer management requires the same type of personal touch, if not more so, as huge deals, tempers, and personalities often collide. Using AI, customer management is incorporating personalized intel. For instance, it's now possible for contextual data to show up on a call screen before a salesperson answers the phone, allowing them to prioritize calls, talk more personally to those calling, and even divert calls to voicemail if they know the caller is a notoriously cold fish. Less wasted time is more potential money in customer management.
But personalization isn't just about knowing customers; it's about knowing the preferences of the company and salespeople overall. Einstein's newest search capabilities also make it possible for users (at the company and individual level) to tailor their preferences for search, and the AI will improve its ability to return those preferences over time.
Build a Predictive Pipeline
Obviously, one of the most important roles of customer management is converting leads to sales, potential customers to long-term loyalists. Using vast amounts of data, AI can help determine which leads are the strongest. It helps you determine the types of data that indicate a solid lead (both inside your database and outside of it), what actions you should take to convert that person based on their past actions, and which leads you can kick to the curb.
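Commercial lead-scoring models are trained on historical wins and losses; as a stand-in, here is a minimal logistic-scoring sketch with hand-written weights. Every feature, weight, and lead below is fabricated for illustration.

```python
import math

# Illustrative weights a trained model might learn; in practice these come
# from fitting on historical converted/lost leads, not from hand-tuning.
WEIGHTS = {"visited_pricing_page": 1.8, "opened_last_3_emails": 1.1,
           "company_size_fit": 0.9, "days_since_last_touch": -0.05}
BIAS = -2.0

def lead_score(lead):
    """Logistic model: squash a weighted feature sum into a 0-1 conversion probability."""
    z = BIAS + sum(WEIGHTS[k] * v for k, v in lead.items())
    return 1.0 / (1.0 + math.exp(-z))

leads = {
    "acme":    {"visited_pricing_page": 1, "opened_last_3_emails": 3,
                "company_size_fit": 1, "days_since_last_touch": 2},
    "initech": {"visited_pricing_page": 0, "opened_last_3_emails": 0,
                "company_size_fit": 0, "days_since_last_touch": 60},
}
for name, feats in sorted(leads.items(), key=lambda kv: -lead_score(kv[1])):
    print(f"{name}: {lead_score(feats):.0%} estimated conversion probability")
```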
Globally, CRM spending is expected to hit more than $55.2 billion this year. The report claims that Salesforce has nearly 20 percent of that market share, followed (far behind) by SAP at less than 10 percent. Other leading players include Microsoft, Oracle, and Adobe. All of these companies are making significant investments in embedding AI into their platforms. Microsoft, SAP, and Adobe are even working to create a common data platform as part of their Open Data Initiative (ODI) to streamline how data can be used for AI and analytics across platforms, as many companies use more than one CRM/ERP/CEM platform.
Clearly, most of us know the value of customer management; the problem is that we're using less-than-stellar tools, or using them in a way that is less than optimal. At the end of the day, customer management is about knowing what data to gather about your leads, keeping it up to date, and gaining insights from it in the fastest way possible. AI is a clear partner for CRMs and for companies looking to build a more loving relationship with customer management and their customers both.
Artificial Intelligence Is on the Case in the Legal Profession – Observer
Posted: at 10:32 pm
AI robot lawyers are here, and they aren't going away. (Photo: Pixabay/Gerd Altmann)
When you hear the phrase "robot lawyer," what comes to mind?
My brain conjures up an image of C-3PO in a three-piece suit, shuffling around a courtroom while throwing out cross-examination quips such as: "Don't call me a mindless philosopher, you overweight glob of prosecuting witness grease!"
SEE ALSO: Banks Will Replace 200,000 Workers With Robots by Next Decade
But that's not exactly the case (yet).
Artificial intelligence (AI) is, in fact, becoming a mainstay component of the legal profession. In some circumstances, this analytics-crunching technology is using algorithms and machine learning to do work that was previously done by entry-level lawyers. (What does that say about entry-level lawyers?)
Apparently, AI robot lawyers are here, and they're not going away.
Still, Elon Musk has warned that AI is a bigger threat to humanity than nuclear weapons. But before we start worrying about how the robot lawyer uprising won't be televised (it will happen slowly and quietly in the middle of the night), we connected with Lane Lillquist, the co-founder and CTO of legal tech company InCloudCounsel, to give us his thoughts on what we need to fear, and not fear, when it comes to lawyer robots.
"AI's application to the legal profession is very similar," Lillquist explained. "It can make contract review more accurate, enable us to take a more data-driven approach to the practice of law and make the legal space overall more efficient."
Lillquist sees robot lawyers, a.k.a. artificial intelligence used in the legal profession, as akin to the simple tools that make everyday life easier and more productive, along the lines of spellcheck or autocorrect.
"AI's present capability meets a sizable need in the legal space by automating a number of high-volume, recurring tasks that otherwise take lawyers' focus away from more meaningful work," Lillquist said. "Beyond this, the role of the lawyer is still vital to conducting quality legal work."
Over the next five years, Lillquist predicts the role of AI in the legal space will continue to be accomplishing narrow and specific tasks, such as finding terms in a set of documents or filling out certain forms.
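"Finding terms in a set of documents" is easy to picture in code. The sketch below uses plain regular expressions rather than the trained extraction models commercial tools rely on, and the clause patterns and documents are invented; it only illustrates the shape of the task.

```python
import re

# Clauses a reviewer might want surfaced; the patterns are illustrative.
CLAUSE_PATTERNS = {
    "indemnification": re.compile(r"\bindemnif(y|ies|ication)\b", re.I),
    "termination":     re.compile(r"\bterminat(e|ion)\b", re.I),
    "governing_law":   re.compile(r"\bgoverning law\b", re.I),
}

def find_terms(documents):
    """Scan each document for clause patterns and report what was found where."""
    return {name: [clause for clause, pat in CLAUSE_PATTERNS.items() if pat.search(text)]
            for name, text in documents.items()}

docs = {
    "msa.txt": "Either party may terminate this Agreement... governing law of Delaware",
    "nda.txt": "Recipient shall indemnify Discloser against all claims...",
}
print(find_terms(docs))
# {'msa.txt': ['termination', 'governing_law'], 'nda.txt': ['indemnification']}
```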
Take the company DoNotPay. The app trumpets that it's "the world's first robot lawyer."
"Fight corporations, beat bureaucracy and sue anyone at the press of a button," says DoNotPay's website.
The company has built an AI chatbot that interviews users about their legal problems, then uses their answers to complete and submit paperwork on their behalf.
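DoNotPay's actual question flows and templates are not public; the sketch below fakes the pattern with an invented parking-ticket appeal, with scripted questions feeding a document template.

```python
# A hypothetical intake flow in the spirit of DoNotPay's chatbot. All questions,
# field names, and template text here are invented for illustration.
QUESTIONS = [
    ("full_name", "What is your full name?"),
    ("ticket_id", "What is the parking ticket number?"),
    ("reason",    "Briefly, why do you believe the ticket is unjustified?"),
]

APPEAL_TEMPLATE = (
    "To whom it may concern,\n"
    "I, {full_name}, contest ticket {ticket_id}.\n"
    "Grounds for appeal: {reason}\n"
)

def run_interview(answer_fn):
    """Collect answers (answer_fn stands in for the chat UI) and fill the template."""
    answers = {key: answer_fn(prompt) for key, prompt in QUESTIONS}
    return APPEAL_TEMPLATE.format(**answers)

canned = {"What is your full name?": "Jane Doe",
          "What is the parking ticket number?": "PT-10923",
          "Briefly, why do you believe the ticket is unjustified?": "Signage was obscured."}
print(run_interview(canned.get))
```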
Some might think AI legal services, such as DoNotPay, will eventually replace humans.
But Lillquist doesn't think so.
He sees the rise of legal artificial intelligence as on par with the initial rise of ATMs: the number of bank tellers actually increased, because it became easier to open smaller bank branches in more locations.
"AI is a tool. Having a better tool doesn't mean we're going to have less people doing an ever increasing amount of work," said Lillquist. "Enabled by technology, lawyers are more productive, allowing more legal matters to be represented around the world."
He sees AI continually changing the legal profession, requiring lawyers to possess an increasing number of skills to make use of such technology and remain competitive in the market. This wave of technology will also require the creation of more data analytics jobs that can tap into legal and business datasets and generate actionable insights to improve the practice of law.
"We're already seeing a rise of legal technology companies providing alternative legal services backed by AI and machine learning that are enhancing how lawyers practice law," said Lillquist. "Law firms will begin building their own engineering departments and product teams, too."
"Deep legal expertise is required to create technology that successfully operates in the legal space, and that knowledge resides in humans," he added.
In turn (or in theory), AI-enabled legal tech solutions will allow human lawyers to complete more work at a higher degree of accuracy, freeing up bandwidth to focus on different and/or more complex types of work that can create substantial value for their companies and clients.
"AI will also be able to handle repetitive tasks of increasing complexity, especially in data extraction, which will require new systems to be built to extract value out of new kinds of data," Lillquist explained.
Another factor to consider is that artificial intelligence will make legal assistance more affordable. Again, look at what can be done with an app such as DoNotPay compared to what those types of services would cost from a human lawyer.
But the big pink elephant in the courtroom goes back to Musk's apocalyptic warning about AI running amok. Shouldn't the same cautionary tale be applied to the legal profession?
Lillquist doesn't agree. Although he does believe that with great AI power comes great AI responsibility, a human hand still needs to be involved in the process. Case in point: AI isn't going to give us the answer to questions requiring strong creative thinking or a value judgment.
"It's true that AI is creating more and more powerful tools, and legal AI can be dangerous to people who use it without fully understanding the ins and outs of the practice of law," Lillquist said. "They may use these tools blindly, exposing themselves to legal risk that they don't understand."
Still, AI can only do what it is narrowly trained to do; it's not creatively thinking about all angles of a problem.
"This is a big reason why I think lawyers will always be involved in the practice of law for the foreseeable future," he continued. "An AI-human paired team can accomplish more than either humans or machines are able to accomplish on their own."
So what does the future hold for us in our nation's dystopian courtrooms?
Lillquist foresees that AI will continue to improve and widen its currently narrow scope over the next couple of decades, impacting and expanding the practice of law in ways that we can't fully comprehend with our 2019 brains. This could include the ability to generate agreements, to mark up and negotiate a document, and to automatically administer and make appropriate filings.
"Software will continue to eat the world, and AI will help ensure the legal space achieves the same efficiencies that we have seen technology deliver to other industries," Lillquist said. "I'm excited to see how technology will continue to transform the legal industry in the future. My eyes are wide open; I'm continually amazed at the power of technological innovation."
How big data and AI work together – The Enterprisers Project
Posted: at 10:32 pm
Big data isn't quite the term de rigueur that it was a few years ago, but that doesn't mean it went anywhere. If anything, big data has just been getting bigger.
That once might have been considered a significant challenge. But now, it's increasingly viewed as a desired state, specifically in organizations that are experimenting with and implementing machine learning and other AI disciplines.
"AI and ML are now giving us new opportunities to use the big data that we already had, as well as unleash a whole lot of new use cases with new data types," says Glenn Gruber, senior digital strategist at Anexinet. "We now have much more usable data in the form of pictures, video, and voice [for example]. In the past, we may have tried to minimize the amount of this type of data that we captured because we couldn't do quite so much with it, yet [it] would incur great costs to store it."
There's a reciprocal relationship between big data and AI: the latter depends heavily on the former for success, while also helping organizations unlock the potential in their data stores in ways that were previously cumbersome or impossible.
"Today, we want as much [data] as we can get, not only to drive better insight into business problems we're trying to solve, but because the more data we put through the machine learning models, the better they get," Gruber says. "It's a virtuous cycle in that way."
It's not as if storage and other issues with big data and analytics have gone bye-bye. Gruber, for one, notes that the pairing of big data and AI creates new needs (or underscores existing ones) around infrastructure, data preparation, and governance, for example. But in some cases, AI and ML technologies might be a key part of how organizations address those operational complexities. (Again, there's a cyclical relationship here.)
About that "better insight" thing: how are AI, and ML as its most prominent discipline in the business world at the moment, helping IT leaders deliver it, whether now or in the future? Let us count some ways.
One of the fundamental business problems of big data could sometimes be summarized with a simple question: Now what? As in: We've got all this stuff (that's the technical term for it), and plenty more of it coming, so what do we do with it? In the once-deafening buzz around big data, it wasn't always easy to hear the answers to that question.
Moreover, answering that question or deriving insights from your data usually required a lot of manual effort. AI is creating new methods for doing so. In a sense, AI and ML are the new methods, broadly speaking.
"Historically, when it comes to analyzing data, engineers have had to use a query or SQL (a list of queries). But as the importance of data continues to grow, a multitude of ways to get insights have emerged. AI is the next step to query/SQL," says Steven Mih, CEO at Alluxio. "What used to be statistical models now has converged with computer science and has become AI and machine learning."
As a result, managing and analyzing data depends less on time-consuming manual effort than in the past. People still play a vital role in data management and analytics, but processes that might have taken days or weeks (or longer) are picking up speed thanks to AI.
"AI and ML are tools that help a company analyze their data more quickly and efficiently than what could be done [solely] by employees," says Sue Clark, senior CTO architect at Sungard AS.
Mathias Golombek, CTO at Exasol, has observed a trend toward a two-tier strategy when it comes to big data, as organizations contend with the massive scope of the information they must manage if they're going to get any value from it: a storage layer, and an operational analytics layer that sits on top of it. News flash: the operational analytics layer is the one the CEO cares about, even if it can't function without the storage layer.
"That's where insights are extracted out of data and data-driven decisions take place," Golombek says. "AI is enhancing this analytics world with totally new capabilities to take semi-automatic decisions based on training data. It's not applicable for all questions you have for data, but for specific use cases, it revolutionizes the way you get rules, decisions, and predictions done without complex human know-how."
(In an upcoming post, we'll look at some use cases that illuminate how AI and big data combine forces, such as predictive maintenance, essentially predicting when a machine might fail, and other practical applications.)
In other words, insights and decisions can happen faster. Moreover, IT can apply similar principles, using AI technologies to reduce manual, labor-intensive burdens and increase the speed of the back-end stuff that, let's face it, few outside of IT want to hear about.
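Predictive maintenance, the use case flagged above, reduces to a simple idea even in toy form: watch a sensor stream for sustained departures from a machine's own baseline. The readings, window, and threshold below are all invented for illustration.

```python
import statistics

def failure_risk(vibration_readings, window=5, threshold=1.5):
    """Compare the latest readings against the machine's historical baseline;
    a sustained rise in vibration often precedes mechanical failure."""
    baseline = vibration_readings[:-window]
    recent = vibration_readings[-window:]
    ratio = statistics.mean(recent) / statistics.mean(baseline)
    return ratio, ratio > threshold

# Illustrative hourly vibration amplitudes from one machine's sensor
readings = [1.0, 1.1, 0.9, 1.0, 1.2, 1.0, 1.1, 1.9, 2.2, 2.4, 2.1, 2.5]
ratio, alarm = failure_risk(readings)
print(f"recent/baseline vibration ratio = {ratio:.2f}; schedule maintenance: {alarm}")
```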
"The real-time nature of data insights, coupled with the fact that it exists everywhere now, siloed across different racks, regions, and clouds, means that companies are having to evolve from the traditional methods of managing and analyzing [data]," Mih from Alluxio says. "That's where AI comes in. Gone are the days of data engineers manually copying data around again and again, delivering datasets weeks after a data scientist requests them."
Like others, Elif Tutuk, associate VP of Qlik Research, sees AI and ML as powerful levers when it comes to big data.
"AI and machine learning, among other emerging technologies, are critical to helping businesses have a more holistic view of all of that data, providing them with a way to make connections between key data sets," Tutuk says. But, she adds, it's not a matter of cutting out human intelligence and insight.
"Businesses need to combine the power of human intuition with machine intelligence to augment these technologies, or 'augmented intelligence.' More specifically, an AI system needs to learn from data, as well as from humans, in order to be able to fulfill its function," Tutuk says.
"Businesses that successfully combine the power of humans and technology are able to expand who has access to key insights from analytics beyond data scientists and business analysts, while saving time and reducing the potential bias that may result from business users interpreting data. This results in more efficient business operations, quicker insights gleaned from data, and ultimately increased enterprise productivity."
The US Army Wants to Reinvent Tank Warfare with AI – Defense One
Posted: at 10:32 pm
A new project aims to make the battlefield more transparent while relying on robots and software that think in unpredictable ways.
Tank warfare isn't as easy to predict as hulking machines lumbering across open spaces would suggest. In July 1943, for instance, German military planners believed that their advance on the Russian city of Kursk would be over in ten days. In fact, that attempt lasted nearly two months and ultimately failed. Even the 2003 Battle of Baghdad, in which U.S. forces had air superiority, took a week. For the wars of the future, that's too slow. The U.S. Army has launched a new effort, dubbed Project Quarterback, to accelerate tank warfare by synchronizing battlefield data with the aid of artificial intelligence.
The project, about a month old, aims for an AI assistant that can look out across the battlefield, taking in all the relevant data from drones, radar, ground robots, satellites, cameras mounted in soldier goggles, etc., and then output the best strategy for taking out the enemy (or enemies) with whatever weapons are available. Quarterback, in other words, would help commanders do two things better and faster: understand exactly what's on the battlefield, and then select the most appropriate strategy based on the assets available and other factors.
Just the first part of that challenge is huge. The amount of potentially usable battlefield data is rapidly expanding, and it takes a long time to synchronize it.
"Simple map displays require 96 hours to synchronize a brigade or division targeting cycle," said Kevin McEnery, the deputy director of the Army's Next Generation Combat Vehicle Cross Functional Team, on Thursday at an event at the National Robotics Engineering Center. One goal is to bring that down to 96 seconds with the assistance of AI, he said.
"All the vast array of current and future military sensors, aviation assets, electronic warfare assets, cyber assets, unmanned aerial, unmanned ground systems, next generation manned vehicles and dismounted soldiers will detect and geolocate an enemy on our battlefield. We need an AI system to help identify that threat, aggregate [data on the threat] with other sensors and threat data, distribute it across our command and control systems and recommend to our commanders at echelon the best firing platform for the best effects, be it an F-35, an [extended range cannon] or a [remote controlled vehicle]," McEnery said.
Ultimately, the Army is looking for a lot more than a data visualizer. They want AI to help with battle strategy, said Lt. Col. Jay Wisham, one of the program leaders. "How do you want to make decisions based on [battlefield data]? How do you want to select the most efficient way to engage a target, based on probability of hit, probability of kill? Do you have indirect fire assets available to you that you can request? Do you have real assets that you can request? Can I send you my wingman? Or does the computer then recommend, 'Red One, our wingman should take that target instead of you, for x, y reasons'? That goes back to that concept of how you make a more informed decision, faster. And who is making that decision could be a tank commander or it could be a battalion commander," he said.
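Quarterback's decision logic is not public; purely to illustrate the "best shooter" idea Wisham describes, here is a toy ranking of available platforms by estimated kill probability, lightly penalized by cost. All platform names and numbers are invented.

```python
# Hypothetical shooters and scores; a real system would derive these from
# sensor fusion and weapons models, not a hand-written table.
platforms = [
    {"name": "Red One (tank)", "p_kill": 0.70, "rounds_cost": 1.0, "available": True},
    {"name": "Wingman (RCV)",  "p_kill": 0.65, "rounds_cost": 0.6, "available": True},
    {"name": "Indirect fire",  "p_kill": 0.55, "rounds_cost": 0.4, "available": False},
]

def recommend(platforms, cost_weight=0.2):
    """Rank available shooters by kill probability minus a small cost penalty."""
    ready = [p for p in platforms if p["available"]]
    return max(ready, key=lambda p: p["p_kill"] - cost_weight * p["rounds_cost"])

best = recommend(platforms)
print(f"recommended shooter: {best['name']}")  # here the cheaper wingman wins
```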
The Army's future plans rely a lot not just on AI but also on ever-more-intelligent ground robots. Right now, a single U.S. Army operator can control about two ground robots. The Army's plan is to get that ratio to one human to a dozen robots. That will require those future ground robots to not just collect visual data but actually perceive the world around them, designating (though primitively) objects in their field of perception. Those robots will have to make decisions with minimal human oversight as well, since the availability of high-bandwidth networking is hardly certain.
During the event, which was put on by the Army Research Lab, Carnegie Mellon researchers unveiled robotic experiments in which ground robots demonstrated that they could collect intelligence, maneuver autonomously, and even decipher what it means to move covertly, with minimal human commands. The robot learns and applies labels to objects in its environment after watching humans.
Relying on those sorts of robots will require a deeper dependence on small and large artificially intelligent systems that reach conclusions via opaque neural-network or deep-learning reasoning. Both are sometimes referred to as "black box" learning processes because, unlike simple statistical models, it's difficult to tell how neural nets reach the decisions that they do. In other words, commanders and soldiers will have to become more comfortable with robots and software that produce outputs via processes that can't be easily explained, even by the programmers that produced them.
The way to develop that trust, said Wisham, is the same way humans develop trust in one another: slowly, and with a lot of practice. "Most humans are not as explainable as we like to think... If you demonstrate to a soldier that the tool or the system that you are trying to enable them with generally functions relatively well and adds some capability to them, they will grow trust very, very rapidly."
But, he said, when it comes to big decision aids, that will be much harder.
Anthony Stentz, director of software engineering at Uber's Advanced Technologies Group, said, "You trust something because it works, not because you understand it. The way that you show it works is you run many, many, many tests, build a statistical analysis and build trust that way. That's true not only of deep learning systems but other systems as well that are sufficiently complex. You are not going to prove them correct. You will need to put them through a battery of tests and then convince yourself that they meet the bar."
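Stentz's "battery of tests" argument can be made quantitative. A minimal sketch, using nothing beyond the Python standard library: compute a lower confidence bound on a system's success rate from its test record, so that the trust claim rests on statistics rather than anecdotes.

    import math

    def wilson_lower_bound(successes, trials, z=1.96):
        """Lower end of the Wilson score interval: with roughly 95%
        confidence, the true success rate is at least this value."""
        if trials == 0:
            return 0.0
        p = successes / trials
        denom = 1 + z * z / trials
        center = p + z * z / (2 * trials)
        margin = z * math.sqrt(p * (1 - p) / trials + z * z / (4 * trials * trials))
        return (center - margin) / denom

    # 980 passing runs out of 1,000 tests: the point estimate is 98%,
    # but the statistically defensible claim is only about 96.9%.
    print(f"{wilson_lower_bound(980, 1000):.3f}")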
The surging availability of big data and exascale computing through enterprise cloud architectures is also hastening a new generation of neural networks and deep learning solutions, one that is potentially more transparent. "In machine learning, there's a lot of work going on precisely in this direction," said Dieter Fox, senior director of robotics research at NVIDIA. "Techniques are being developed [to] inspect these networks and see why these networks might come up with a certain recognition or solution." There is also important emerging research in fencing off neural networks and deep learning systems while they learn, including neural networks in robots: "How we can put this physical structure or constraints into these networks so that they learn within the confines of what we think is physically okay."
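One of the simplest inspection techniques of the kind Fox describes is a gradient saliency map: ask the network which parts of its input most influenced a given decision. A minimal sketch, assuming a PyTorch image classifier; the `model` and `image` names here stand in for whatever network is being inspected.

    import torch

    def saliency(model, x, target_class):
        """Gradient of the target class score with respect to the
        input; large absolute values mark influential input elements."""
        model.eval()
        x = x.clone().requires_grad_(True)
        score = model(x)[0, target_class]  # scalar score for one class
        score.backward()
        return x.grad.abs()

    # Usage, given any classifier `model` taking a (1, C, H, W) tensor
    # `image` and a predicted class index `pred`:
    #     sal_map = saliency(model, image, pred)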
Go here to see the original:
The US Army Wants to Reinvent Tank Warfare with AI - Defense One
Posted in Ai
Comments Off on The US Army Wants to Reinvent Tank Warfare with AI – Defense One
Are we modeling AI on the wrong brain? – The Boston Globe
Posted: at 10:32 pm
Octopuses are cephalopods, related to oysters. They have personalities, interact with their surroundings, and have expressions and memories. It is their approach to solving problems that intrigues those looking for a model for machines.
Many believe that mimicking the human brain is the optimal way to create artificial intelligence. But scientists are struggling to do this because of the substantial intricacies of the human mind. Billye, an octopus, reminds us that there is a vast array of nonhuman life worthy of emulation.
Much of the excitement around state-of-the-art artificial intelligence research today is focused on deep learning, which uses layers of artificial neural networks to perform machine learning through a web of nodes modeled on the interconnections between neurons in the vertebrate brain cortex. While this science holds incredible promise, the enormous complexity of the human brain also presents formidable challenges, among them that some of these AI systems arrive at conclusions that cannot be explained by their designers.
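For readers who have never seen one, the basic object is small. A minimal sketch of the "layers of nodes" idea in Python with NumPy; the layer sizes and weights here are arbitrary, and real networks simply stack many more such layers and tune the weights by training.

    import numpy as np

    rng = np.random.default_rng(0)

    def relu(v):
        # A simple nonlinearity, loosely analogous to a neuron firing.
        return np.maximum(0.0, v)

    # Each weight matrix plays the role of the "interconnections
    # between neurons" joining one layer of nodes to the next.
    W1 = rng.normal(size=(4, 8))   # 4 inputs -> 8 hidden nodes
    W2 = rng.normal(size=(8, 2))   # 8 hidden nodes -> 2 outputs

    def forward(x):
        hidden = relu(x @ W1)      # first layer of nodes
        return hidden @ W2         # output layer

    print(forward(np.array([0.5, -1.0, 0.3, 2.0])))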
Maybe this should be expected, since humans do not know exactly how we make decisions either. We do not fully understand how our own brains work, nor do we even have a universally accepted definition of what human intelligence is. We don't exactly know why we sleep or dream. We don't know how we process memories. We don't know whether we have free will, or what consciousness is (or who has it). And one of the main obstacles currently in the way of our creating a high level of nuanced intellectual performance in machines is our inability to code what we call common sense.
Some scientists, however, reject the obvious archetype, suggesting that trying to pattern synthetic intelligence predominantly on our own is unnecessarily anthropocentric. Our world has a wondrous variety of sentient organisms that we can train computers to model; why not think creatively beyond our species and try to engineer intelligent technology that reflects our world's prismatic diversity?
Roboticist Rodney Brooks thinks that nonhuman intelligence is what AI developers should be investigating. Brooks first began studying insect intelligence in the 1980s and went on to build several businesses from the robots he developed (he co-invented the Roomba). When asked about his approach, Brooks said that it's unfair to claim that an elephant has no intelligence worth studying just because it does not play chess.
The range of skill, ingenuity, and creativity of our biological brethren on this planet is astounding. But a fixation on humans as the preeminent metric of intelligence discounts other species' unique abilities. Perhaps the most humbling example for humans is slime mold (Physarum polycephalum), a brainless and neuron-less organism (more like a collective organism, or superorganism) that can make trade-offs, solve labyrinthine mazes, take risks, and remember where it has been. Some say slime mold could be the key to more efficient self-driving cars.
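The slime mold's maze-solving even has a standard mathematical model, the Physarum solver of Tero et al.: virtual "tubes" that carry more flow thicken, idle ones wither, and the shortest route survives. A minimal sketch of a simplified variant on a four-node graph with two routes; the graph and constants are invented, and real road or rail networks are where the technique gets interesting.

    import numpy as np

    # Edges: (node_a, node_b, length). Two routes from node 0 to node 3.
    edges = [(0, 1, 1.0), (1, 3, 1.0),   # short route, total length 2
             (0, 2, 2.0), (2, 3, 2.0)]   # long route, total length 4
    n_nodes, source, sink = 4, 0, 3
    D = np.ones(len(edges))              # tube conductivities

    for _ in range(40):
        # Solve Kirchhoff's laws for node pressures, given unit flow
        # injected at the source and drawn off at the sink.
        L = np.zeros((n_nodes, n_nodes))
        for k, (a, b, length) in enumerate(edges):
            g = D[k] / length
            L[a, a] += g; L[b, b] += g
            L[a, b] -= g; L[b, a] -= g
        rhs = np.zeros(n_nodes)
        rhs[source] = 1.0
        L[sink, :] = 0.0; L[sink, sink] = 1.0   # ground the sink at p = 0
        p = np.linalg.solve(L, rhs)
        # Feedback: tubes carrying more flow thicken, the rest decay.
        D = np.abs([D[k] / length * (p[a] - p[b])
                    for k, (a, b, length) in enumerate(edges)])

    for (a, b, _), d in zip(edges, D):
        print(f"edge {a}-{b}: conductivity {d:.3f}")   # short route -> ~1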
Roboticists are intrigued by the swarm intelligence of termites, as well as by their (and other creatures') stigmergy: a mechanism that allows them to collectively make decisions without directly communicating with one another, by picking up signs left behind in the environment. Computer scientist Radhika Nagpal has been conducting research on the architectural feats of termites and the movement of schools of fish and flocks of birds. She thinks that we need to move away from a "human on top" mentality to design the next generation of robotics.
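Stigmergy is easy to demonstrate in a few lines. In this minimal sketch (the classic "double bridge" toy, not Nagpal's actual work), simulated ants choose between two routes purely by reading pheromone left by earlier ants; no ant ever messages another, yet the colony converges on the shorter route.

    import random

    random.seed(1)
    lengths = {"short": 1.0, "long": 2.0}
    pheromone = {"short": 1.0, "long": 1.0}   # the shared environment

    for _ in range(200):
        # Each ant reads the environment and picks a route with
        # probability proportional to its pheromone level.
        total = pheromone["short"] + pheromone["long"]
        route = "short" if random.random() < pheromone["short"] / total else "long"
        # Shorter trips lay more pheromone per unit time...
        pheromone[route] += 1.0 / lengths[route]
        # ...and evaporation erases marks that are not reinforced.
        for r in pheromone:
            pheromone[r] *= 0.99

    print(pheromone)   # the short route ends up far more strongly marked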
Octopuses like Billye possess what is called distributed intelligence, with two-thirds of their neurons residing in their eight arms, allowing them to perform various tasks independently and simultaneously. Researchers at Raytheon think that emulating octopuses' multifaceted brilliance is better suited to the robots they are constructing for space exploration. In his book Other Minds, Peter Godfrey-Smith suggests that observing octopus intelligence is the closest we will ever get to studying alien intelligence. Taking cues from the vision of hawks, the dexterity of cats, or the sense of smell of bears can expand the technological horizons of what's possible.
Humans have long mimicked nature and nonhuman life for our inventions, from modeling X-ray machines on the reflective eyesight of lobsters, to creating an ultrasound cane for the visually impaired based on echolocation (the sensory mechanism of bats), to simulating the anatomy of sea lampreys to make tiny robots that could someday swim through our bodies detecting disease.
Much as humans first had to let go of flying exactly the way birds fly in order to crack the code of flight, we must now look beyond the widely held belief that the human mind is singular and unique as an intellectual model, and that replicating it is the only way artificial neural networks could truly be deemed intelligent. Being open to holding all beings in esteem, respecting their complexities and gifts, is foundational to building and valuing future intelligent machines.
Science continues to show us that we are not quite as sui generis as we may have thought; we are discovering now that certain attributes we assumed were reserved solely for humans (moral judgment, empathy, emotions) are also found across the spectrum of life on earth. Jessica Pierce and Marc Bekoff, in their book Wild Justice, establish that animals demonstrate nuanced emotions and moral behaviors, such as fairness and empathy. The authors maintain that animals are social beings that also have a sense of social justice.
Simply put: Humans are not the lone species whose study can serve as a guide for future forms of AI. We are but one form of intelligent life. Other living creatures exhibit incredible intelligence in a mosaic of mesmerizing ways. Spiders weave silk balloons to parachute and fly. Chimpanzees mourn their dead. So do orcas; as do elephants, who also have distinct personalities and can empathize and coordinate with each other. Crows create and use tools to gather food, and can also solve puzzles. Birds can form long-term alliances and display relationship intelligence. Bees can count and use dance to communicate complex information to the rest of their colonies. Pigeons have fantastic memories, can recognize words, perceive space and time, and detect cancer in image scans.
Humans have much to learn from the acumen of bees and termites, elephants and parrots; but some of us are still uncomfortable with the idea of nonhumans having thoughts and emotions. Is it because sanctioning their agency devalues our own? Our appreciation of animals does not follow the scientific evidence, and nonhumans remain mostly excluded from our notions of intelligence, justice, and rights.
As we strive to create machines that can think for themselves and possibly become self-aware, it's time to take a look in the mirror and ask ourselves not only what kind of AI we want to build, but also what kind of humans we want to be. Modeling AI on myriad forms of intelligence, drawing from the vast panoply of intelligent life, is not only a potential solution to the conundrum of how to construct a digital mind; it could also be a gateway to a more inclusive, peaceful existence; to preserving the life that exists on our only home.
For those speculating about how we may treat synthetically intelligent beings in the future, looking at how we have bestowed rights on other nonhumans is instructive. Our historical treatment of animals, or in truth of any being we are able to convince ourselves is "other" or less than human, does not bode well for their treatment and acceptance. The root of the word "robot" is the Old Church Slavonic word "rabota," which means "forced labor": perhaps a prescient forecast that we may be prone to consider AI as nothing more than a tool to do our work and bidding. Creating a hierarchy of intelligence makes it easy to assign lesser dignities to other thinking things. Insisting on absolute human supremacy in all instances does not portend well for us in the Intelligent Machine Age.
I believe that our failure to model AI on the human mind may ultimately be our salvation. It will compel us to assign greater value to all types of intelligence and all living things; and to ask ourselves difficult questions about which beings are worthy of emulation, respect, and autonomy. This (possibly last) frontier of scientific invention may be our chance to embrace our human limitations, our weaknesses, the glitches and gaps in our systems, and to expand our worldview beyond ourselves. Being willing to admit other species are brilliant could be the smartest thing we can do.
Flynn Coleman is the author of A Human Algorithm: How Artificial Intelligence Is Redefining Who We Are, available now from Counterpoint Press. Send comments to ideas@globe.com.
Read the original post:
Are we modeling AI on the wrong brain? – The Boston Globe
Posted in Ai
Comments Off on Are we modeling AI on the wrong brain? – The Boston Globe