If you need to treat anxiety in the future, odds are the treatment won't be just therapy, but also an algorithm. Across the mental-health industry, companies are rapidly building solutions for monitoring and treating mental-health issues that rely on just a phone or a wearable device. To do so, they are turning to affective computing to detect and interpret human emotions. It's a field that's forecast to become a $37 billion industry by 2026, and as the COVID-19 pandemic has increasingly forced life online, affective computing has emerged as an attractive tool for governments and corporations to address an ongoing mental-health crisis.
Despite the rush to build applications on it, emotionally intelligent computing remains in its infancy, and it is being introduced into therapeutic services as a fix-all solution without scientific validation or public consent. Scientists still disagree over the nature of emotions and how they are felt and expressed across populations, yet this uncertainty has been mostly disregarded by a wellness industry eager to profit from the digitalization of health care. If left unregulated, AI-based mental-health solutions risk creating new disparities in the provision of care, as those who cannot afford in-person therapy will be referred to bot-powered therapists of uncertain quality.
The field of affective computing, more commonly referred to as emotion AI, is a subfield of computer science originating in the 1990s. Rosalind Picard, widely credited as one of its pioneers, defined affective computing as "computing that relates to, arises from, or deliberately influences emotions." It involves the creation of technology that is said to recognize, express, and adapt to human emotions. Affective computing researchers rely on sensors, voice and sentiment analysis programs, computer vision, and machine-learning techniques to capture and analyze physical cues, written text, and/or physiological signals. These tools are then used to detect emotional changes.
Start-ups and corporations are now working to apply this field of computer science to build technology that can predict and model human emotions for clinical therapies. Facial expressions, speech, gait, heartbeats, and even eye blinks are becoming profitable sources of data. Companion Mx, for example, is a phone application that analyzes users' voices to detect signs of anxiety. San Francisco-based Sentio Solutions combines physiological signals and automated interventions to help consumers manage their stress and anxiety: a sensory wristband monitors users' sweat, skin temperature, and blood flow, while a connected app asks them to select how they are feeling from a series of labels, such as "distressed" or "content." Additional examples include the Muse EEG-powered headband, which guides users toward mindful meditation by providing live feedback on brain activity, and the Apollo Neuro ankle band, which monitors users' heart rate variability to emit vibrations intended to provide stress relief.
While wearable technologies remain costly for the average consumer, therapy can now come in the form of a free 30-second download. App-based conversational agents, such as Woebot, use emotion AI to replicate the principles of cognitive behavioral therapy, a common method for treating depression, and to deliver advice on sleep, worry, and stress. The sentiment analysis used in chatbots combines natural language processing (NLP) and machine-learning techniques to determine the emotion expressed by the user. Ellie, a virtual avatar therapist developed by the University of Southern California, can pick up on nonverbal cues and guide the conversation accordingly, such as by displaying an affirmative nod or a well-placed "hmmm." Though Ellie is not currently available to the wider public, it provides a hint of the future of virtual therapists.
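At its simplest, the kind of text-based sentiment analysis described above reduces to a probabilistic classifier over word frequencies. The sketch below is a minimal, self-contained illustration using a naive Bayes model on invented toy data; production chatbots use far larger annotated corpora and more sophisticated NLP models, and none of the example messages or labels here come from any real product.

```python
from collections import Counter
import math

# Toy training data: (message, label). A real system would be trained on
# thousands of annotated utterances; these examples are invented.
TRAIN = [
    ("i feel hopeless and tired", "negative"),
    ("everything is awful today", "negative"),
    ("i cannot stop worrying", "negative"),
    ("i feel calm and rested", "positive"),
    ("today was a good day", "positive"),
    ("i am happy with my progress", "positive"),
]

def train(examples):
    """Count word frequencies per label for a naive Bayes model."""
    word_counts = {"positive": Counter(), "negative": Counter()}
    label_counts = Counter()
    for text, label in examples:
        label_counts[label] += 1
        word_counts[label].update(text.split())
    return word_counts, label_counts

def classify(text, word_counts, label_counts):
    """Return the label with the highest log posterior probability."""
    vocab = {w for counts in word_counts.values() for w in counts}
    total = sum(label_counts.values())
    best_label, best_score = None, float("-inf")
    for label in label_counts:
        score = math.log(label_counts[label] / total)  # prior
        n = sum(word_counts[label].values())
        for word in text.split():
            # Laplace smoothing so unseen words don't zero out the score.
            score += math.log((word_counts[label][word] + 1) / (n + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

wc, lc = train(TRAIN)
print(classify("i feel so tired and hopeless", wc, lc))  # negative
```

Even this tiny model shows why such systems inherit the biases of their training data: the classifier can only reflect whatever associations its annotators baked into the labels.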
In order to operate, artificial intelligence systems require a simplification of psychological models and neurobiological theories on the functions of emotions. Emotion AI is often embedded with its programmers' own cultural biases: voice inflections and gestures vary from one population to another, and affective computing systems are likely to struggle to capture this diversity of human emotional experience. As the researchers Ruth Aylett and Ana Paiva write, affective computing demands that "qualitative relationships must be quantified, a definite selection made from competing alternatives, and internal structures must be mapped onto software entities." When qualitative emotions are coded into digital systems, developers use models of emotions that rest on shaky parameters. Emotions are no hard science, and the metrics produced by such software are at best an educated guess. Yet few developers are transparent about the serious limitations of their systems.
Emotional expressions manifested through physical changes also have overlapping parameters. Single biological measures, such as heart rate and skin conductance, are not infallible indicators of emotional change. A spiked heart rate may be the result of excitement, fear, or simply drinking a cup of coffee. There is still no consensus within the scientific community about which combinations of physiological signals are most relevant to emotional changes, as emotional experiences are highly individualized. The effectiveness of affective computing systems is seriously impeded by their limited reliability, lack of specificity, and restricted generalizability.
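The lack of specificity of a single biological measure can be illustrated with a deliberately naive sketch. The rule, the threshold, and the example contexts below are all invented for illustration; no real product is claimed to work exactly this way.

```python
def infer_state_from_heart_rate(bpm: float) -> str:
    """Naive single-signal inference: elevated heart rate means 'stressed'.

    Hypothetical rule for illustration only; the 100 bpm cutoff is arbitrary.
    """
    return "stressed" if bpm > 100 else "calm"

# The same elevated reading can come from very different causes:
contexts = {
    "watching a horror film": 110,
    "jogging": 110,
    "two espressos": 110,
}
for cause, bpm in contexts.items():
    print(cause, "->", infer_state_from_heart_rate(bpm))
# All three contexts yield 'stressed', though only one involves the
# target emotion -- the signal cannot distinguish fear from exercise
# or caffeine without additional context.
```

This is the coffee-cup problem from the paragraph above in code form: without contextual data, the mapping from physiological signal to emotion is many-to-one and therefore unreliable.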
The questionable psychological science behind some of these technologies is at times reminiscent of pseudo-sciences, such as physiognomy, that were rife with eugenicist and racist beliefs. In Affective Computing, the 1997 book credited with outlining the framework for the field, Picard observed that "emotional or not, computers are not purely objective." This lack of objectivity has complicated efforts to build affective computing systems without racial bias. Research by the scholar Lauren Rhue revealed that two top emotion AI systems assigned professional Black basketball players more negative emotional scores than their white counterparts. After accusations of racial bias, the recruitment company HireVue stopped using facial expressions to deduce applicants' emotional states and employability. Given the obvious risks of discrimination, the AI Now Institute called in 2019 for a ban on the use of affect-detecting technologies in decisions that can impact people's lives and access to information.
The COVID-19 pandemic exacerbated the need to improve already limited access to mental-health services amid reports of staggering increases in mental illnesses. In June 2020, the U.S. Census Bureau reported that adults were three times more likely to screen positive for depressive and/or anxiety disorders compared to statistics collected in 2019. Similar findings were reported by the Centers for Disease Control and Prevention, with 11% of respondents admitting to suicidal ideation in the 30 days prior to completing a survey in June 2020. Adverse mental health conditions disproportionately affected young adults, Hispanic persons, Black persons, essential workers, and people who were receiving treatment for pre-existing psychiatric conditions. During this mental-health crisis, Mental Health America estimated that 60% of individuals suffering from a mental illness went untreated in 2020.
To address this crisis, government officials loosened regulatory oversight of digital therapeutic solutions. In what was described as a bid to serve patients and protect health-care workers, the FDA announced in April 2020 that it would expedite approval processes for digital solutions serving individuals suffering from depression, anxiety, obsessive-compulsive disorder, and insomnia. The change was said to provide flexibility for software developers designing devices for psychiatric disorders and general wellness, without requiring developers to disclose the AI/ML-based techniques that power their systems. Consumers would therefore be unable to know whether, for example, their insomnia app was using sentiment analysis to track and monitor their moods.
By failing to provide instructions regarding the collection and management of sensitive emotion and mental-health data, the announcement demonstrated the FDA's neglect of patient privacy and data security. Whereas traditional medical devices require testing, validation, and recertification after software changes that could impact safety, digital devices tend to receive a light touch from the FDA. As noted by Bauer et al., very few medical apps and wearables are subject to FDA review, as the majority are classified as minimal risk and fall outside the agency's enforcement. For example, under current regulation, mental-health apps designed to help users self-manage their symptoms, but that do not explicitly diagnose, are seen as posing minimal risk to consumers.
The growth of affective computing therapeutics is occurring alongside the digitization of public-health interventions and the collection of data by self-tracking devices. Over the course of the pandemic, governments and private companies pumped funding into the rapid development of remote sensors, phone apps, and AI for quarantine enforcement, contact tracing, and health-status screening. Through the popularization of self-tracking applications, many of which are already integrated into our personal devices, we have become accustomed to passive monitoring in our datafied lives. We are nudged by our devices to record how we sleep, exercise, and eat in order to maximize physical and mental wellbeing. Tracking our emotions is a natural next step in this digital evolution; Fitbit, for example, has now added stress management to its devices. Yet few of us know where this data goes or what is done with it.
Digital products that rely on emotion AI attempt to solve the affordability and availability crisis of mental-health care. The cost of conventional face-to-face therapy remains high, ranging from $65 to $250 an hour for those without insurance, according to the therapist directory GoodTherapy.org. According to the National Alliance on Mental Illness, nearly half of the 60 million individuals living with mental-health conditions in the United States do not have access to treatment. Unlike a therapist, tech platforms are indefatigable and available to users 24/7.
People are turning to digital solutions at increasing rates to address mental-health issues. First-time downloads of the top 10 mental-wellness apps in the United States reached 4 million in April 2020, a 29% increase from January. In 2020, the Organisation for the Review of Care and Health Apps found a 437% increase in searches for relaxation apps, 422% for OCD apps, and 2,483% for mindfulness apps. Evidence of their popularity beyond the pandemic is also reflected in the growing number of corporations offering digital mental-health tools to their employees. Research by McKinsey concludes that such tools can help corporations reduce productivity losses from employee burnout.
Rather than addressing the lack of mental-health resources, digital solutions may be creating new disparities in the provision of services. Digital devices said to help with emotion regulation, such as the Muse headband and the Apollo Neuro band, cost $250 and $349, respectively. Individuals are thus encouraged to seek self-treatment through cheaper guided-meditation and/or conversational-bot applications. Even among smartphone-based services, much of the content is hidden behind paywalls and hefty subscription fees.
Disparities in health-care outcomes may be exacerbated by persistent questions about whether digital mental health care can live up to its analog forerunner. Artificial intelligence is not sophisticated enough to replicate the spontaneous, natural conversation of talk therapy, and cognitive behavioral therapy involves the recollection of detailed personal information and beliefs ingrained since childhood, data points that cannot be acquired through sensors. Psychology is part science and part trained intuition. As Dr. Adam Miner, a clinical psychologist at Stanford, argues, an AI system may capture a person's voice and movement, which is likely related to a diagnosis like major depressive disorder; but without more context and judgment, crucial information can be left out.
Most importantly, these technologies can operate without clinician oversight or other forms of human support. For many psychologists, the essential ingredient in effective therapy is the therapeutic alliance between practitioner and patient, yet devices are not required to abide by clinical safety protocols that record the occurrence of adverse events. A survey of 69 apps for depression published in BMC Medicine found that only 7% included more than three suicide-prevention strategies. Six of the apps examined failed to provide accurate information on suicide hotlines. The apps supplying incorrect information were reportedly downloaded more than 2 million times through Google Play and the App Store.
As these technologies are being developed, there are no policies in place dictating who has the right to our emotion data and what constitutes a breach of privacy. Inferences made by emotion recognition systems can reveal sensitive health information that poses risks to consumers. Depression detected by workplace monitoring software or wearables may cost individuals their employment or lead to higher insurance premiums. BetterHelp and Talkspace, two counseling apps that connect users to licensed therapists, were found to disclose sensitive information to third parties about users' mental-health history, sexual orientation, and suicidal thoughts.
Emotion AI systems fuel the wellness economy, in which the treatment of mental-health and behavioral issues is becoming a profitable business venture, despite a large portion of developers having no prior certification in therapeutic or counseling services. According to an estimate by the American Psychological Association, there are currently more than 20,000 mental-health apps available to mobile users. One study revealed that only 2.08% of psychosocial and wellness mobile apps are backed by published, peer-reviewed evidence of efficacy.
Digital wellness tools tend to have high drop-out rates, as only a small segment of users regularly follow treatment on the apps. An Arean et al. study on self-guided mobile apps for depression found that 74% of registered participants ceased using the apps. These high attrition rates have stalled investigations into their long-term effectiveness and the consequences of mental health self-treatment through digital tools. As with other AI-related issues, non-White populations, who are underserved in psychological care, continue to be underrepresented in the data used to research, develop, and deploy these tools.
These findings do not negate the ability of affective computing to deliver promising medical and health-care advances. The field has already contributed to detecting spikes in heart rate in patients suffering from chronic pain, facial analysis to detect stroke, and speech analysis to detect Parkinson's disease.
Yet in the United States there remains no coordinated effort to regulate and evaluate digital mental-health resources and products that rely on affective computing techniques. Digital products marketed as therapies are being deployed without adequate consideration of patients' access to technical resources or monitoring of vulnerable users. Few products provide specific guidance on their safety and privacy policies or on whether the data they collect is shared with third parties. Because their products are labeled as wellness tools, companies are not subject to the Health Insurance Portability and Accountability Act. In response, non-profit initiatives such as PsyberGuide have sought to rate apps by the credibility of their scientific protocols and the transparency of their privacy policies. But these initiatives are severely limited, and they are no stand-in for government regulation.
Beyond the limited proven effectiveness of these digital services, we must take a step back and evaluate how such technology risks deepening divides in the provision of care to already underserved populations. There are significant disparities in the United States when it comes to technological access and digital literacy, which limit users' ability to make informed health choices and to consent to the use of their sensitive data. Because digital solutions are cheap and scalable, segments of the population may have to rely on a substandard tier of service to address their mental-health issues. Such trends also risk shifting the responsibility for mental-health care onto users rather than health-care providers.
Mental-health technologies that rely on affective computing are jumping ahead of the science. Even emotion AI researchers are denouncing claims made by companies as overblown and unsupported by scientific consensus. We have neither the technological sophistication nor the scientific confidence to guarantee the effectiveness of such digital solutions in addressing the mental-health crisis. At the very least, government regulation should push companies to be transparent about that.
Alexandrine Royer is a doctoral candidate studying the digital economy at the University of Cambridge, and a student fellow at the Leverhulme Centre for the Future of Intelligence.
Original post:
The wellness industry's risky embrace of AI-driven mental health care - Brookings Institution