The Prometheus League
Breaking News and Updates
- Abolition Of Work
- Ai
- Alt-right
- Alternative Medicine
- Antifa
- Artificial General Intelligence
- Artificial Intelligence
- Artificial Super Intelligence
- Ascension
- Astronomy
- Atheism
- Atheist
- Atlas Shrugged
- Automation
- Ayn Rand
- Bahamas
- Bankruptcy
- Basic Income Guarantee
- Big Tech
- Bitcoin
- Black Lives Matter
- Blackjack
- Boca Chica Texas
- Brexit
- Caribbean
- Casino
- Casino Affiliate
- Cbd Oil
- Censorship
- Cf
- Chess Engines
- Childfree
- Cloning
- Cloud Computing
- Conscious Evolution
- Corona Virus
- Cosmic Heaven
- Covid-19
- Cryonics
- Cryptocurrency
- Cyberpunk
- Darwinism
- Democrat
- Designer Babies
- DNA
- Donald Trump
- Eczema
- Elon Musk
- Entheogens
- Ethical Egoism
- Eugenic Concepts
- Eugenics
- Euthanasia
- Evolution
- Extropian
- Extropianism
- Extropy
- Fake News
- Federalism
- Federalist
- Fifth Amendment
- Financial Independence
- First Amendment
- Fiscal Freedom
- Food Supplements
- Fourth Amendment
- Free Speech
- Freedom
- Freedom of Speech
- Futurism
- Futurist
- Gambling
- Gene Medicine
- Genetic Engineering
- Genome
- Germ Warfare
- Golden Rule
- Government Oppression
- Hedonism
- High Seas
- History
- Hubble Telescope
- Human Genetic Engineering
- Human Genetics
- Human Immortality
- Human Longevity
- Illuminati
- Immortality
- Immortality Medicine
- Intentional Communities
- Jacinda Ardern
- Jitsi
- Jordan Peterson
- Las Vegas
- Liberal
- Libertarian
- Libertarianism
- Liberty
- Life Extension
- Macau
- Marie Byrd Land
- Mars
- Mars Colonization
- Mars Colony
- Memetics
- Micronations
- Mind Uploading
- Minerva Reefs
- Modern Satanism
- Moon Colonization
- Nanotech
- National Vanguard
- NATO
- Neo-eugenics
- Neurohacking
- Neurotechnology
- New Utopia
- New Zealand
- Nihilism
- Nootropics
- NSA
- Oceania
- Offshore
- Olympics
- Online Casino
- Online Gambling
- Pantheism
- Personal Empowerment
- Poker
- Political Correctness
- Politically Incorrect
- Polygamy
- Populism
- Post Human
- Post Humanism
- Posthuman
- Posthumanism
- Private Islands
- Progress
- Proud Boys
- Psoriasis
- Psychedelics
- Putin
- Quantum Computing
- Quantum Physics
- Rationalism
- Republican
- Resource Based Economy
- Robotics
- Rockall
- Ron Paul
- Roulette
- Russia
- Sealand
- Seasteading
- Second Amendment
- Seychelles
- Singularitarianism
- Singularity
- Socio-economic Collapse
- Space Exploration
- Space Station
- Space Travel
- Spacex
- Sports Betting
- Sportsbook
- Superintelligence
- Survivalism
- Talmud
- Technology
- Teilhard De Chardin
- Terraforming Mars
- The Singularity
- Tms
- Tor Browser
- Trance
- Transhuman
- Transhuman News
- Transhumanism
- Transhumanist
- Transtopian
- Transtopianism
- Ukraine
- Uncategorized
- Vaping
- Victimless Crimes
- Virtual Reality
- Wage Slavery
- War On Drugs
- Waveland
- Ww3
- Yahoo
- Zeitgeist Movement
- Prometheism
- Forbidden Fruit
- The Evolutionary Perspective
Category Archives: Artificial Intelligence
Top 5 Benefits of Artificial intelligence in Software Testing – Analytics Insight
Posted: April 17, 2022 at 11:44 pm
Have a look at the top 5 benefits of using Artificial intelligence in software testing
One of the recent buzzwords in the software development industry is artificial intelligence. Even though the use of artificial intelligence in software development is still in its infancy, the technology has already made great strides in automating software development. Integrating AI into software testing enhances the quality of the end product, as the systems adhere to basic standards and maintain company protocols. So, let us have a look at some of the other crucial benefits offered by AI in software testing.
A method of testing that is getting more and more popular every day is image-based testing using automated visual validation tools. Many ML-based visual validation tools can detect minor UI anomalies that human eyes are likely to miss.
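For illustration, the core idea can be reduced to comparing a current screenshot against an approved baseline. The toy pixel-diff below (assuming the Pillow library and hypothetical file paths) shows the mechanics; ML-based visual validation tools replace raw pixel equality with learned perceptual comparison so that acceptable rendering differences are not flagged.

from PIL import Image, ImageChops

def ui_changed(baseline_path, current_path):
    # Returns the bounding box of any pixel-level difference between two
    # same-sized screenshots, or None if they are identical.
    baseline = Image.open(baseline_path).convert("RGB")
    current = Image.open(current_path).convert("RGB")
    diff = ImageChops.difference(baseline, current)
    return diff.getbbox()  # None means no visible change detected

# Example (hypothetical paths): ui_changed("checkout_baseline.png", "checkout_run42.png")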
Shared automated tests can be used by developers to catch problems quickly before sending builds to the QA team. Tests can be run automatically whenever source code changes are checked in, notifying the team or the developer if they fail.
Manual testing is a slow process, and every code change requires new tests that consume the same amount of time as before. AI can be leveraged to automate test processes, providing precise and continuous testing at a fast pace.
AI/ML tools can read the changes made to the application and understand the relationships between them. Such self-healing scripts observe changes in the application, learn the pattern of those changes, and can then identify a change at runtime without any manual intervention.
With software tests being repeated each time source code is changed, manually running those tests is not only time-consuming but also expensive. Once created, automated tests can be executed over and over at a much quicker pace, with zero additional cost.
Conclusion: The future of artificial intelligence and machine learning is bright. AI and its adjoining technologies are making new waves in almost every industry and will continue to do so in the future.
Analytics Insight is an influential platform dedicated to insights, trends, and opinions from the world of data-driven technologies. It monitors developments, recognition, and achievements made by Artificial Intelligence, Big Data and Analytics companies across the globe.
See the rest here:
Top 5 Benefits of Artificial intelligence in Software Testing - Analytics Insight
Posted in Artificial Intelligence
Comments Off on Top 5 Benefits of Artificial intelligence in Software Testing – Analytics Insight
Sitting Out of the Artificial Intelligence Arms Race Is Not an Option – The National Interest Online
Posted: at 11:44 pm
Viewing the dangerous advances in military technology, from Nazi V-weapons to hydrogen bombs, investigative journalist I.F. Stone once described arms races as the inevitable product of there being "no limit to the ingenuity of science and no limit to the deviltry of human beings." This dark truth about the era of human-controlled kinetic weapons of mass destruction that so concerned Stone remains true today of the emerging range of increasingly automated systems that may now be fusing scientific ingenuity with a silicon-based deviltry of all its own.
For most of history, from stones to siege guns, warfare consisted of hurling some amount of mass with sufficient energy to do serious harm. The general trend has been toward increasing mass and energy, giving weapons greater range. Yet, until the first automated guidance systems came into play during World War II, the information content of weaponry was quite small, reducing accuracy. But what began with the first ballistic and cruise missiles in 1944 quickened in the following decades, to the point that some missiles had electronic brains of their own to guide them in flight, like the American Tomahawk that went into service in 1983. Even though it's launched at human command, once underway its brain does all of the sensing and maneuvering, over whatever distance, with precision accuracy.
And this increasing information content of weapons isn't just for long-range use. The stalwart Ukrainian defense that has hammered so hard at Russian tanks and helicopters has been greatly enhanced by smart, short-range anti-tank Javelins and anti-aircraft Stingers. Thus, the much heavier and more numerous invading forces have been given a very hard time by defenders whose weapons have brains of their own.
But this is just a small slice of the rising space into which automated systems are moving. Beyond long-range missile strikes and shorter-range battlefield tactics lies a wide variety of other military applications for artificial intelligence. At sea, for example, the Chinese have more than two dozen types of mines, some of which have significant autonomous capabilities for sensing the type of enemy vessel and then rising from the seafloor to attack it. Needless to say, U.S. Navy Ford-class carriers, costing $10 billion-plus apiece, can be mortally threatened by these small, smart, cheap weapons. As for the Russians, their advances in naval robotics have led to the creation of an autonomous U-bot that can dive deep and locate fiber-optic links, either tapping into or severing them. More than 95 percent of international communications move through the roughly 400 such links that exist around the world. So, this bot, produced even in very small numbers, has great potential as a global weapon of mass disruption.
There are other ways in which silicon-based intelligence is being used to bring about the transformation of war in the twenty-first century. In cyberspace, with its botnets and spiders, everything from economy-draining strategic crime to broad infrastructure attacks is greatly empowered by increasingly intelligent autonomous systems. In outer space, the Chinese now have a robot smart enough to sidle up to a satellite and place a small explosive (less than 8 lbs.) in its exhaust nozzle, and when the shaped charge goes off, the guts of the satellite are blown without external debris. Mass disruption is coming to both the virtual and orbital realms.
The foregoing prompts the question of what the United States and its friends and allies are doing in response to these troubling advances in the use of artificial intelligence to create new military capabilities. The answer is as troubling as the question: too little. Back in 2018, then-Under Secretary of Defense for Research and Engineering Michael Griffin acknowledged that "There might be an artificial arms race, but we're not in it yet." There was a glimmer of hope that the Americans might be lacing up their running shoes and getting in the AI arms race when Eric Lander became President Joe Biden's science advisor in January 2021, as he had publicly stated that China is making breathtaking progress in robotics and that the United States needed to get going. But Lander apparently didn't play well with others and resigned in February 2022. Given that NATO and other friends tend to move in tandem with the Americans, all are too slow getting off the mark.
Beyond personnel issues, the United States and other liberal and free-market societies are having some trouble ramping up to compete in the robot arms race for three other reasons. The first is conceptual, with many in military, political, and academic circles taking the view that advances in artificial intelligence do not fit classical notions and patterns of weapons-based arms races. It is hard to make the case for urgency, for the need to race, when there doesn't even seem to be a race underway.
Next, at the structural level, the United States and other free-market-based societies tend to see most research in robotics undertaken by the private sector. The Pentagon currently spends about 1 percent of its budget (just a bit over $7 billion) on advancing artificial intelligence. And in the American private sector, much of the research in AI is focused on improving business practices and increasing consumer comfort. In China, by contrast, about 85 percent of robotics research is state-funded and military-related. The Russians are following a kind of hybrid system, with the Putin government funding some 400 companies' research in strategic robotics. As Putin has said in a number of his speeches, the leader in artificial intelligence will become master of the world. So, it seems that the structure of market societies is making it a bit harder to compete with authoritarians who can, with the stroke of a pen, set their countries' directions in the robot arms race and provide all necessary funding.
The final impediment to getting wholeheartedly into the robot arms race is ethical. Throughout the free world, there is considerable concern about the idea of giving kill decisions in battle over to autonomous machines. Indeed, there is so much resistance to this possibility that a major initiative at the United Nations has sought to outlaw lethal autonomous weapon systems (LAWS). Civil society NGOs have supported this proposed ban and drawn celebrity adherents like Steve Wozniak and Elon Musk to the cause. Pope Francis has joined this movement, too.
One of the main concerns of all these objectors is the possibility that robots will unwittingly kill innocent non-combatants. Of course, human soldiers have always caused civilian casualties, and still do. Given the human penchant for cognitive difficulties arising from fatigue, anger, desire for revenge, or just the fog of war, there is an interesting discussion that needs to be had about whether robotic warriors will be likely to cause more or possibly less collateral damage than human soldiers do.
So far, the United States, Britain, and a few other democracies have resisted adopting a ban on weaponized robotics; but the increasingly heated discourse about killer robots even in these lands has slowed their development and use. Needless to say, neither China nor Russia has shown even the very slightest hesitation about developing military robots, giving them the edge in this arms race.
It is clear that the ideal first expressed eighty years ago in the opening clause of Isaac Asimov's First Law of Robotics, "A robot may not injure a human being," is being widely disregarded in many places. And those who choose to live by the First Law, or whose organizational structures impede swift progress in military robotics, are doomed to fall fatally behind in an arms race now well underway. It is a race to build autonomous weapons that will have as much impact on military affairs in the twenty-first century as aircraft did on land and naval warfare in the twentieth century. Simply put, sitting out this arms race is not an option.
John Arquilla is Distinguished Professor Emeritus at the United States Naval Postgraduate School and author, most recently, of Bitskrieg: The New Challenge of Cyberwarfare. The views expressed are his alone.
Image: Flickr.
See the article here:
Sitting Out of the Artificial Intelligence Arms Race Is Not an Option - The National Interest Online
Posted in Artificial Intelligence
Comments Off on Sitting Out of the Artificial Intelligence Arms Race Is Not an Option – The National Interest Online
New York City's New Law Regulating the Use of Artificial Intelligence in Employment Decisions – JD Supra
Posted: at 11:44 pm
On Nov. 10, 2021, the New York City Council passed a bill that regulates employers' and employment agencies' use of automated employment decision tools in making employment decisions. The bill was returned without Mayor Bill de Blasio's signature and lapsed into law on Dec. 11, 2021. The new law takes effect on Jan. 1, 2023. This new law is part of a growing trend towards examining and regulating the use of artificial intelligence (AI) in hiring, promotional and other employment decisions.
Requirements of the New Law. The new law regulates employers' and employment agencies' use of automated employment decision tools on candidates and employees residing in New York City. An "automated employment decision tool" refers to any computational process, derived from machine learning, statistical modeling, data analytics, or artificial intelligence, that issues simplified output, including a score, classification, or recommendation, that is used to substantially assist or replace discretionary decision making for making employment decisions that impact natural persons.
The new law prohibits an employer or employment agency from using an automated employment decision tool in making an employment decision unless, prior to using the tool, the following requirements are met: (1) the tool has been subject to a bias audit within the last year; and (2) a summary of the results of the most recent bias audit and distribution data for the tool have been made publicly available on the employer's or employment agency's website. A "bias audit" is defined as an impartial evaluation by an independent auditor, which includes the testing of an automated employment decision tool to assess the tool's disparate impact on persons of any component 1 category required to be reported by employers pursuant to 42 U.S.C. 2000e-8(c) and 29 C.F.R. 1602.7.
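The statute does not prescribe a particular metric, but a bias audit of this kind typically compares selection rates across the reported demographic categories. A minimal, purely illustrative sketch of such an impact-ratio calculation follows (the names and the 0.8 threshold are assumptions, not requirements of the law).

def impact_ratios(outcomes):
    # outcomes maps a demographic category to (number selected, number assessed).
    # Each category's selection rate is divided by the highest observed rate;
    # ratios well below 1.0 (for example, under 0.8) may indicate disparate impact.
    rates = {group: selected / assessed
             for group, (selected, assessed) in outcomes.items() if assessed > 0}
    top_rate = max(rates.values())
    return {group: rate / top_rate for group, rate in rates.items()}

# Example: impact_ratios({"group_a": (50, 100), "group_b": (30, 100)})
# returns {"group_a": 1.0, "group_b": 0.6}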
The new law also requires employers and employment agencies to satisfy two notice requirements. First, at least 10 business days before using the tool, the employer or employment agency must notify a candidate or employee who resides in New York City of the following: (1) that an automated employment decision tool will be used in assessing the candidate or employee; and (2) the job qualifications and characteristics that the tool will use in the assessment. The employer or employment agency must allow the candidate or employee to request an alternative process or accommodation. However, the law is silent as to the employer's or employment agency's obligation to provide such an alternative process or accommodation. Second, the employer or employment agency must disclose on their website, or make available to a candidate or employee within 30 days of receiving a written request, the following: (1) information about the type of data collected for the automated employment decision tool; (2) the source of the collected data; and (3) the employer's or employment agency's data retention policy.
Penalties for Violations. Violations of the new law will result in liability for a civil penalty of up to $500 for the first violation and each additional violation occurring on the same day as the first violation, and a civil penalty between $500 and $1,500 for each subsequent violation. Importantly, each day the automated employment decision tool is used in violation of the law constitutes a separate violation and the failure to provide any of the required notices constitutes a separate violation.
Recommendations for Timely Compliance. Employers with candidates or employees who reside in New York City can take several steps now to facilitate compliance with this new requirement when it goes into effect on Jan. 1, 2023. Employers should ensure that any covered automated employment decision tool that they plan to use in 2023 or thereafter to assess New York City candidates and employees is subject to a bias audit by an independent auditor and the results of such audit are available on their website. Additionally, we recommend that employers and employment agencies work with their legal counsel to develop and implement practices that comply with the notice provisions required by the new law.
Other Regulations on Automated Employment Decision Tools. Several states and cities have passed or are considering similar laws regarding the use of artificial intelligence and other technology in employment decisions. For example, Illinois' Artificial Intelligence Video Interview Act, which took effect Jan. 1, 2020, requires employers using AI interview technology to provide advance notice and an explanation of the technology to applicants, to obtain the applicant's consent to use the technology and to comply with restrictions on the distribution and retention of videos. Similarly, Maryland enacted a law that took effect Oct. 1, 2020, which requires employers to obtain an applicant's written consent and a waiver prior to using facial recognition technology during pre-employment job interviews. California and Washington, D.C. have also proposed legislation that would address the use of AI in the employment context.
Additionally, on Oct. 28, 2021, the U.S. Equal Employment Opportunity Commission (EEOC) launched a new initiative aimed at ensuring artificial intelligence and other technological tools used in making employment decisions comply with the federal civil rights laws. As part of its initiative, the EEOC will gather information about the adoption, design and impact of employment-related technologies, and issue technical assistance to provide employers with guidance on algorithmic fairness and the use of artificial intelligence in employment decisions.
See original here:
Posted in Artificial Intelligence
Comments Off on New York City's New Law Regulating the Use of Artificial Intelligence in Employment Decisions – JD Supra
Flight Simulator in the Age of Virtual Reality (VR) and Artificial Intelligence (AI) – ReAnIn Analysis – PR Newswire
Posted: at 11:44 pm
HYDERABAD, India, April 14, 2022 /PRNewswire/ -- According to ReAnIn, the global aircraft simulation market was valued at USD 5,837.59 million in the year 2021 and is projected to reach USD 8,952.96 million by the year 2028, registering a CAGR of 6.3% during the forecast period. Increasing demand for pilots for commercial aircraft, significant cost savings associated with simulators in comparison with training in actual aircraft, and technological advancements in simulators are primary drivers for the aircraft simulation market. However, the COVID-19 pandemic had a severe impact on the growth of this market, as various restrictions were imposed and international borders were closed for much of 2020. Also, there is a consensus among industry experts that recovery to the pre-pandemic level might take a few years.
Download free sample: Global Aircraft Simulation Market Growth, Share, Size, Trends and Forecast (2022 - 2028)
More than 260,000 new pilots for the civil aviation industry will be required over the next decade according to CAE, a leading player in the aircraft simulators market
According to CAE's pilot demand outlook report, there were about 387,000 active pilots for civil aircraft in 2019 which is expected to increase to about 484,000 in 2029. Moreover, more than 167,000 pilots will have to be replaced during this time period. Hence, about 264,000 new pilots will have to be trained between 2019 and 2029. As simulator is an important aspect of pilot training, demand for aircraft simulators is expected to increase significantly in the near future. The Asia Pacific is expected to be the growth engine with the highest demand of ~91,000, more than one-third of these new pilots.
Furthermore, technological advancements such as virtual reality (VR) and artificial intelligence (AI) are expected to fuel the growth of the flight simulators market. In April 2019, the US Air Force launched a Pilot Training Next class using VR headsets and advanced AI biometrics. The use of VR and AI significantly reduced the training period and cost. The usual pilot training system takes about a year, while VR-based training was completed in just 4 months. Moreover, the cost of VR-based flight training was about US$1,000 per unit, while the usual cost was US$4.5 million for a legacy simulator. In April 2021, the European Union Aviation Safety Agency (EASA) granted the first certificate for a Virtual Reality (VR) based Flight Simulation Training Device (FSTD).
Key Highlights of the Report:
Access the report description on: Global Aircraft Simulation Market
Market Segmentation:
ReAnIn has segmented the global aircraft simulation market by:
Competitive Landscape
Key players in the aircraft simulation market include CAE Inc., Boeing Company, Collins Aerospace, FlightSafety International, L3Harris Technologies, Precision Flight Controls, SIMCOM Aviation Training, Frasca International, TRU Simulation + Training, Airbus Group, Indra Sistemas, and Thales Group.
Know more about this report: Global Aircraft Simulation Market
About ReAnIn
ReAnIn provides end-to-end market research services which span across different support areas such as syndicated market research reports, custom market research reports, consulting, long-term engagement, and flash delivery. We are proud to have more than 100 Fortune 500 companies in our clientele. We are a client-first organization and we are known not just for meeting our client expectations but for exceeding them.
Media Contact:
Name: Deepak Kumar | Email: [emailprotected] | Phone: +1 469-730-0260
SOURCE Reanin Research & Consulting Private Limited
The rest is here:
Posted in Artificial Intelligence
Comments Off on Flight Simulator in the Age of Virtual Reality (VR) and Artificial Intelligence (AI) – ReAnIn Analysis – PR Newswire
How Artificial Intelligence Streamlines and Improves the Online Dating Experience – Analytics Insight
Posted: at 11:44 pm
How Artificial Intelligence Streamlines and Improves the Online Dating Experience
Online dating through dating apps is fast becoming one of the most efficient and effective ways to meet new people. Whether you're seeking love, fun, or adventure, a variety of different apps throughout the world are connecting people to help them achieve their relationship goals.
Artificial intelligence can play an integral part in the online dating experience by collecting your data and customizing your experience. You might have a renewed sense of appreciation for technology when you learn just how helpful AI can be.
It may seem like the profiles you see on dating apps are completely random or based on your provided location. While your current and home locations do contribute to the selection of profiles you're shown, machine learning algorithms also play a part.
This innovative technology learns from your profile data and swiping choices to refine your potential date options. Rather than being shown a random selection of suitors, you're shown those that algorithms deem more matched to your interests and behavior.
It might seem like a breach of privacy to have artificial intelligence screening your messages, but it actually might be doing you a favor. As millions of people begin using dating apps, unsolicited content has become a significant problem. People have experienced online stalking, abuse, and unsolicited pictures. AI has been a game-changer in filtering out this content and giving users the choice to view it or not.
For example, Bumble introduced a safety feature called Private Detector with AI that detects unsought pictures and allows users to decide if they wish to open them. According to some reports, this feature is 98% accurate. Essentially, the goal with AI in this respect is to make sure all users are only seeing and receiving the content they want to be delivered to them.
Safety is a crucial consideration in the dating scene, regardless of whether you're dating online or in person. While artificial intelligence is of little use for secure in-person dating, it's proving practical in a number of online dating platforms' safety and security measures.
AI can detect and ban scammers, minimize how much spam content people receive, and even exert some control over the photos users upload. When you add a picture to your profile, it can determine that the photo is real, current, of you, and not overly edited to mislead prospective partners. Some AI security measures are also so advanced that they can pick up fake accounts to potentially save people from a considerable amount of unnecessary heartache.
Hinge promotes itself as an app that's designed to be deleted. When you begin chatting with someone on this platform and other dating apps, the whole idea is to get off the app, meet in person, and see where your interactions take you.
However, some people need additional encouragement to move their communications into the real world, and you can rest assured that AI is there to provide those hints. Many platforms offer ideas for what to do when you meet in person and send prompts as gentle reminders about how fun it can be to meet new people.
Almost half of Americans say that dating is much harder than it used to be, which may be surprising for those who have had success meeting the loves of their lives on dating apps. Realizing that not everyone has had an ideal experience, some app creators are utilizing neuro-linguistic programming to improve potential dating experiences.
Some apps, like Mei Messaging App and Crush, will analyze the contents of messages to help potential couples plan dates, while Loveflutter will use the same information to provide meeting place suggestions and calculate compatibility.
Even the most skilled writers have trouble writing about themselves. That task can be even more challenging when you're trying to make yourself sound desirable to prospective dates in your dating bio.
Not all is lost if you haven't created a winning profile the first time around. Some dating apps are relying on AI to scan your profile and offer suggestions to help make yourself more appealing to other people.
Some of the most common suggestions relate to the choice of profile picture, such as having too many people in a photo or not clearly showing your face. The app may also recommend including more relevant information about yourself, such as your hobbies, career, pets, and favorite foods.
You might assume that you're in complete control of your online dating experience, but artificial intelligence is holding some of the cards. The next time you swipe right on a potential new love interest, you might just wonder whether it was you or technology that brought you together.
The rest is here:
Posted in Artificial Intelligence
Comments Off on How Artificial Intelligence Streamlines and Improves the Online Dating Experience – Analytics Insight
Artificial intelligence can spot the signs of PTSD in your text messages – Study Finds
Posted: at 11:44 pm
EDMONTON, Alberta -- A text message may be able to reveal if someone is dealing with post-traumatic stress disorder (PTSD), a new study finds. Researchers from the University of Alberta say a machine learning program, a form of artificial intelligence, is capable of reading between the lines to find potential warning signs in the way people write.
The team believes this program could become an inexpensive tool that helps mental health professionals detect and diagnose cases of PTSD or other disorders. Psychiatry PhD candidate Jeff Sawalha performed a sentiment analysis of texts using a dataset created by Jonathan Gratch from USC's Institute for Creative Technologies.
Study authors explain that a sentiment analysis takes a large amount of data and categorizes it. In this case, the model took a massive amount of texts and sorted them according to positive and negative thoughts.
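As a rough illustration of this kind of pipeline (not the study's actual code), each response can be scored with an off-the-shelf sentiment analyzer and the per-participant averages fed to a simple classifier; the library choices below are assumptions about one possible toolchain.

import numpy as np
from nltk.sentiment import SentimentIntensityAnalyzer  # requires nltk.download("vader_lexicon")
from sklearn.linear_model import LogisticRegression

sia = SentimentIntensityAnalyzer()

def participant_features(responses):
    # Average negative/neutral/positive sentiment across one participant's texts.
    scores = [sia.polarity_scores(text) for text in responses]
    return [np.mean([s[key] for s in scores]) for key in ("neg", "neu", "pos")]

# X: one feature row per participant; y: 1 = PTSD, 0 = no PTSD (clinical labels)
# model = LogisticRegression().fit(X_train, y_train)
# accuracy = model.score(X_test, y_test)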
"We wanted to strictly look at the sentiment analysis from this dataset to see if we could properly identify individuals with PTSD just using the emotional content of these interviews," Sawalha says in a university release.
The text sampling came from 250 semi-structured interviews conducted by an artificial interviewer (Ellie) who spoke with real participants using video conferencing calls. Eighty-seven people had PTSD while the other 188 did not.
From their text responses, the team was able to identify people with PTSD through their scores reflecting how often their words displayed neutral or negative thoughts.
"This is in line with a lot of the literature around emotion and PTSD. Some people tend to be neutral, numbing their emotions and maybe not saying too much. And then there are others who express their negative emotions," Sawalha says.
Study authors note that this process isn't black and white. For example, a phrase like "I didn't hate that" could be confusing for the algorithm. Despite that, the machine learning system was able to detect PTSD patients with 80 percent accuracy.
"Text data is so ubiquitous, it's so available, you have so much of it," Sawalha continues. "From a machine learning perspective, with this much data, it may be better able to learn some of the intricate patterns that help differentiate people who have a particular mental illness."
The team is planning to integrate other types of data, including speech patterns and human motions, which they say may help the system spot mental health disorders better. Moreover, signs of neurological conditions like Alzheimer's disease are detectable through a person's ability to speak.
"Unlike an MRI that takes an experienced person to look at it, this is something people can do themselves. I think that's the direction medicine is probably going, toward more screening tools," says Russ Greiner, a professor in the Department of Computing Science.
Having tools like this going forward could be beneficial in a post-pandemic world, Sawalha concludes.
The study is published in the journal Frontiers in Psychiatry.
Read the original here:
Artificial intelligence can spot the signs of PTSD in your text messages - Study Finds
Posted in Artificial Intelligence
Comments Off on Artificial intelligence can spot the signs of PTSD in your text messages – Study Finds
Global Personal Artificial Intelligence and Robotics Markets, 2022-2027: Leading Solutions for Personalized AI and Robotics are Safety, Information,…
Posted: March 31, 2022 at 3:12 am
Dublin, March 28, 2022 (GLOBE NEWSWIRE) -- The "Personal Artificial Intelligence and Robotics Market by AI and Robot Type, Components, Devices and Solutions 2022 - 2027" report has been added to ResearchAndMarkets.com's offering.
This report evaluates the market for personalized robots, bot software, and systems. The report also assesses the impact of AI and evaluates the market for AI-enhanced robots and robotic systems for the consumer market. It includes analysis and forecasts for personalized AI and robotics from 2022 through 2027.
There is an emerging service robot market that has very different dynamics than traditional industrial robotics. Service robots are very personal and include both physical robots as well as logical (e.g. software) bots that act on behalf of their owners, managers, and/or controllers. Service robots will ultimately evolve beyond purpose-built machines to become more general-purpose tools for supporting human safety and lifestyle needs.
While Asia is the predominant market today, we see the United States as a high growth market as the USA has grossly underinvested in the personal healthcare infrastructure market. Largely depending upon informal family support, personalized care represents an industry that is sustained by poorly paid workers - largely immigrants and women of color. This is poised to change with carebots, programmed to oversee the care for the elderly and/or those with healthcare issues that require constant attention.
We see substantial overall industry growth across a wide range of robot types that engage in diverse tasks such as home cleaning, personalized healthcare service, home security, autonomous cars, robotic entertainment and toys, carebots services, managing daily schedules, and many more assistive tasks. Furthermore, we see a few key factors such as the aging population, personalization services trends, and robot mobility will drive growth in this industry segment.
In addition, developments in artificial intelligence and cognitive computing support the inclusion of these technologies with virtually every type of robot including general-purpose bots that act on behalf of their owner. The combination of AI and IoT (AIoT) will further support market development, leading to semi-autonomous markets that interact with humans directly as well as other machines, and assets through interconnected systems.
Select Report Findings:
Key Topics Covered:
1 Executive Summary
2 Introduction
2.1 Overall AI and Robotics Market
2.2 Personal AI and Robotics Market
2.3 Development of Autonomous Agents and Care Bots
2.4 AI Technology and Deep Learning Hacks
2.5 Contextual Awareness and Intelligent Decision Support Systems
2.6 Aging Population, Mass Digitization, and Human-Robotics Interaction Accelerates Growth
2.7 Evolution of Personal Assistants and Smart Advisory Services
2.8 Price Declines Drive Adoption for Low-Cost Robotics
2.9 Open Software Platforms Accelerate Growth but Raises Ethical Concerns
2.10 Technical Complexity and Lack of Skilled Robot Designer May Hinder Growth
3 Cloud Robotics to Drive Democratization and Expanded Usage
3.1 Enabling Technologies
3.1.1 Fifth Generation Cellular
3.1.2 Teleoperation
3.1.3 Cloud Computing
3.1.4 Edge Computing
3.2 Market Opportunities
4 Personal AI and Robotics Market, Application, and Ecosystem Impact
4.1 Market Segmentation and Application Scenario
4.1.1 Personal Robots and Robotics Components
4.1.2 Digital Personal Assistant Services
4.1.3 AI-Based System and Analytics
4.2 Economic Impact including Job Market
4.3 Investment Trends in Robotics and AI Systems
4.4 Robotics Patents a Key Area to Watch
5 Personal AI and Robotics Market Drivers and Challenges
5.1 Personal AI and Robotics Market Dynamics
5.2 Personal AI and Robotics Market Drivers
5.3 Personal AI and Robotics Market Challenges
6 Personal AI and Robot Market Outlook and Forecasts 2022 - 2027
6.1 Aggregate Global Market Forecast 2022 - 2027
6.2 Personal Robot Market Forecast 2022 - 2027
6.3 Digital Personal Assistant Market Forecast 2022 - 2027
6.4 Personal AI-Based Solution Market Forecast 2022 - 2027
7 AI and Robotics Company Analysis
7.1 Assessment of Select Market Leaders
7.2 Honda Motor Co. Ltd.
7.3 Samsung Electronics Co Ltd.
7.4 iRobot Corporation
7.5 Sony Corporation
7.6 F&P Robotics AG
7.7 ZMP INC.
7.8 Segway Inc.
7.9 Neato Robotics, Inc.
7.10 Ecovacs Robotics, Inc.
7.11 Hasbro, Inc.
7.12 Parrot SA
7.13 Geckosystems Intl. Corp.
7.14 Hoaloha Robotics
7.15 Lego Education
7.16 Sharp Corporation
7.17 Toyota Motor Corporation
7.18 WowWee Group Limited
7.19 Lely Group
7.20 Intel Corporation
7.21 AsusTek Computer Inc.
7.22 Amazon.com, Inc
7.23 RealDoll
7.24 True Companion
7.25 Robotbase
7.26 Dongbu Group
7.27 Softbank Robotics
7.28 Buddy
7.29 Jibo
7.30 NTT DoCoMo
7.31 Rokid
7.32 MJI Robotics
7.33 Cubic
7.34 5 Elements Robotics
7.35 Branto
7.36 Aido
7.37 Vinclu Gatebox
7.38 Future Robot
7.39 Apple Inc.
7.40 Artificial Solutions
7.41 Clara Labs
7.42 Google
7.43 Microsoft Corporation
7.44 Speaktoit Inc.
7.45 Facebook
7.46 SK Telecom Co, Ltd.
7.47 motion.ai
7.48 Indigo
7.49 24me
7.50 Wunderlist
7.51 Hound
7.52 Mycroft
7.53 Ubi
7.54 EasilyDo
7.55 Evi
7.56 Operator
7.57 Charlie
7.58 Alfred
7.59 x.ai
7.60 AIVC
7.61 EVA
7.62 NVidia
7.63 Tesla Motors
7.64 Baidu
7.65 SparkCognition
8 Personal AI and Robot Use Cases
8.1 Cleaning Robots
8.2 Entertainment Robots
8.3 Home Security and Surveillance
8.4 Wheel-powered Robot
8.5 PARO, Advanced interactive Robot
8.6 Vortex, a Programmable Robot
8.7 ROBEAR, Nursing Care Robot
8.8 AV1, A Small Telepresence Robot
9 Conclusions and Recommendations
9.1 Recommendations to Robotics Makers
9.2 Recommendations to Investors
9.3 Recommendations for AI Companies
9.4 Recommendations for Equipment Manufacturers
9.5 Future of Personal AI
For more information about this report visit https://www.researchandmarkets.com/r/vplny7
The rest is here:
Posted in Artificial Intelligence
Comments Off on Global Personal Artificial Intelligence and Robotics Markets, 2022-2027: Leading Solutions for Personalized AI and Robotics are Safety, Information,…
West Ham United Announces Fetch.ai as their Official Artificial Intelligence Partner – Geeks World Wide
Posted: at 3:12 am
Originally posted here. By: NewsBTC
Fetch.ai is West Ham United's exclusive official artificial intelligence partner and the Premier League giant's non-exclusive Official Global Partner. Under the deal, Fetch.ai has also been designated as the West Ham United Women's football club's non-exclusive official partner. Through this partnership, Fetch.ai and West Ham United will leverage and promote the impact of artificial intelligence in enhancing businesses and daily lives.
Fetch.ai Brand to be Displayed in West Ham United LEDs
Subsequently, West Ham United will promote the Fetch.ai brand and its products in their mega London Stadium on their LED perimeter advertising boards and displays, marketing Fetch.ai's smart parking concept, upcoming social media platform, and future smart solutions.
West Ham United's London Stadium at the Queen Elizabeth Olympic Park has a capacity of 67,000 fans. It is larger than Tottenham Hotspur's £1 billion stadium. In London, West Ham United's mega stadium is second only to the Wembley and Twickenham stadiums.
Nathan Thompson, the Commercial Director of West Ham United, said he was delighted with the partnership.
"We are delighted to announce our first Official Artificial Intelligence Partner and welcome Fetch.ai to the Club at an exciting time for the business, and the industry. We're looking forward to working with Fetch.ai on their smart parking concept, social media platform, and upcoming projects that will provide smart solutions for fans."
Using Artificial Intelligence to Drive Crypto Solutions
The developers of Fetch.ai are firm believers that smart contracts can, as their name implies, be smart. Fetch.ai integrates artificial intelligence and machine learning for the building and deployment of smart code to deliver enhanced service delivery for users, businesses, and organizations.
Through their secure and decentralized blockchain, Fetch.ai can securely launch their Autonomous Economic Agents (AEAs), which represent connected devices, users, or organizations and act on their behalf on the Fetch.ai network. These agents depend on artificial intelligence and are created as digital citizens.
They are tasked with securely and instantaneously connecting to vast data sources and hardware environments, effectively eliminating the need for aggregators. Therefore, by using artificial intelligence solutions in creative ways, the founder of Fetch.ai, Humayun Sheikh, believes it will power the future of world-class Premier League football for fans in the U.K. and worldwide.
Visit link:
Posted in Artificial Intelligence
Comments Off on West Ham United Announces Fetch.ai as their Official Artificial Intelligence Partner – Geeks World Wide
Improving biodiversity protection through artificial intelligence – Nature.com
Posted: at 3:12 am
A biodiversity simulation framework
We have developed a simulation framework modelling biodiversity loss to optimize and validate conservation policies (in this context, decisions about data gathering and area protection across a landscape) using a reinforcement learning (RL) algorithm. We implemented a spatially explicit individual-based simulation to assess future biodiversity changes based on natural processes of mortality, replacement and dispersal. Our framework also incorporates anthropogenic processes such as habitat modifications, selective removal of a species, rapid climate change and existing conservation efforts. The simulation can include thousands of species and millions of individuals and track population sizes and species distributions and how they are affected by anthropogenic activity and climate change (for a detailed description of the model and its parameters see Supplementary Methods and Supplementary Table 1).
In our model, anthropogenic disturbance has the effect of altering the natural mortality rates on a species-specific level, which depends on the sensitivity of the species. It also affects the total number of individuals (the carrying capacity) of any species that can inhabit a spatial unit. Because sensitivity to disturbance differs among species, the relative abundance of species in each cell changes after adding disturbance and upon reaching the new equilibrium. The effect of climate change is modelled as locally affecting the mortality of individuals based on species-specific climatic tolerances. As a result, more tolerant or warmer-adapted species will tend to replace sensitive species in a warming environment, thus inducing range shifts, contraction or expansion across species depending on their climatic tolerance and dispersal ability.
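As a purely illustrative sketch of this coupling (the exact functional forms used by CAPTAIN are specified in the Supplementary Methods, and the names below are placeholders), a species' local death probability could increase with both disturbance and climatic mismatch:

def death_probability(base_mortality, sensitivity, disturbance,
                      local_temp, optimal_temp, climate_tolerance):
    # base_mortality: natural per-step death probability of the species
    # sensitivity: species-specific response to anthropogenic disturbance (0-1)
    # disturbance: local anthropogenic disturbance in the cell (0-1)
    # climatic stress grows as local temperature departs from the species' optimum
    climate_stress = abs(local_temp - optimal_temp) / climate_tolerance
    p = base_mortality * (1.0 + sensitivity * disturbance) * (1.0 + climate_stress)
    return min(p, 1.0)  # death probabilities are capped at 1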
We use time-forward simulations of biodiversity in time and space, with increasing anthropogenic disturbance through time, to optimize conservation policies and assess their performance. Along with a representation of the natural and anthropogenic evolution of the system, our framework includes an agent (that is, the policy maker) taking two types of actions: (1) monitoring, which provides information about the current state of biodiversity of the system, and (2) protecting, which uses that information to select areas for protection from anthropogenic disturbance. The monitoring policy defines the level of detail and temporal resolution of biodiversity surveys. At a minimal level, these include species lists for each cell, whereas more detailed surveys provide counts of population size for each species. The protection policy is informed by the results of monitoring and selects protected areas in which further anthropogenic disturbance is maintained at an arbitrarily low value (Fig. 1). Because the total number of areas that can be protected is limited by a finite budget, we use an RL algorithm42 to optimize how to perform the protecting actions based on the information provided by monitoring, such that it minimizes species loss or other criteria depending on the policy.
We provide a full description of the simulation system in the Supplementary Methods. In the sections below we present the optimization algorithm, describe the experiments carried out to validate our framework and demonstrate its use with an empirical dataset.
In our model we use RL to optimize a conservation policy under a predefined policy objective (for example, to minimize the loss of biodiversity or maximize the extent of protected area). The CAPTAIN framework includes a space of actions, namely monitoring and protecting, that are optimized to maximize a reward R. The reward defines the optimality criterion of the simulation and can be quantified as the cumulative value of species that do not go extinct throughout the timeframe evaluated in the simulation. If the value is set equal across all species, the RL algorithm will minimize overall species extinctions. However, different definitions of value can be used to minimize loss based on evolutionary distinctiveness of species (for example, minimizing phylogenetic diversity loss), or their ecosystem or economic value. Alternatively, the reward can be set equal to the amount of protected area, in which case the RL algorithm maximizes the number of cells protected from disturbance, regardless of which species occur there. The amount of area that can be protected through the protecting action is determined by a budget Bt and by the cost of protection $C_t^c$, which can vary across cells c and through time t.
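For example, under the species-value criterion described above, the reward at the end of an episode could be tallied as follows (a minimal sketch; the function and data structures are illustrative rather than CAPTAIN's actual interface):

def episode_reward(final_population, species_value):
    # final_population: species id -> total number of individuals at the last step
    # species_value: species id -> value (1.0 for every species to minimize extinctions;
    # phylogenetic, ecosystem or economic values for other objectives)
    return sum(value for species, value in species_value.items()
               if final_population.get(species, 0) > 0)

# With all values set to 1.0, the returned reward is simply the number of surviving species.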
The granularity of monitoring and protecting actions is based on spatial units that may include one or more cells and which we define as the protection units. In our system, protection units are adjacent, non-overlapping areas of equal size (Fig. 1) that can be protected at a cost that cumulates the costs of all cells included in the unit.
The monitoring action collects information within each protection unit about the state of the system St, which includes species abundances and geographic distribution:
$$S_t = \{H_t, D_t, F_t, T_t, C_t, P_t, B_t\}$$
(1)
where Ht is the matrix with the number of individuals across species and cells, Dt and Ft are matrices describing anthropogenic disturbance on the system, Tt is a matrix quantifying climate, Ct is the cost matrix, Pt is the current protection matrix and Bt is the available budget (for more details see Supplementary Methods and Supplementary Table 1). We define as feature extraction the result of a function X(St), which returns for each protection unit a set of features summarizing the state of the system in the unit. The number and selection of features (Supplementary Methods and Supplementary Table 2) depends on the monitoring policy X, which is decided a priori in the simulation. A predefined monitoring policy also determines the temporal frequency of this action throughout the simulation, for example, only at the first time step or repeated at each time step. The features extracted for each unit represent the input upon which a protecting action can take place, if the budget allows for it, following a protection policy Y. These features (listed in Supplementary Table 2) include the number of species that are not already protected in other units, the number of rare species and the cost of the unit relative to the remaining budget. Different subsets of these features are used depending on the monitoring policy and on the optimality criterion of the protection policy Y.
We do not assume species-specific sensitivities to disturbance (parameters ds, fs in Supplementary Table 1 and Supplementary Methods) to be known features, because a precise estimation of these parameters in an empirical case would require targeted experiments, which we consider unfeasible across a large number of species. Instead, species-specific sensitivities can be learned from the system through the observation of changes in the relative abundances of species (x3 in Supplementary Table 2). The features tested across different policies are specified in the subsection Experiments below and in the Supplementary Methods.
The protecting action selects a protection unit and resets the disturbance in the included cells to an arbitrarily low level. A protected unit is also immune from future anthropogenic disturbance increases, but protection does not prevent climate change in the unit. The model can include a buffer area along the perimeter of a protected unit, in which the level of protection is lower than in the centre, to mimic the generally negative edge effects in protected areas (for example, higher vulnerability to extreme weather). Although protecting a disturbed area theoretically allows it to return to its initial biodiversity levels, population growth and species composition of the protected area will still be controlled by the death-replacement-dispersal processes described above, as well as by the state of neighbouring areas. Thus, protecting an area that has already undergone biodiversity loss may not result in the restoration of its original biodiversity levels.
The protecting action has a cost determined by the cumulative cost of all cells in the selected protection unit. The cost of protection can be set equal across all cells and constant through time. Alternatively, it can be defined as a function of the current level of anthropogenic disturbance in the cell. The cost of each protecting action is taken from a predetermined finite budget and a unit can be protected only if the remaining budget allows it.
We frame the optimization problem as a stochastic control problem where the state of the system St evolves through time as described in the section above (see also Supplementary Methods), but it is also influenced by a set of discrete actions determined by the protection policy Y. The protection policy is a probabilistic policy: for a given set of policy parameters and an input state, the policy outputs an array of probabilities associated with all possible protecting actions. While optimizing the model, we extract actions according to the probabilities produced by the policy to make sure that we explore the space of actions. When we run experiments with a fixed policy instead, we choose the action with highest probability. The input state is transformed by the feature extraction function X(St) defined by the monitoring policy, and the features are mapped to a probability through a neural network with the architecture described below.

In our simulations, we fix monitoring policy X, thus predefining the frequency of monitoring (for example, at each time step or only at the first time step) and the amount of information produced by X(St), and we optimize Y, which determines how to best use the available budget to maximize the reward. Each action A has a cost, defined by the function Cost(A, St), which here we set to zero for the monitoring action (X) across all monitoring policies. The cost of the protecting action (Y) is instead set to the cumulative cost of all cells in the selected protection unit. In the simulations presented here, unless otherwise specified, the protection policy can only add one protected unit at each time step, if the budget allows, that is, if Cost(Y, St) ≤ Bt.
In our simulations, we fix monitoring policy X, thus predefining the frequency of monitoring (for example, at each time step or only at the first time step) and the amount of information produced by X(St), and we optimize Y, which determines how to best use the available budget to maximize the reward. Each action A has a cost, defined by the function Cost(A, St), which here we set to zero for the monitoring action (X) across all monitoring policies. The cost of the protecting action (Y) is instead set to the cumulative cost of all cells in the selected protection unit. In the simulations presented here, unless otherwise specified, the protection policy can only add one protected unit at each time step, if the budget allows, that is if Cost(Y, St) The protection policy is parametrized as a feed-forward neural network with a hidden layer using a rectified linear unit (ReLU) activation function (Eq. (3)) and an output layer using a softmax function (Eq. (5)). The input of the neural network is a matrix x of J features extracted through the most recent monitoring across U protection units. The output, of size U, is a vector of probabilities, which provides the basis to select a unit for protection. Given a number of nodes L, the hidden layer h(1) is a matrix UL: $${h}_{u{l}}^{(1)}=gleft(mathop{sum}limits_{j =1}^{J}{x}_{uj}{W}_{j{l}}^{(1)}right)$$ (2) where u {1, , U} identifies the protection unit, l {1, , L} indicates the hidden nodes and j {1, , J} the features and where is the ReLU activation function. We indicate with W(1) the matrix of J L coefficients (shared among all protection units) that we are optimizing. Additional hidden layers can be added to the model between the input and the output layer. The output layer takes h(1) as input and gives an output vector of U variables: $${h}_{u}^{(2)}=sigma left(mathop{sum}limits_{{l=1}}^{L}{h}_{u{l}}^{(1)}{W}_{{l}}^{(2)}right)$$ (4) where is a softmax function: $$sigma(x_i) = frac{exp(x_i)}{sum_u{exp(x_u)}}$$ (5) We interpret the output vector of U variables as the probability of protecting the unit u. This architecture implements parameter sharing across all protection units when connecting the input nodes to the hidden layer; this reduces the dimensionality of the problem at the cost of losing some spatial information, which we encode in the feature extraction function. The natural next step would be to use a convolutional layer to discover relevant shape and space features instead of using a feature extraction function. To define a baseline for comparisons in the experiments described below, we also define a random protection policy ({hat{pi }}), which sets a uniform probability to protect units that have not yet been protected. This policy does not include any trainable parameter and relies on feature x6 (an indicator variable for protected units; Supplementary Table 2) to randomly select the proposed unit for protection. The optimization algorithm implemented in CAPTAIN optimizes the parameters of a neural network such that they maximize the expected reward resulting from the protecting actions. With this aim, we implemented a combination of standard algorithms using a genetic strategies algorithm43 and incorporating aspects of classical policy gradient methods such as an advantage function44. Specifically, our algorithm is an implementation of the Parallelized Evolution Strategies43, in which two phases are repeated across several iterations (hereafter, epochs) until convergence. 
To define a baseline for comparisons in the experiments described below, we also define a random protection policy π̂, which sets a uniform probability of protecting units that have not yet been protected. This policy does not include any trainable parameter and relies on feature x6 (an indicator variable for protected units; Supplementary Table 2) to randomly select the proposed unit for protection.

The optimization algorithm implemented in CAPTAIN optimizes the parameters of the neural network such that they maximize the expected reward resulting from the protecting actions. With this aim, we implemented a combination of standard algorithms, using a genetic strategies algorithm43 and incorporating aspects of classical policy gradient methods such as an advantage function44. Specifically, our algorithm is an implementation of the Parallelized Evolution Strategies43, in which two phases are repeated across several iterations (hereafter, epochs) until convergence.

In the first phase, the policy parameters are randomly perturbed and then evaluated by running one full episode of the environment, that is, a full simulation with the system evolving for a predefined number of steps. In the second phase, the results from different runs are combined and the parameters are updated following a stochastic gradient estimate43. We performed several runs in parallel on different workers (for example, processing units) and aggregated the results before updating the parameters. To improve convergence, we followed the standard approach used in policy optimization algorithms44, where the parameter update is linked to an advantage function A rather than to the return alone (Eq. (6)). Our advantage function measures the improvement of the running reward (a weighted average of rewards across different epochs) with respect to the last reward. Thus, our algorithm optimizes a policy without the need to compute gradients and allows for easy parallelization. Each epoch in our algorithm works as follows:

    for every worker p do
        ε_p ← N(0, σ), with diagonal covariance and dimension W + M
        for t = 1, …, T do
            R_t ← R_{t−1} + r_t(θ + ε_p)
        end for
    end for
    R̄ ← average of R_T across workers
    R_e ← αR̄ + (1 − α)R_{e−1}
    for every coefficient θ in W + M do
        θ ← θ + λ A(R_e, R_T, ε)
    end for

where N is a normal distribution and W + M is the number of parameters in the model (following the notation in Supplementary Table 1). We indicate with r_t the reward at time t and with R_T the cumulative reward over T time steps. R_e is the running average reward, calculated as an exponential moving average in which α = 0.25 represents the degree of weighting decrease, and R_{e−1} is the running average reward at the previous epoch. λ = 0.1 is a learning rate and A is an advantage function, defined as the average of the final reward increments with respect to the running average reward R_e on every worker p, weighted by the corresponding noise ε_p:

$$A(R_e, R_T, \epsilon) = \frac{1}{P}\sum_p (R_e - R_T^p)\,\epsilon_p.$$ (6)
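A compact sketch of one such epoch is given below. The helper names are hypothetical, the perturbation scale σ is an illustrative default, and only α = 0.25 and λ = 0.1 are taken from the description above, so this should be read as an illustration of the update rule rather than the exact CAPTAIN implementation:

    import numpy as np

    def es_epoch(theta, run_episode, running_reward,
                 sigma=0.1, alpha=0.25, lam=0.1, n_workers=8,
                 rng=np.random.default_rng()):
        # theta          : flat vector of the W + M policy parameters
        # run_episode    : function mapping a parameter vector to the episode reward R_T
        # running_reward : exponential moving average of rewards from previous epochs (R_e)

        # Phase 1: perturb the parameters and run one full episode per worker.
        eps = rng.normal(0.0, sigma, size=(n_workers, theta.size))
        rewards = np.array([run_episode(theta + e) for e in eps])

        # Phase 2: update the running reward and apply the advantage-weighted update (Eq. (6)).
        running_reward = alpha * rewards.mean() + (1.0 - alpha) * running_reward
        advantage = ((running_reward - rewards)[:, None] * eps).mean(axis=0)
        theta = theta + lam * advantage
        return theta, running_reward

In practice, the per-worker episodes would be evaluated in parallel on separate processing units, as described above; the serial loop here keeps the sketch self-contained.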
We used our CAPTAIN framework to explore the properties of our model and the effect of different policies through simulations. Specifically, we ran three sets of experiments. The first set aimed at assessing the effectiveness of different policies optimized to minimize species loss based on different monitoring strategies. We ran a second set of simulations to determine how policies optimized to minimize value loss or maximize the amount of protected area may impact species loss. Finally, we compared the performance of the CAPTAIN models against the state-of-the-art method for conservation planning (Marxan25). A detailed description of the settings we used in our experiments is provided in the Supplementary Methods. Additionally, all scripts used to run the CAPTAIN and Marxan analyses are provided as Supplementary Information.

We analysed a recently published33 dataset of 1,517 tree species endemic to Madagascar, for which presence/absence data had been approximated through species distribution models across 22,394 units of 5 × 5 km spanning the entire country (Supplementary Fig. 5a). Their analyses included a spatial quantification of threats affecting the local conservation of species and assumed the cost of each protection unit to be proportional to its level of threat (Supplementary Fig. 5b), similarly to how our CAPTAIN framework models protection costs as proportional to anthropogenic disturbance.

We re-analysed these data within a limited budget, allowing a maximum of 10% of the units with the lowest cost to be protected (that is, 2,239 units); the actual number can be lower if the optimized solution includes units with higher cost. We did not include temporal dynamics in our analysis, instead choosing to simply monitor the system once to generate the features used by CAPTAIN and Marxan to place the protected units. Because the dataset did not include abundance data, the features only included species presence/absence information in each unit and the cost of the unit.

Because the presence of a species in the input data represents a theoretical expectation based on species distribution modelling, it does not account for the fact that strong anthropogenic pressure on a unit (for example, clearing a forest) might result in the local disappearance of some of the species. We therefore considered the potential effect of disturbance in the monitoring step. Specifically, in the absence of more detailed data about the actual presence or absence of species, we initialized the sensitivity of each species to anthropogenic disturbance as a random draw from a uniform distribution, $d_s \sim \mathcal{U}(0, 1)$, and we modelled the presence of a species s in a unit c as a random draw from a binomial distribution with parameter $p_s^c = 1 - d_s \times D^c$, where $D^c \in [0, 1]$ is the disturbance (or threat sensu Carrasco et al.33) in the unit. Under this approach, most of the species expected to live in a unit are considered to be present if the unit is undisturbed. Conversely, many (especially sensitive) species are assumed to be absent from units with high anthropogenic disturbance. This resampled diversity was used for feature extraction in the monitoring steps (Fig. 1c). While this approach is an approximation of how species might respond to anthropogenic pressure, additional empirical data on species-specific sensitivity to disturbance could provide a more realistic input for the CAPTAIN analysis.

We repeated this random resampling 50 times and analysed the resulting biodiversity data in CAPTAIN using the one-time protection model, trained through simulations in the experiments described in the previous section and in the Supplementary Methods. We note that it is possible, and perhaps desirable, to train a new model specifically for this empirical dataset, or at least to fine-tune a model pretrained through simulations (a technique known as transfer learning), for instance using historical time series and future projections of land use and climate change. Yet, our experiment shows that even a model trained solely on simulated datasets can be successfully applied to empirical data.
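The disturbance-based resampling described above can be expressed as a short Python sketch; the array names are hypothetical and the function assumes a binary species-by-unit matrix derived from the species distribution models:

    import numpy as np

    def resample_presence(sdm_presence, disturbance, rng=np.random.default_rng()):
        # sdm_presence : S x C binary matrix of expected presences (species x units)
        # disturbance  : length-C vector of unit-level disturbance D^c in [0, 1]
        # Returns one stochastic realization of the presence/absence matrix.
        n_species = sdm_presence.shape[0]
        d = rng.uniform(0.0, 1.0, size=n_species)      # d_s ~ U(0, 1): per-species sensitivity
        p = 1.0 - d[:, None] * disturbance[None, :]    # p_s^c = 1 - d_s * D^c
        observed = rng.binomial(1, p)                  # Bernoulli draw per species and unit
        return sdm_presence * observed                 # a species can only occur where the SDM expects it

Calling this function once per replicate corresponds to one of the 50 resampled datasets analysed here.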
Following Carrasco et al.33, we set as the target of our policy the protection of at least 10% of each species' range. To achieve this in CAPTAIN, we modified the monitoring action such that a species is counted as protected only when at least 10% of its range falls within already protected units. We ran the CAPTAIN analysis for a single step, in which all protection units are established.

We analysed the same resampled datasets using Marxan, with the initial budget used in the CAPTAIN analyses, under two configurations. First, we used a boundary length modifier (BLM = 0.1) to penalize the establishment of non-adjacent protected units, following the settings used in Carrasco et al.33. After some testing, as suggested in Marxan's manual45, we set penalties on exceeding the budget such that the cost of the optimized results indeed does not exceed the total budget (THRESHPEN1 = 500, THRESHPEN2 = 10). For each resampled dataset we ran 100 optimizations (with Marxan settings NUMITNS = 1,000,000, STARTTEMP = 1 and NUMTEMP = 10,000 (ref. 45)) and used the best of them as the final result. Second, because the BLM adds a constraint that does not have a direct equivalent in the CAPTAIN model, we also repeated the analyses without it (BLM = 0) for comparison.

To assess the performance of CAPTAIN and compare it with that of Marxan, we computed the fraction of replicates in which the target was met for all species, the average number of species for which the target was missed and the number of protected units (Supplementary Table 4). We also calculated the fraction of each species' range included in protected units to compare it with the target of 10% (Fig. 6c,d and Supplementary Fig. 6c,d). Finally, we calculated the frequency at which each unit was selected for protection across the 50 resampled datasets as a measure of its relative importance (priority) in the conservation plan.

Further information on research design is available in the Nature Research Reporting Summary linked to this article.

See the original post: Improving biodiversity protection through artificial intelligence - Nature.com
Posted in Artificial Intelligence
Artificial intelligence will add $10tn to global economy in next decade – IBM CEO – Gulf Business
Posted: at 3:12 am
Artificial intelligence (AI) will add up to $10tn to the global economy in the next decade, Arvind Krishna, chairman and CEO of IBM, has said.
Greater adoption of AI in the UAE could also add up to $200bn in productivity gains by 2030, Krishna told Omar bin Sultan Al Olama, Minister of State for Artificial Intelligence, Digital Economy, and Teleworking Applications, at the World Government Summit 2022, official news agency WAM reported.
The leader of the US tech giant tipped AI to transform the world economy, after warning that the world lacks the skilled people needed to keep up with the pandemic-induced disruption to workplaces everywhere.
"I fundamentally believe that AI offers over $10tn of productivity to the world. If you think about GDP increase, this could be anywhere between 10, 20, or 30 per cent. But we have to do this carefully, we have to harness the skills and deploy it in the right manner," Krishna said, speaking during a one-on-one panel, "The Next Big Merger: Governments and Technology".
Omar bin Sultan said that the UAE's talent pool will be boosted by India's decision to set up the first Indian Institute of Technology (IIT) in Abu Dhabi.
Last year, the Mohamed bin Zayed University of Artificial Intelligence (MBZUAI), a graduate-level research university focusing on artificial intelligence (AI), launched an executive programme designed to assist the UAE's government and business elite in unlocking the potential of AI to ensure smart management, increased efficiencies, and enhanced productivity.
Read: Abu Dhabi's AI university launches executive programme for UAE govt and business leaders
Located in Masdar City, MBZUAI offers Master of Science (MSc) and PhD level programmes in key areas of AI such as machine learning, computer vision, and natural language processing.
Read: Video: Abu Dhabi launches world's first AI university
See original here:
Artificial intelligence will add $10tn to global economy in next decade - IBM CEO - Gulf Business
Posted in Artificial Intelligence