AI Analysed Over 11,000 Couples’ Relationships. This Is What It Found – ScienceAlert

A first-of-its-kind artificial intelligence (AI) study of romantic relationships, based on data from thousands of couples, has identified the top predictors that make partners feel positively about their relationship, and the findings show romantic happiness is about a lot more than simply who you're with.

Researchers conducted a machine-learning analysis of data collected from over 11,000 couples, and found that relationship-specific characteristics (personal evaluations of the relationship itself) were significantly more powerful predictors of relationship quality overall than variables based on individual characteristics.

In other words, the type of relationship you build with a partner may be more important to your happiness than either of your individual characteristics - in the study, they looked at traits like how satisfied a person was with life, how anxious they were, or whether their parents' marriage worked out.

"Relationship-specific variables were about two to three times as predictive as individual differences, which I think would fit many people's intuitions," says lead researcher and psychologist Samantha Joel from Western University in Canada.

"But the surprising part is that once you have all the relationship-specific data in hand, the individual differences fade into the background."

Relationship science has existed for decades and prompted huge amounts of psychological theory about what makes (and doesn't make) for happy couples.

Yet the researchers say a key challenge for their still-maturing field is bringing cumulative data together on a larger scale, to bolster the findings made in smaller, standalone studies (which can be expensive and time-consuming to conduct, in terms of recruiting and interviewing participants).

One analytical solution could be AI, which has the ability to sift through huge amounts of data collected from individual laboratories. In a new study, Joel's team employed that very approach, using a machine learning system called Random Forests, which can test the predictive power of a large number of variables fed to it.

The technique, having analysed mostly self-reported measures collected from 11,196 couples across 43 separate datasets, determined what kind of reported variables seemed to matter the most in terms of predicting relationship quality.
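
To make the approach concrete, here is a minimal sketch of a Random Forests analysis of this kind, using scikit-learn. The file name, column names and validation scheme are hypothetical stand-ins for illustration, not the study's actual variables or pipeline.

```python
# A minimal sketch of the kind of Random Forests analysis described above.
# The dataset, column names and cross-validation setup are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

df = pd.read_csv("couples_survey.csv")  # hypothetical pooled dataset

relationship_vars = ["perceived_commitment", "appreciation", "sexual_satisfaction",
                     "perceived_partner_satisfaction", "conflict"]
individual_vars = ["life_satisfaction", "negative_affect", "depression",
                   "avoidant_attachment", "anxious_attachment"]
target = "relationship_quality"

model = RandomForestRegressor(n_estimators=500, random_state=0)

# Variance explained (R^2) by each block of predictors, estimated by cross-validation.
for name, cols in [("relationship-specific", relationship_vars),
                   ("individual differences", individual_vars)]:
    r2 = cross_val_score(model, df[cols], df[target], cv=5, scoring="r2").mean()
    print(f"{name}: ~{r2:.0%} of variance explained")

# Per-variable importances from a forest fit on all predictors together.
model.fit(df[relationship_vars + individual_vars], df[target])
for col, imp in sorted(zip(relationship_vars + individual_vars, model.feature_importances_),
                       key=lambda t: -t[1]):
    print(col, round(imp, 3))
```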

"Results revealed that variables capturing one's own perceptions of the relationship (e.g. conflict, affection) predicted up to 45 percent of the variance in relationship quality at the beginning of each study," the authors write in their paper, noting that the predictive effect did diminish over the course of the studies.

Among the relationship-specific variables, those that most reliably predicted relationship quality were: perceived partner commitment (e.g. "My partner wants our relationship to last forever"); appreciation (e.g. "I feel very lucky to have my partner in my life"); sexual satisfaction (e.g. "How satisfied are you with the quality of your sex life?"); perceived partner satisfaction (e.g. "Our relationship makes my partner very happy"); and conflict (e.g. "How often do you have fights with your partner?").

By contrast, predictors related to individual characteristics - observations respondents reported about themselves, ranging from their personality traits through to their age and gender - at most explained only 21 percent of variance in relationship quality.

The individual characteristics that most strongly predicted the quality of a relationship were: 'satisfaction with life'; 'negative affect' (e.g. feeling distressed or irritable); 'depression'; 'avoidant attachment' (e.g. "I prefer not to be too close to romantic partners"); and 'anxious attachment' (e.g. "I worry a lot about my relationships with others").

"Experiencing negative affect, depression, or insecure attachment are surely relationship risk factors," the researchers write in their paper.

"But if people nevertheless manage to establish a relationship characterised by appreciation, sexual satisfaction, and a lack of conflict - and they perceive their partner to be committed and responsive - those individual risk factors may matter little."

While there is a huge range of potential statistical insights on offer here (and, it must be said, numerous interpretive limitations imposed by these kinds of analytical methods), at heart the findings boil down to a pretty simple truth, Joel says.

"Really, it suggests that the person we choose is not nearly as important as the relationship we build," Joel explained to Inverse.

"The dynamic that you build with someone - the shared norms, the in-jokes, the shared experiences - is so much more than the separate individuals who make up that relationship."

The findings are reported in PNAS.

Go here to see the original:

AI Analysed Over 11,000 Couples' Relationships. This Is What It Found - ScienceAlert

World’s Most Powerful Particle Collider Taps AI to Expose Hack Attacks – Scientific American

Thousands of scientists worldwide tap into CERN's computer networks each day in their quest to better understand the fundamental structure of the universe. Unfortunately, they are not the only ones who want a piece of this vast pool of computing power, which serves the world's largest particle physics laboratory. The hundreds of thousands of computers in CERN's grid are also a prime target for hackers who want to hijack those resources to make money or attack other computer systems. But rather than engaging in a perpetual game of hide-and-seek with these cyber intruders via conventional security systems, CERN scientists are turning to artificial intelligence to help them outsmart their online opponents.

Current detection systems typically spot attacks on networks by scanning incoming data for known viruses and other types of malicious code. But these systems are relatively useless against new and unfamiliar threats. Given how quickly malware changes these days, CERN is developing new systems that use machine learning to recognize and report abnormal network traffic to an administrator. For example, a system might learn to flag traffic that requires an uncharacteristically large amount of bandwidth, uses the incorrect procedure when it tries to enter the network (much like using the wrong secret knock on a door) or seeks network access via an unauthorized port (essentially trying to get in through a door that is off-limits).
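
As a rough illustration of that idea, the sketch below trains an unsupervised anomaly detector on flow-level features like the ones just described. It is a generic example of anomaly detection, not CERN's actual system; the file and feature names are hypothetical.

```python
# A minimal sketch of flagging abnormal network traffic with an unsupervised model.
# "network_flows.csv" and the feature columns are hypothetical stand-ins.
import pandas as pd
from sklearn.ensemble import IsolationForest

flows = pd.read_csv("network_flows.csv")  # hypothetical flow log
features = flows[["bytes_per_second", "handshake_ok", "dst_port_authorized"]]

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(features)  # learn what "normal" traffic looks like

# Score the traffic: -1 marks flows the model considers abnormal, which a
# production system would report to an administrator for review.
flows["flag"] = detector.predict(features)
print(flows[flows["flag"] == -1].head())
```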

CERN's cybersecurity department is training its AI software to learn the difference between normal and dubious behavior on the network, and to then alert staff via phone text, e-mail or computer message of any potential threat. The system could even be automated to shut down suspicious activity on its own, says Andres Gomez, lead author of a paper describing the new cybersecurity framework.

CERN (the French acronym for the European Organization for Nuclear Research lab, which sits on the Franco-Swiss border) is opting for this new approach to protect a computer grid used by more than 8,000 physicists to quickly access and analyze large volumes of data produced by the Large Hadron Collider (LHC). The LHC's main job is to collide atomic particles at high speed so that scientists can study how particles interact. Particle detectors and other scientific instruments within the LHC gather information about these collisions, and CERN makes it available to laboratories and universities worldwide for use in their own research projects.

The LHC is expected to generate a total of about 50 petabytes of data (equal to 15 million high-definition movies) in 2017 alone, and demands more computing power and data storage than CERN itself can provide. In anticipation of that type of growth, the laboratory in 2002 created its Worldwide LHC Computing Grid, which connects computers from more than 170 research facilities across more than 40 countries. CERN's computer network functions somewhat like an electrical grid, which relies on a network of generating stations that create and deliver electricity as needed to a particular community of homes and businesses. In CERN's case the community consists of research labs that require varying amounts of computing resources, based on the type of work they are doing at any given time.

One of the biggest challenges to defending a computer grid is the fact that security cannot interfere with the sharing of processing power and data storage. Scientists from labs in different parts of the world might end up accessing the same computers to do their research if demand on the grid is high or if their projects are similar. CERN also has to worry about whether the computers of the scientists connecting into the grid are free of viruses and other malicious software that could enter and spread quickly due to all the sharing. A virus might, for example, allow hackers to take over parts of the grid and use those computers either to generate digital currency known as bitcoins or to launch cyber attacks against other computers. "In normal situations, antivirus programs try to keep intrusions out of a single machine," Gomez says. "In the grid we have to protect hundreds of thousands of machines that already allow researchers outside CERN to use a variety of software programs they need for their different experiments. The magnitude of the data you can collect and the very distributed environment make intrusion detection on [a] grid far more complex," he says.

Jarno Niemelä, a senior security researcher at F-Secure, a company that designs antivirus and computer security systems, says CERN's use of machine learning to train its network defenses will give the lab much-needed flexibility in protecting its grid, especially when searching for new threats. Still, artificially intelligent intrusion detection is not without risks, and one of the biggest is whether Gomez and his team can develop machine-learning algorithms that can tell the difference between normal and harmful activity on the network without raising a lot of false alarms, Niemelä says.

CERN's AI cybersecurity upgrades are still in the early stages and will be rolled out over time. The first test will be protecting the portion of the grid used by ALICE (A Large Ion Collider Experiment), a key LHC project to study the collisions of lead nuclei. If tests on ALICE are successful, CERN's machine learning-based security could then be used to defend parts of the grid used by the institution's six other detector experiments.

Read more:

World's Most Powerful Particle Collider Taps AI to Expose Hack Attacks - Scientific American

Detect COVID-19 Symptoms Using Wearable Device And AI – Hackaday

A new study from West Virginia University (WVU) Rockefeller Neuroscience Institute (RNI) uses a wearable device and artificial intelligence (AI) to predict COVID-19 up to 3 days before symptoms occur. The study has been an impressive undertaking involving over 1000 health care workers and frontline workers in hospitals across New York, Philadelphia, Nashville, and other critical COVID-19 hotspots.

The implementation of the digital health platform uses a custom smartphone application coupled with an Oura smart ring to monitor biometric signals such as respiration and temperature. The platform also assesses psychological, cognitive, and behavioral data through surveys administered through a smartphone application.

We know that wearables tend to suffer from a lack of accuracy, particularly during activity. However, the Oura ring appears to take measurements while the user is very still, especially during sleep. This presents an advantage as the accuracy of wearable devices greatly improves when the user isn't moving. RNI noted that the Oura ring has been the most accurate device they have tested.

Given some of the early warning signals for COVID-19 are fever and respiratory distress, it would make sense that a device able to measure respiration and temperature could be used as an early detector of COVID-19. In fact, we've seen a few wearable device companies attempt much of what RNI is doing, as well as a few DIY attempts. RNI's study has probably been the most thorough work released so far, but we're sure that many more are upcoming.

The initial phase of the study was deployed among healthcare and frontline workers but is now open to the general public. Meanwhile, the National Basketball Association (NBA) is coordinating its re-opening efforts using Oura's technology.

We hope to see more results emerge from RNI's very important work. Until then, stay safe, Hackaday.

See the original post here:

Detect COVID-19 Symptoms Using Wearable Device And AI - Hackaday

How AI will earn your trust – JAXenter

In the world of applying AI to IT Operations, one of the major enterprise concerns is a lack of trust in the technology. This tends to be an emotional rather than intellectual response. When I evaluate the sources of distrust in relation to IT Ops, I can narrow it down to four specific causes.

The algorithms used in AIOps are fairly complex, even if you are addressing an audience which has a background in computer science. The way in which these algorithms are constructed and deployed is not covered in academia. Modern AI is mathematically intensive and many IT practitioners haven't even seen this kind of mathematics before. The algorithms are outside the knowledge base of today's professional developers and IT operators.

When you analyse the specific types of mathematics used in popular AI-based algorithms, deployed in an IT operations context, the maths is basically intractable. What is going on inside the algorithms cannot be teased out or reverse engineered. The mathematics generates patterns whose sources cannot be determined due to the very nature of the algorithm itself.

For example, an algorithm might tell you a number of CPUs have passed a usage threshold of 90%, which will result in end-user response time degrading. Consequently, the implicit instruction is to offload the usage of some servers. When you have this situation, executive decision makers will want to know why the algorithm indicates there is an issue. If you were using an expert system, it could go back and show you all the invoked rules until you got back to the original premise. It's almost like doing a logical inference in reverse. The fact that you can trace it backwards lends credibility and validates the conclusion.
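
The toy sketch below illustrates that traceability with invented rule and fact names, not any real product: every derived conclusion records the rule that produced it, so the recommendation can be walked backwards to the original premise.

```python
# A toy illustration of expert-system traceability; the rules and fact names
# are made up for this example.
rules = {
    "high_cpu_load": ["cpu_usage_over_90"],
    "response_time_degrading": ["high_cpu_load"],
    "recommend_offload": ["response_time_degrading"],
}

facts = {"cpu_usage_over_90"}   # the original observation (premise)
derivation = {}                 # derived fact -> premises used to reach it

changed = True
while changed:                  # simple forward chaining
    changed = False
    for conclusion, premises in rules.items():
        if conclusion not in facts and all(p in facts for p in premises):
            facts.add(conclusion)
            derivation[conclusion] = premises
            changed = True

# "Logical inference in reverse": trace the recommendation back to its premise.
step = "recommend_offload"
while step in derivation:
    print(f"{step}  <-  because of {derivation[step]}")
    step = derivation[step][0]
```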

What happens in the case of AI is that things get mixed up and switched around, which means links are broken from the conclusion back to the original premise. Even if you have enormous computer power it doesn't help, as the algorithm loses track of its previous steps. You're left with a general description of the algorithm, the start and end data, but no way to link all these things together. You can't run it in reverse. It's intractable. This generates further distrust, which lives on a deeper level. It's not just about not being familiar with the mathematical logic.

Let's look at the way AI has been marketed since its inception in the late 1950s. The general marketing theme has been that AI is trying to create a human mind; when this is translated into a professional context, people view it as a threat to their jobs. This notion has been resented for a long time. Scepticism is rife, but it is often a tactic used to preserve livelihoods.

The way AI has been marketed, as an intellectual goal and a meaningful business endeavour, lends credibility to that concern. This is when scepticism starts to shade into genuine distrust. Not only is this a technology that may not work, it is also my personal enemy.

IT Operations, in terms of all the various enterprise disciplines, is always being threatened with cost cutting and role reduction. Therefore, this isn't just paranoia; there's a lot of justification behind the fear.

IT Operations has had a number of bouts with commercialized AI, which first emerged in the final days of the Cold War, when a lot of code was repackaged and sold to IT Ops as a plausible use case. Many of the people who are now in senior enterprise positions were among the first wave of people who were excited about AI and what it could achieve. Unfortunately, AI didn't initially deliver on expectations. So for these people, AI is not something new; it's a false promise. Therefore, in many IT Operations circles there is a bad memory of previous hype, a historical reason for scepticism which is unique to the IT Ops world.

These are my four reasons why enterprises don't trust AIOps and AI in general. Despite these concerns, the use of AI-based algorithms in an IT Operations context is inevitable.

Take your mind back to a very influential Gartner definition of big data in 2001. Gartner came up with the idea of the 3Vs. The 3Vs (volume, variety and velocity) are three defining properties or dimensions of big data. Volume refers to the amount of data, variety refers to the number of types of data and velocity refers to the speed of data processing. At the time the definition was very valuable and made a lot of sense.

The one thing Gartner missed is the issue of dimensionality i.e. how many attributes a dataset has. Traditional data has maybe four or five attributes. If you have millions of these datasets, with a few attributes, you can store them in a database and it is fairly straightforward to search on key values and conduct analytics to obtain answers from the data.

However, when you're dealing with high dimensions and a data item that has a thousand or a million attributes, suddenly your traditional statistical techniques don't work. Your traditional search methods become ungainly. It becomes impossible to formulate a query.

As our systems become more volatile and dynamic, we are unintentionally multiplying data items and attributes which leads me onto AI. Almost all of the AI techniques developed to date are attempts to handle high dimensional data structures and collapse them into a smaller number of manageable attributes.
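
As a simple illustration of that collapsing step, the sketch below applies principal component analysis, the most basic dimensionality-reduction technique, to a synthetic telemetry matrix; real AIOps systems would use far more elaborate methods.

```python
# A minimal sketch of reducing high-dimensional data to a handful of manageable
# attributes. The telemetry matrix is synthetic; PCA stands in for whatever
# reduction technique a real AIOps pipeline would use.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
telemetry = rng.normal(size=(1_000, 5_000))    # 1,000 observations, 5,000 attributes each

pca = PCA(n_components=20)                     # collapse 5,000 attributes to 20
reduced = pca.fit_transform(telemetry)

print(reduced.shape)                           # (1000, 20)
print(pca.explained_variance_ratio_.sum())     # how much of the variance survives
```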

When you go to the leading universities, you're seeing fewer courses on Machine Learning, but more geared towards embedding Machine Learning topics into courses on high dimensional probability and statistics. What's happening is that Machine Learning per se is starting to resemble practically oriented bootcamps, while the study of AI is now more focussed on understanding probability, geometry and statistics in relation to high dimensions.

How did we end up here? The brain uses algorithms to process high dimensional data and reduce it to low dimensional attributes; it then processes these and ends up with a conclusion. This is the path AI has taken. Codify what the brain is doing and you end up realizing that what you're actually doing is high dimensional probability and statistics.

I can see discussions about AI being repositioned around high dimensional data, which will provide a much clearer vision of what we are trying to achieve. In terms of IT operations, there will soon be an acknowledgement that modern IT systems contain not only high volume, high velocity and high variety data, but now also high dimensional datasets. In order to cope with this we're going to need high dimensional probability and statistics, and to model it in high dimensional geometry. This is why AIOps is inevitable.

Go here to read the rest:

How AI will earn your trust - JAXenter

Syte.ai, a visual search startup just for fashion, closes $8M Series A … – TechCrunch

Syte.ai, a visual search startup focused on fashion products, announced that it has raised an $8 million Series A. The lead investor is NHN Ventures (the investment arm of Korean internet services giant NHN), with participation from Naver Corp., messaging app Line Corp., Magma VC, Remagine VC, KDC Ventures, and NBM Ventures.

Co-founder and CMO Lihi Pinto Fryman was working as an investment banker in London when a red dress in Vogue caught her eye. She tried to find a similar one online, but couldn't.

"I said to my husband, how can it be that in 2014 I see a dress that I really like and can't just tap it and get it?" she told TechCrunch. The two started looking at ways to build a better visual search engine and in 2015 teamed up with CTO Dr. Helge Voss and COO Idan Pinto to launch Syte.ai (Lihi's husband, Ofer Fryman, a former account executive at Hewlett-Packard, is CEO).

Syte.ai's Series A will be used for marketing and growth in the U.S., where it hopes to sign up large fashion publishers and retailers. Of course, with Syte.ai's new roster of investors, it's fair to assume that it will also look for deals in Asia. Fryman declined to talk about potential partnerships, but said Syte.ai's new Asian backers, including NHN and Line, have been searching for a while to find the most accurate deep-learning technology that can make images shoppable.

In a prepared statement, Woo Kim, managing director and partner of NHN Investment, said that many next-generation search solutions have delivered disappointing results so far.

"It was largely because such below-average image search results were driven by essentially the same deep learning approach, and the only differentiation was how many sample images you have to train your database," he added. "Syte.ai's unique approach of redefining how machines understand images is simply ground-breaking innovation. We believe that Syte.ai will disrupt the way industry adopts image search technology."

Syte.ai's founders have spent the last three years developing its deep-learning algorithms, which Fryman describes as building a bridge between physics and fashion. Fryman says Syte.ai is different from its competitors because its deep learning-based search engine focuses only on fashion products, even though there are other verticals, such as home decor, where visual search is also in demand. Its main business is search tools for online publishers and retailers, but it also has several consumer products, including a Chrome extension called Fashion Lover, and Glamix, a chatbot.

The startup is just one of several that are tackling visual search as online businesses try to reduce their dependency on banner ads and find ways to monetize that are better suited to mobile screens. Other companies in the same space include Slyce, Clarifai, and Visenze, which is itself funded by another one of Asias leading Internet firms, Rakuten.

"The reason we chose fashion is because it's so hard to recognize," says Fryman. "Think about your top. It looks completely different if you are wearing it, or if it's on a hanger, or in a catalog, or if you are walking on a red carpet or sitting down. It's hard to teach the machine it's the same item."

After a site integrates Syte.ai's search technology, users can hover their cursor over an item in a photo and automatically get results for similar products that are on sale. For publishers, including fashion blogs and magazines, Syte.ai displays items from a range of stores and price points to increase the chances that the user will click on at least one result. The company monetizes by sharing revenue with publishers and charging e-commerce stores a subscription fee.
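
Under the hood, hover-to-search systems of this general kind typically embed catalog images and query crops into the same vector space and return nearest neighbours. The sketch below shows that generic pattern with an off-the-shelf encoder; it is an illustration under assumptions, not Syte.ai's proprietary pipeline, and the file names are hypothetical.

```python
# A minimal sketch of visual similarity search with a generic pretrained CNN.
# Not Syte.ai's system; catalog_embeddings.npy and hovered_crop.jpg are hypothetical.
import numpy as np
import torch
from torchvision import models, transforms
from PIL import Image

encoder = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
encoder.fc = torch.nn.Identity()   # keep the 2048-d feature vector
encoder.eval()

prep = transforms.Compose([
    transforms.Resize(256), transforms.CenterCrop(224), transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

def embed(path: str) -> np.ndarray:
    """Encode an image file as a unit-length feature vector."""
    with torch.no_grad():
        vec = encoder(prep(Image.open(path).convert("RGB")).unsqueeze(0)).squeeze(0).numpy()
    return vec / np.linalg.norm(vec)

catalog = np.load("catalog_embeddings.npy")   # (n_items, 2048), built offline over the inventory
query = embed("hovered_crop.jpg")             # the region the shopper pointed at
scores = catalog @ query                      # cosine similarity (rows are unit vectors)
print(np.argsort(-scores)[:5])                # indices of the five most similar items
```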

One of the main attractions for online stores is that Syte.ai's search engine can help customers find alternative items if what they want is out of stock. According to a study by IHL Group, out-of-stocks cost retailers about $634.1 billion a year. It also helps sites turn over indexed inventory, or items overlooked by customers because they aren't on the front page (at larger sellers, this can potentially be hundreds of items). Visual search is especially crucial for mobile shopping, where customers want to see as many results displayed as quickly as possible on their small screens. Once Syte.ai perfects visual search for fashion, Fryman says, it will move on to other verticals.

Continue reading here:

Syte.ai, a visual search startup just for fashion, closes $8M Series A ... - TechCrunch

AI Is Coming for Your Most Mind-Numbing Office Tasks – WIRED

In 2018, the New York Foundling, a charity that offers child welfare, adoption, and mental health services, was stuck in cut-and-paste hell.

Clinicians and admin staff were spending hours transferring text between different documents and databases to meet varied legal requirements. Arik Hill, the charity's chief information officer, blames the data entry drudgery for an annual staff turnover of 42 percent at the time. "We are not a very glamorous industry," says Hill. "We are really only just moving on from paper clinical records."

Since then, the New York Foundling has automated much of this grunt work using what are known as software robots: simple programs hand-crafted to perform dull tasks. Often, the programs are built by recording and mimicking a user's keystrokes, such as copying a field of text from one database and pasting it into another, eliminating hours of repetitive-stress-inducing work.

"It was mind-blowing," says Hill, who says turnover has fallen to 17 percent.

To automate the work, the New York Foundling got help from UiPath, a so-called robotic process automation company. That project didn't require any real machine intelligence.

But in January, UiPath began upgrading its army of software bots to use powerful new artificial intelligence algorithms. It thinks this will let them take on more complex and challenging tasks, such as transcription or sorting images, across more offices. Ultimately, the company hopes software robots will gradually learn how to automate repetitive work for themselves.

In other words, if artificial intelligence is going to disrupt white-collar work, then this may be how it begins.

"When paired with robotic process automation, AI significantly expands the number and types of tasks that software robots can perform," says Tom Davenport, a professor who studies information technology and management at Babson College.

Consider a company that needs to summarize long-winded, handwritten notes. AI algorithms that perform character recognition and natural language processing could read the cursive and summarize the text, before a software robot inputs the text into, say, a website. The latest version of UiPath's software includes a range of off-the-shelf machine learning tools. It is also now possible for users to add their own machine learning models to a robotic process.
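
A pipeline of that shape can be sketched with generic open-source components; the example below uses pytesseract for character recognition and a Hugging Face summarization model, which are stand-ins for illustration rather than the tools UiPath actually bundles, and the input file is hypothetical.

```python
# A minimal sketch of the read-then-summarize step that precedes the software
# robot's data entry. Off-the-shelf components stand in for UiPath's own tools.
import pytesseract
from PIL import Image
from transformers import pipeline

# 1. Character recognition: extract text from the scanned note.
text = pytesseract.image_to_string(Image.open("handwritten_note.png"))

# 2. Natural language processing: condense the extracted text.
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")
summary = summarizer(text, max_length=60, min_length=20)[0]["summary_text"]

# 3. Hand the summary to the RPA bot, which would type it into the target system.
print(summary)
```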

With all the AI hype, it's notable that so little has found its way into modern offices. But the automation that is there, which simply repeats a person's clicking and typing, is still useful. The technology is mostly used by banks, telcos, insurers, and other companies with legacy systems; market researcher Gartner estimates the industry generated roughly $1.3 billion in revenue in 2019.

Simple software automation is eliminating some particularly repetitive jobs, such as basic data entry, which are often already done overseas. In call centers, fewer people are needed to fill out forms if software can be programmed to open the right documents, find the right fields, and enter text. At the New York Foundling, Hill's software allowed him to redirect eight workers to other tasks.

But Davenport says software robots that use AI could displace more jobs, especially if we head into a recession. "Companies will use it for substantial headcount and cost reductions," he says.

Erik Brynjolfsson, director of the MIT Initiative on the Digital Economy and the author of several books exploring the impact of technology on the workforce, says robotic process automation will mostly affect middle-skilled office workers, meaning admin work that requires some training.

But it won't happen overnight. He says it took many years for simple software robots, which are essentially descended from screen-scrapers and simple coding tools, to affect office work. "The lesson is just how long it takes for even a relatively simple technology to have an impact on business, because of the hard work it takes to implement it reliably in complex environments," Brynjolfsson notes.

The rest is here:

AI Is Coming for Your Most Mind-Numbing Office Tasks - WIRED

Google expands AI calling service Duplex to Australia, Canada, and the UK – The Verge

Google's automated, artificial intelligence-powered calling service Duplex is now available in more countries, according to a support page updated today. In addition to the US and New Zealand, Duplex is now available in Australia, Canada, and the UK, reports VentureBeat, which discovered newly added phone numbers on the support page that Google says it will use when calling via Duplex in each country.

It isn't a full rollout of the service, however, as Google clarified to The Verge it's using Duplex mainly to reach businesses in those new countries to update business hours for Google Maps and Search.

And indeed, CEO Sundar Pichai did in fact outline this use of Duplex last month, writing in a blog post, "In the coming days, we'll make it possible for businesses to easily mark themselves as temporarily closed using Google My Business. We're also using our artificial intelligence (AI) technology Duplex where possible to contact businesses to confirm their updated business hours, so we can reflect them accurately when people are looking on Search and Maps." It's not clear if a consumer version of the service will be made available at a later date in those countries.

Duplex launched as an early beta in the US via the Google Assistant back in late 2018 after a splashy yet controversial debut at that year's Google I/O developer conference. There were concerns about the use of Duplex without a restaurant or other small business's express consent and without proper disclosure that the automated call was being handled by a digital voice assistant and not a human being.

Google has since tried to address those concerns, with limited success, by adding disclosures at the beginning of calls and giving businesses the option to opt out of being recorded and speak with a human. Duplex now has human listeners who annotate the phone calls to improve Duplex's underlying machine learning algorithms and to take over in the event the call either goes awry or the person on the other end chooses not to talk with the AI.

Google has also expanded the service in waves, from starting on just Pixel phones to iOS devices and then more Android devices. The service's first international expansion was New Zealand in October 2019.

Update April 9th, 2:15PM ET: Clarified that the Duplex rollout is to help Google update business hours for Google Maps and Search.

Visit link:

Google expands AI calling service Duplex to Australia, Canada, and the UK - The Verge

Facebook Buys AI Startup Ozlo for Messenger – Investopedia


According to media reports, past demos on the company's website show how an AI digital assistant developed by the company can tell a user if a restaurant is group-friendly by gathering and analyzing all the reviews of the establishment.

Continued here:

Facebook Buys AI Startup Ozlo for Messenger - Investopedia

Spreading human rights around the world, one AI at a time? – Reuters

LONDON (Thomson Reuters Foundation) - British billionaire Richard Branson, founder of Virgin Atlantic Airways and a campaigner for LGBT+ rights, cannot be in all places at all times.

Technology could soon change that.

Uncannily familiar in appearance and voice, avatars of Branson and two human rights activists displayed on tablet devices were unveiled at a youth summit in London on Friday.

The digital doppelgangers use pre-recorded phrases to interactively engage in conversations with people about social causes like climate change through a mobile application designed by technology company AI Foundation.

In theory, that means an imprisoned human rights activist could continue to engage with others through their avatar.

"You will never be able to silence anyone again," said AI Foundation's CEO Lars Buttler at the One Young World conference.

The animated renderings of Branson, a Colombian kidnapping survivor and a North Korean refugee addressed a packed auditorium in central London and asked each other questions about democracy and forgiveness.

Laura Ulloa, the 28-year-old Colombian activist whom one of the avatars is modeled on, told the conference the technology allows her to be in "all the places I can't be" and to have one-on-one conversations that could change people's minds through empathy.

Other kinds of artificial intelligence including robots, holograms and AI dolls have yet to be mass produced but are depicted in popular culture, including in the 2019 novel by British author Ian McEwan, Machines Like Me.

Rights groups are examining how to use artificial intelligence to monitor abuses like the death penalty but others have raised concerns about the dangers posed by avatars.

"Such avatars come with clear risks," said Edin Omanovic, of UK-based surveillance monitoring group Privacy International.

"You have to hand over a huge amount of sensitive information to make the program work and trust the company to keep this data secure from hackers and not to monetize it by sharing your data with third parties."

David Jones, co-founder of the One Young World summit, said: "Bad people have always done bad things with new technology but we're trying to use this technology to drive positive change while protecting it as much as we can."

Go here to read the rest:

Spreading human rights around the world, one AI at a time? - Reuters

Google uses AI to help diagnose breast cancer – CNNMoney

Google announced Friday that it has achieved state-of-the-art results in using artificial intelligence to identify breast cancer. The findings are a reminder of the rapid advances in artificial intelligence, and its potential to improve global health.

Google used a flavor of artificial intelligence called deep learning to analyze thousands of slides of cancer cells provided by a Dutch university. Deep learning is where computers are taught to recognize patterns in huge data sets. It's very useful for visual tasks, such as looking at a breast cancer biopsy.
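
Broadly, this style of analysis labels small patches cut from digitised slides with a fine-tuned convolutional network. The sketch below shows that general pattern, not Google's model; the directory layout, labels and training settings are hypothetical.

```python
# A minimal sketch of patch-level tumor classification by transfer learning.
# "slide_patches/" with tumor/ and normal/ subfolders is a hypothetical dataset;
# this illustrates the general approach, not Google's system.
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

prep = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
patches = datasets.ImageFolder("slide_patches/", transform=prep)
loader = DataLoader(patches, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)   # two classes: tumor vs. normal

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:                   # a single pass, for illustration
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```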

With 230,000 new cases of breast cancer every year in the United States, Google (GOOGL, Tech30) hopes its technology will help pathologists better treat patients. The technology isn't designed to, or capable of, replacing human doctors.

"What we've trained is just a little sliver of software that helps with one part of a very complex series of tasks," said Lily Peng, the project manager behind Google's work. "There will hopefully be more and more of these tools that help doctors [who] have to go through an enormous amount of information all the time."

Peng described to CNNTech how the human and the computer could work together to create better outcomes. Google's artificial intelligence system excels at being very sensitive to potential cancer. It will flag things a human will miss. But it sometimes will falsely identify something as cancer, whereas a human pathologist is better at saying, "no, this isn't cancer."

"Imagine combining these two types of super powers," Peng said. "The algorithm helps you localize and find these tumors. And the doctor is really good at saying, 'This is not cancer.'"

For now, Google's progress is still in research mode and remains in the lab. Google isn't going to become your pathologist's assistant tomorrow. But Google and many other players are striving toward a future where that becomes a reality.

Jeroen van der Laak, who leads the pathology department at Radboud University Medical Center, believes the first algorithms for cancer will be available within a couple years, and large-scale routine use will occur in about five years. His university provided the slides for Google's research.

The technology will be especially useful in parts of the world where there's a shortage of physicians. For patients who don't have access to a pathologist, an algorithm -- even if imperfect -- would be a meaningful improvement. Van der Laak highlighted India and China as two underserved areas.

CNNMoney (Washington) First published March 3, 2017: 9:03 AM ET

The rest is here:

Google uses AI to help diagnose breast cancer - CNNMoney

Artificial intelligence and humankind – The Irish Times

A chara, Paul Connolly is correct to challenge the assumption that if something produces better outcomes then it does not matter whether it's human or machine (Letters, November 5th).

The current lack of significant ethical frameworks, underpinned by law, around the competing usages of artificial intelligence (AI) by so many global commercial firms has been noted in several reports.

Machine intelligence is certainly assisting us to accelerate our learning in ways that we have never experienced before. However, this is surely our greatest challenge: to assimilate our new learning despite our significant biological limitations. After all, we don't have a great track record in these areas, particularly when you examine the misuse of science and its impact on the ecology of our planet. Increasingly, the convergence of quantum science with machine learning will produce AI technology certainly beyond our understanding and possibly beyond our control.

Thousands of well-funded commercial and government agencies are already furiously in competition to make this happen.

After all, this is how we humans have progressed for centuries; we experiment to find how things work and we then apply the new science into our lives, even before we fully understand the consequences of our discoveries.

But if our leaders and politicians are not familiar with the language or alert to the ethical challenges and dangers arising from the application of artificial intelligence and its technologies, then surely we are heading for a potential disaster? Is mise,

Dr VINCENT KENNY

Knocklyon,

Dublin 16.

Read this article:

Artificial intelligence and humankind - The Irish Times

AI going to head of the class – Pensions & Investments


Long the tools of the largest, most sophisticated and well-heeled systematic quantitative hedge fund managers, AI/ML processes are making their way down to smaller quantitative and fundamental firms at a rapid rate, sources said.

Read more:

AI going to head of the class - Pensions & Investments

Artificial Intelligence Regulation Updates: China, EU, and U.S – The National Law Review

Wednesday, August 3, 2022

Artificial Intelligence (AI) systems are poised to drastically alter the way businesses and governments operate on a global scale, with significant changes already under way. This technology has manifested itself in multiple forms including natural language processing, machine learning, and autonomous systems, but with the proper inputs can be leveraged to make predictions, recommendations, and even decisions.

Accordingly, enterprises are increasingly embracing this dynamic technology. A 2022 global study by IBM found that 77% of companies are either currently using AI or exploring AI for future use, creating value by increasing productivity through automation, improved decision-making, and enhanced customer experience. Further, according to a 2021 PwC study, the COVID-19 pandemic increased the pace of AI adoption for 52% of companies as they sought to mitigate the crisis's impact on workforce planning, supply chain resilience, and demand projection.

For these many businesses investing significant resources into AI, it is critical to understand the current and proposed legal frameworks regulating this novel technology. Specifically for businesses operating globally, the task of ensuring that their AI technology complies with applicable regulations will be complicated by the differing standards that are emerging from China, the European Union (EU), and the U.S.

China has taken the lead in moving AI regulations past the proposal stage. In March 2022, China passed a regulation governing companies' use of algorithms in online recommendation systems, requiring that such services are moral, ethical, accountable, transparent, and disseminate positive energy. The regulation mandates companies notify users when an AI algorithm is playing a role in determining which information to display to them and give users the option to opt out of being targeted. Additionally, the regulation prohibits algorithms that use personal data to offer different prices to consumers. We expect these themes to manifest themselves in AI regulations throughout the world as they develop.

Meanwhile in the EU, the European Commission has published an overarching regulatory framework proposal titled the Artificial Intelligence Act, which would have a much broader scope than China's enacted regulation. The proposal focuses on the risks created by AI, with applications sorted into categories of minimal risk, limited risk, high risk, or unacceptable risk. Depending on an application's designated risk level, there will be corresponding government action or obligations. So far, the proposed obligations focus on enhancing the security, transparency, and accountability of AI applications through human oversight and ongoing monitoring. Specifically, companies will be required to register stand-alone high-risk AI systems, such as remote biometric identification systems, in an EU database. If the proposed regulation is passed, the earliest date for compliance would be the second half of 2024, with potential fines for noncompliance ranging from 2-6% of a company's annual revenue.

Additionally, the previously enacted EU General Data Protection Regulation (GDPR) already carries implications for AI technology. Article 22 prohibits decisions based on solely automated processes that produce legal consequences or similar effects for individuals unless the program gains the user's explicit consent or meets other requirements.

In the United States there has been a fragmented approach to AI regulation thus far, with states enacting their own patchwork AI laws. Many of the enacted regulations focus on establishing various commissions to determine how state agencies can utilize AI technology and to study AI's potential impacts on the workforce and consumers. Common pending state initiatives go a step further and would regulate AI systems' accountability and transparency when they process and make decisions based on consumer data.

On a national level, the U.S. Congress enacted the National AI Initiative Act in January 2021, creating the National AI Initiative that provides "an overarching framework to strengthen and coordinate AI research, development, demonstration, and education activities across all U.S. Departments and Agencies . . . ." The Act created new offices and task forces aimed at implementing a national AI strategy, implicating a multitude of U.S. administrative agencies including the Federal Trade Commission (FTC), Department of Defense, Department of Agriculture, Department of Education, and the Department of Health and Human Services.

Pending national legislation includes the Algorithmic Accountability Act of 2022, which was introduced in both houses of Congress in February 2022. In response to reports that AI systems can lead to biased and discriminatory outcomes, the proposed Act would direct the FTC to create regulations that mandate covered entities, including businesses meeting certain criteria, to perform impact assessments when using automated decision-making processes. This would specifically include those derived from AI or machine learning.

While the FTC has not promulgated AI-specific regulations, this technology is on the agency's radar. In April 2021 the FTC issued a memo which apprised companies that using AI that produces discriminatory outcomes equates to a violation of Section 5 of the FTC Act, which prohibits unfair or deceptive practices. And the FTC may soon take this warning a step farther: in June 2022 the agency indicated that it will submit an Advanced Notice of Preliminary Rulemaking to ensure that algorithmic decision-making does not result in harmful discrimination, with the public comment period ending in August 2022. The FTC also recently issued a report to Congress discussing how AI may be used to combat online harms, ranging from scams, deep fakes, and opioid sales, but advised against over-reliance on these tools, citing the technology's susceptibility to producing inaccurate, biased, and discriminatory outcomes.

Companies should carefully discern whether other non-AI specific regulations could subject them to potential liability for their use of AI technology. For example, the U.S. Equal Employment Opportunity Commission (EEOC) put forth guidance in May 2022 warning companies that their use of algorithmic decision-making tools to assess job applicants and employees could violate the Americans with Disabilities Act by, in part, intentionally or unintentionally screening out individuals with disabilities. Further analysis of the EEOC's guidance can be found here.

Many other U.S. agencies and offices are beginning to delve into the fray of AI. In November 2021, the White House Office of Science and Technology Policy solicited engagement from stakeholders across industries in an effort to develop a Bill of Rights for an Automated Society. Such a Bill of Rights could cover topics like AI's role in the criminal justice system, equal opportunities, consumer rights, and the healthcare system. Additionally, the National Institute of Standards and Technology (NIST), which falls under the U.S. Department of Commerce, is engaging with stakeholders to develop a voluntary risk management framework for trustworthy AI systems. The output of this project may be analogous to the EU's proposed regulatory framework, but in a voluntary format.

The overall theme of enacted and pending AI regulations globally is maintaining the accountability, transparency, and fairness of AI. For companies leveraging AI technology, ensuring that their systems remain compliant with the various regulations intended to achieve these goals could be difficult and costly. Two aspects of AI's decision-making process make oversight particularly demanding:

Opaqueness, where users can control data inputs and view outputs, but are often unable to explain how and with which data points the system made a decision.

Frequent adaptation, where processes evolve over time as the system learns.

Therefore, it is important for regulators to avoid overburdening businesses, to ensure that stakeholders may still leverage AI technology's great benefits in a cost-effective manner. The U.S. has the opportunity to observe the outcomes of the current regulatory action from China and the EU to determine whether their approaches strike a favorable balance. However, the U.S. should potentially accelerate its promulgation of similar laws so that it can play a role in setting the global tone for AI regulatory standards.

Thank you to co-author Lara Coole, a summer associate in Foley & Lardner's Jacksonville office, for her contributions to this post.

Go here to see the original:

Artificial Intelligence Regulation Updates: China, EU, and U.S - The National Law Review

Artificial intelligence isn’t that intelligent | The Strategist – The Strategist

Late last month, Australia's leading scientists, researchers and businesspeople came together for the inaugural Australian Defence Science, Technology and Research Summit (ADSTAR), hosted by the Defence Department's Science and Technology Group. In a demonstration of Australia's commitment to partnerships that would make our non-allied adversaries flinch, Chief Defence Scientist Tanya Monro was joined by representatives from each of the Five Eyes partners, as well as Japan, Singapore and South Korea. Two streams focusing on artificial intelligence were dedicated to research and applications in the defence context.

"At the end of the day, isn't hacking an AI a bit like social engineering?"

A friend who works in cybersecurity asked me this. In the world of information security, social engineering is the game of manipulating people into divulging information that can be used in a cyberattack or scam. Cyber experts may therefore be excused for assuming that AI might display some human-like level of intelligence that makes it difficult to hack.

Unfortunately, it's not. It's actually very easy.

The man who coined the term artificial intelligence in the 1950s, cybernetics researcher John McCarthy, also said that once we know how it works, it isn't called AI anymore. This explains why AI means different things to different people. It also explains why trust in and assurance of AI is so challenging.

AI is not some all-powerful capability that, despite how much it can mimic humans, also thinks like humans. Most implementations, specifically machine-learning models, are just very complicated applications of the statistical methods we're familiar with from high school. That doesn't make them smart, merely complex and opaque. This leads to problems in AI safety and security.

Bias in AI has long been known to cause problems. For example, AI-driven recruitment systems in tech companies have been shown to filter out applications from women, and re-offence prediction systems in US prisons exhibit consistent biases against black inmates. Fortunately, bias and fairness concerns in AI are now well known and actively investigated by researchers, practitioners and policymakers.

AI security is different, however. While AI safety deals with the impact of the decisions an AI might make, AI security looks at the inherent characteristics of a model and whether it could be exploited. AI systems are vulnerable to attackers and adversaries just as cyber systems are.

A known challenge is adversarial machine learning, where adversarial perturbations added to an image cause a model to predictably misclassify it.

When researchers added adversarial noise imperceptible to humans to an image of a panda, the model predicted it was a gibbon.

In another study, a 3D-printed turtle had adversarial perturbations embedded in its surface so that an object-detection model believed it to be a rifle. This was true even when the object was rotated.
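
The standard recipe behind perturbations like these is the fast gradient sign method (FGSM): nudge every pixel a tiny amount in the direction that increases the classifier's loss. The sketch below shows that recipe against a generic pretrained ImageNet model; it illustrates the technique rather than reproducing the panda or turtle studies, and the input tensor is a hypothetical preprocessed image.

```python
# A minimal sketch of the fast gradient sign method (FGSM). The model is a
# generic pretrained classifier; `image` is assumed to be a (1, 3, 224, 224)
# tensor with values in [0, 1]. Not the exact setup of the studies cited above.
import torch
from torchvision import models

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()

def fgsm(image: torch.Tensor, true_label: int, epsilon: float = 0.01) -> torch.Tensor:
    """Return the image shifted by epsilon * sign(gradient of the loss)."""
    image = image.clone().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(image), torch.tensor([true_label]))
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()

# The perturbed copy looks identical to a human viewer, yet model(adversarial)
# can land on a completely different class than model(image).
```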

I can't help but notice disturbing similarities between the rapid adoption of and misplaced trust in the internet in the latter half of the last century and the unfettered adoption of AI now.

It was a sobering moment when, in 2018, the then US director of national intelligence, Daniel Coats, called out cyber as the greatest strategic threat to the US.

Many nations are publishing AI strategies (including Australia, the US and the UK) that address these concerns, and there's still time to apply the lessons learned from cyber to AI. These include investment in AI safety and security at the same pace as investment in AI adoption is made; commercial solutions for AI security, assurance and audit; legislation for AI safety and security requirements, as is done for cyber; and greater understanding of AI and its limitations, as well as the technologies, like machine learning, that underpin it.

Cybersecurity incidents have also driven home the necessity for the public and private sectors to work together not just to define standards, but to reach them together. This is essential both domestically and internationally.

Autonomous drone swarms, undetectable insect-sized robots and targeted surveillance based on facial recognition are all technologies that exist. While Australia and our allies adhere to ethical standards for AI use, our adversaries may not.

Speaking on resilience at ADSTAR, Chief Scientist Cathy Foley discussed how pre-empting and planning for setbacks is far more strategic than simply ensuring you can get back up after one. That couldn't be more true when it comes to AI, especially given Defence's unique risk profile and the current geostrategic environment.

I read recently that Ukraine is using AI-enabled drones to target and strike Russians. Notwithstanding the ethical issues this poses, the article I read was written in Polish and translated to English for me by Google's language translation AI. Artificial intelligence is already pervasive in our lives. Now we need to be able to trust it.

The rest is here:

Artificial intelligence isn't that intelligent | The Strategist - The Strategist

Can artificial intelligence really help us talk to the animals? – The Guardian

A dolphin handler makes the signal for "together" with her hands, followed by "create". The two trained dolphins disappear underwater, exchange sounds and then emerge, flip on to their backs and lift their tails. They have devised a new trick of their own and performed it in tandem, just as requested. "It doesn't prove that there's language," says Aza Raskin. "But it certainly makes a lot of sense that, if they had access to a rich, symbolic way of communicating, that would make this task much easier."

Raskin is the co-founder and president of Earth Species Project (ESP), a California non-profit group with a bold ambition: to decode non-human communication using a form of artificial intelligence (AI) called machine learning, and make all the knowhow publicly available, thereby deepening our connection with other living species and helping to protect them. A 1970 album of whale song galvanised the movement that led to commercial whaling being banned. What could a Google Translate for the animal kingdom spawn?

The organisation, founded in 2017 with the help of major donors such as LinkedIn co-founder Reid Hoffman, published its first scientific paper last December. The goal is to unlock communication within our lifetimes. "The end we are working towards is, can we decode animal communication, discover non-human language," says Raskin. "Along the way and equally important is that we are developing technology that supports biologists and conservation now."

Understanding animal vocalisations has long been the subject of human fascination and study. Various primates give alarm calls that differ according to predator; dolphins address one another with signature whistles; and some songbirds can take elements of their calls and rearrange them to communicate different messages. But most experts stop short of calling it a language, as no animal communication meets all the criteria.

Until recently, decoding has mostly relied on painstaking observation. But interest has burgeoned in applying machine learning to deal with the huge amounts of data that can now be collected by modern animal-borne sensors. "People are starting to use it," says Elodie Briefer, an associate professor at the University of Copenhagen who studies vocal communication in mammals and birds. "But we don't really understand yet how much we can do."

Briefer co-developed an algorithm that analyses pig grunts to tell whether the animal is experiencing a positive or negative emotion. Another, called DeepSqueak, judges whether rodents are in a stressed state based on their ultrasonic calls. A further initiative, Project CETI (which stands for the Cetacean Translation Initiative), plans to use machine learning to translate the communication of sperm whales.

Yet ESP says its approach is different, because it is not focused on decoding the communication of one species, but all of them. While Raskin acknowledges there will be a higher likelihood of rich, symbolic communication among social animals - for example primates, whales and dolphins - the goal is to develop tools that could be applied to the entire animal kingdom. "We're species agnostic," says Raskin. "The tools we develop can work across all of biology, from worms to whales."

The motivating intuition for ESP, says Raskin, is work that has shown that machine learning can be used to translate between different, sometimes distant human languages without the need for any prior knowledge.

This process starts with the development of an algorithm to represent words in a physical space. In this many-dimensional geometric representation, the distance and direction between points (words) describes how they meaningfully relate to each other (their semantic relationship). For example, "king" has a relationship to "man" with the same distance and direction that "woman" has to "queen". (The mapping is not done by knowing what the words mean but by looking, for example, at how often they occur near each other.)
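
To make that geometry concrete, here is a minimal sketch using made-up two-dimensional vectors. It only illustrates the offset arithmetic described above; real embeddings have hundreds of dimensions and are learned from co-occurrence statistics, so the numbers below are invented for illustration.

```python
import numpy as np

# Toy 2-D "embeddings", made up purely for illustration; real word vectors have
# hundreds of dimensions and are learned from co-occurrence statistics.
vectors = {
    "king":  np.array([0.9, 0.8]),
    "man":   np.array([0.5, 0.1]),
    "woman": np.array([0.4, 0.3]),
    "queen": np.array([0.8, 1.0]),
}

def nearest(query, vocab):
    # Return the word whose vector has the highest cosine similarity to the query.
    return max(vocab, key=lambda w: vocab[w] @ query /
               (np.linalg.norm(vocab[w]) * np.linalg.norm(query)))

# The offset arithmetic the article describes: king - man + woman lands near queen.
analogy = vectors["king"] - vectors["man"] + vectors["woman"]
print(nearest(analogy, vectors))  # -> "queen" with these toy numbers
```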

It was later noticed that these shapes are similar for different languages. And then, in 2017, two groups of researchers working independently found a technique that made it possible to achieve translation by aligning the shapes. To get from English to Urdu, align their shapes and find the point in Urdu closest to the word's point in English. "You can translate most words decently well," says Raskin.
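
A rough sketch of the "align the shapes" idea, using synthetic data rather than real embeddings: one point cloud is a rotated copy of the other, and an orthogonal Procrustes fit recovers the mapping. Note that the 2017 methods Raskin refers to learn this alignment without a seed dictionary; using every point as an anchor here is a simplification for illustration.

```python
import numpy as np
from scipy.linalg import orthogonal_procrustes

rng = np.random.default_rng(0)

# Synthetic stand-ins for two languages' embedding clouds: "language B" is just
# "language A" rotated, plus a little noise -- the shared geometry the article describes.
A = rng.normal(size=(200, 50))                         # 200 words, 50 dimensions
rotation = np.linalg.qr(rng.normal(size=(50, 50)))[0]  # a random orthogonal matrix
B = A @ rotation + rng.normal(scale=0.01, size=A.shape)

# Fit the rotation that best maps A onto B. (Here every word pair is used as an
# anchor; the 2017 methods learn the alignment without such a dictionary.)
R, _ = orthogonal_procrustes(A, B)

# "Translate" word 7 of language A: map it into B's space and take the nearest point.
mapped = A[7] @ R
print(np.argmin(np.linalg.norm(B - mapped, axis=1)))  # -> 7, the corresponding word
```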

ESP's aspiration is to create these kinds of representations of animal communication, working on both individual species and many species at once, and then explore questions such as whether there is overlap with the universal human shape. "We don't know how animals experience the world," says Raskin, "but there are emotions, for example grief and joy, it seems some share with us and may well communicate about with others in their species. I don't know which will be the more incredible: the parts where the shapes overlap and we can directly communicate or translate, or the parts where we can't."

He adds that animals don't only communicate vocally. Bees, for example, let others know of a flower's location via a waggle dance. There will be a need to translate across different modes of communication too.

The goal is "like going to the moon", acknowledges Raskin, but the idea also isn't to get there all at once. Rather, ESP's roadmap involves solving a series of smaller problems necessary for the bigger picture to be realised. This should see the development of general tools that can help researchers trying to apply AI to unlock the secrets of species under study.

For example, ESP recently published a paper (and shared its code) on the so-called cocktail party problem in animal communication, in which it is difficult to discern which individual in a group of the same animals is vocalising in a noisy social environment.

"To our knowledge, no one has done this end-to-end detangling [of animal sound] before," says Raskin. The AI-based model developed by ESP, which was tried on dolphin signature whistles, macaque coo calls and bat vocalisations, worked best when the calls came from individuals that the model had been trained on; but with larger datasets it was able to disentangle mixtures of calls from animals not in the training cohort.

Another project involves using AI to generate novel animal calls, with humpback whales as a test species. The novel calls, made by splitting vocalisations into micro-phonemes (distinct units of sound lasting a hundredth of a second) and using a language model to "speak" something whale-like, can then be played back to the animals to see how they respond. "If the AI can identify what makes a random change versus a semantically meaningful one, it brings us closer to meaningful communication," explains Raskin. "It is having the AI speak the language, even though we don't know what it means yet."

A further project aims to develop an algorithm that ascertains how many call types a species has at its command by applying self-supervised machine learning, which does not require any labelling of data by human experts to learn patterns. In an early test case, it will mine audio recordings made by a team led by Christian Rutz, a professor of biology at the University of St Andrews, to produce an inventory of the vocal repertoire of the Hawaiian crow, a species that, Rutz discovered, has the ability to make and use tools for foraging and is believed to have a significantly more complex set of vocalisations than other crow species.

Rutz is particularly excited about the project's conservation value. The Hawaiian crow is critically endangered and only exists in captivity, where it is being bred for reintroduction to the wild. It is hoped that, by taking recordings made at different times, it will be possible to track whether the species's call repertoire is being eroded in captivity (specific alarm calls may have been lost, for example), which could have consequences for its reintroduction; that loss might be addressed with intervention. "It could produce a step change in our ability to help these birds come back from the brink," says Rutz, adding that detecting and classifying the calls manually would be labour intensive and error prone.

Meanwhile, another project seeks to understand automatically the functional meanings of vocalisations. It is being pursued with the laboratory of Ari Friedlaender, a professor of ocean sciences at the University of California, Santa Cruz. The lab studies how wild marine mammals, which are difficult to observe directly, behave underwater and runs one of the world's largest tagging programmes. Small electronic biologging devices attached to the animals capture their location, type of motion and even what they see (the devices can incorporate video cameras). The lab also has data from strategically placed sound recorders in the ocean.

ESP aims to first apply self-supervised machine learning to the tag data to automatically gauge what an animal is doing (for example whether it is feeding, resting, travelling or socialising) and then add the audio data to see whether functional meaning can be given to calls tied to that behaviour. (Playback experiments could then be used to validate any findings, along with calls that have been decoded previously.) This technique will be applied to humpback whale data initially; the lab has tagged several animals in the same group so it is possible to see how signals are given and received. Friedlaender says he was "hitting the ceiling" in terms of what currently available tools could tease out of the data. "Our hope is that the work ESP can do will provide new insights," he says.

But not everyone is as gung ho about the power of AI to achieve such grand aims. Robert Seyfarth is a professor emeritus of psychology at the University of Pennsylvania who has studied social behaviour and vocal communication in primates in their natural habitat for more than 40 years. While he believes machine learning can be useful for some problems, such as identifying an animal's vocal repertoire, there are other areas, including the discovery of the meaning and function of vocalisations, where he is sceptical it will add much.

The problem, he explains, is that while many animals can have sophisticated, complex societies, they have a much smaller repertoire of sounds than humans. The result is that the exact same sound can be used to mean different things in different contexts, and it is only by studying the context (who the individual calling is, how they are related to others, where they fall in the hierarchy, who they have interacted with) that meaning can hope to be established. "I just think these AI methods are insufficient," says Seyfarth. "You've got to go out there and watch the animals."

There is also doubt about the concept that the shape of animal communication will overlap in a meaningful way with human communication. "Applying computer-based analyses to human language, with which we are so intimately familiar, is one thing," says Seyfarth. "But it can be quite different doing it to other species." "It is an exciting idea, but it is a big stretch," says Kevin Coffey, a neuroscientist at the University of Washington who co-created the DeepSqueak algorithm.

Raskin acknowledges that AI alone may not be enough to unlock communication with other species. But he refers to research that has shown many species communicate in ways "more complex than humans have ever imagined". The stumbling blocks have been our ability to gather sufficient data and analyse it at scale, and our own limited perception. "These are the tools that let us take off the human glasses and understand entire communication systems," he says.

Read the original post:

Can artificial intelligence really help us talk to the animals? - The Guardian

Global Artificial Intelligence in Healthcare Diagnosis Market Research Report 2022: Rising Adoption of Healthcare Artificial Intelligence in Research…

DUBLIN--(BUSINESS WIRE)--The "Artificial Intelligence in Healthcare Diagnosis Market Research Report by Technology, Component, Application, End User, Region - Global Forecast to 2026 - Cumulative Impact of COVID-19" report has been added to ResearchAndMarkets.com's offering.

The Global Artificial Intelligence in Healthcare Diagnosis Market size was estimated at USD 2,318.98 million in 2020, USD 2,725.72 million in 2021, and is projected to grow at a Compound Annual Growth Rate (CAGR) of 17.81% to reach USD 6,202.67 million by 2026.
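
For what it's worth, the quoted 2026 figure is roughly what you get by compounding the 2020 base at the stated CAGR. The quick check below assumes six years of compounding (2020 to 2026), which the release does not spell out.

```python
# Rough arithmetic check of the projection, assuming six years of compounding (2020 -> 2026).
base_2020 = 2318.98   # USD million
cagr = 0.1781
print(round(base_2020 * (1 + cagr) ** 6, 2))  # ~6200, close to the quoted 6,202.67
```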

Market Segmentation:

This research report categorizes the Artificial Intelligence in Healthcare Diagnosis market to forecast the revenues and analyze the trends in each of the following sub-markets:

Competitive Strategic Window:

The Competitive Strategic Window analyses the competitive landscape in terms of markets, applications, and geographies to help the vendor define an alignment or fit between their capabilities and opportunities for future growth prospects. It describes the optimal or favorable fit for the vendors to adopt successive merger and acquisition strategies, geography expansion, research & development, and new product introduction strategies to execute further business expansion and growth during a forecast period.

FPNV Positioning Matrix:

The FPNV Positioning Matrix evaluates and categorizes the vendors in the Artificial Intelligence in Healthcare Diagnosis Market based on Business Strategy (Business Growth, Industry Coverage, Financial Viability, and Channel Support) and Product Satisfaction (Value for Money, Ease of Use, Product Features, and Customer Support) that aids businesses in better decision making and understanding the competitive landscape.

Market Share Analysis:

The Market Share Analysis offers an analysis of vendors considering their contribution to the overall market. It gives an idea of each vendor's revenue generation within the overall market compared to other vendors in the space, and provides insights into how vendors are performing in terms of revenue generation and customer base compared to others. Knowing market share offers an idea of the size and competitiveness of the vendors for the base year. It reveals the market characteristics in terms of accumulation, fragmentation, dominance, and amalgamation traits.

Market Dynamics

Drivers

Restraints

Opportunities

Challenges

Key Topics Covered:

1. Preface

2. Research Methodology

3. Executive Summary

4. Market Overview

5. Market Insights

6. Artificial Intelligence in Healthcare Diagnosis Market, by Technology

7. Artificial Intelligence in Healthcare Diagnosis Market, by Component

8. Artificial Intelligence in Healthcare Diagnosis Market, by Application

9. Artificial Intelligence in Healthcare Diagnosis Market, by End User

10. Americas Artificial Intelligence in Healthcare Diagnosis Market

11. Asia-Pacific Artificial Intelligence in Healthcare Diagnosis Market

12. Europe, Middle East & Africa Artificial Intelligence in Healthcare Diagnosis Market

13. Competitive Landscape

14. Company Usability Profiles

15. Appendix

Companies Mentioned

For more information about this report visit https://www.researchandmarkets.com/r/vgkht7

More here:

Global Artificial Intelligence in Healthcare Diagnosis Market Research Report 2022: Rising Adoption of Healthcare Artificial Intelligence in Research...

Researchers use artificial intelligence to create a treasure map of undiscovered ant species – EurekAlert

Image: Map detailing ant diversity centers in Africa, Madagascar and Mediterranean regions. Credit: Kass et al., 2022, Science Advances

E. O. Wilson once referred to invertebrates as "the little things that run the world", without whom the human species "[wouldn't] last more than a few months". Although small, invertebrates have an outsized influence on their environments, pollinating plants, breaking down organic matter and speeding up nutrient cycling. And what they lack in stature, they make up for in diversity. With more than one million known species, insects alone vastly outnumber all other invertebrates and vertebrates combined.

Despite their importance and ubiquity, some of the most basic information about invertebrates, such as where they're most diverse and how many of them there are, still remains a mystery. This is especially problematic for conservation scientists trying to stave off global insect declines; you can't conserve something if you don't know where to look for it.

In a new study published this Wednesday in the journal Science Advances, researchers used ants as a proxy to help close major knowledge gaps and hopefully begin reversing these declines. Working for more than a decade, researchers from institutions around the world stitched together nearly one-and-a-half million location records from research publications, online databases, museums and scientific field work. They used those records to help produce the largest global map of insect diversity ever created, which they hope will be used to direct future conservation efforts.

"This is a massive undertaking for a group known to be a critical ecosystem engineer," said co-author Robert Guralnick, curator of biodiversity informatics at the Florida Museum of Natural History. "It represents an enormous effort not only among all the co-authors but the many naturalists who have contributed knowledge about distributions of ants across the globe."

Creating a map large enough to account for the entirety of ant biodiversity presented several logistical challenges. All of the more than 14,000 currently known ant species were included, and each one varied dramatically in the amount of data available.

The majority of the records used contained a description of the location where an ant was collected or spotted but did not always have the precise coordinates needed for mapping. Inferring the extent of an ant's range from incomplete records required some clever data wrangling.

Co-author Kenneth Dudley, a research technician with the Okinawa Institute of Science and Technology, built a computational workflow to estimate the coordinates from the available data, which also checked the data for errors. This allowed the researchers to make different range estimates for each species of ant depending on how much data was available. For species with less data, they constructed shapes surrounding the data points. For species with more data, the researchers predicted the distribution of each species using statistical models that they tuned to reduce as much noise as possible.

The researchers brought these estimates together to form a global map, divided into a grid of 20 km by 20 km squares, that showed an estimate of the number of ant species per square (called the species richness). They also created a map that showed the number of ant species with very small ranges per square (called the species rarity). In general, species with small ranges are particularly vulnerable to environmental changes.
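
As a toy illustration of the gridding step only (not the authors' actual pipeline, which builds per-species range estimates first), one can bin hypothetical occurrence records into 20 km cells and count the distinct species in each cell; everything below, including the data, is invented.

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(1)

# Hypothetical occurrence records (species_id, x_km, y_km) on a 200 km x 200 km region.
records = [(int(rng.integers(0, 50)), rng.uniform(0, 200), rng.uniform(0, 200))
           for _ in range(5_000)]

CELL_KM = 20  # matches the 20 km x 20 km squares used in the study

# Species richness per cell = number of distinct species recorded in that cell.
species_per_cell = defaultdict(set)
for species_id, x, y in records:
    cell = (int(x // CELL_KM), int(y // CELL_KM))
    species_per_cell[cell].add(species_id)

richness = {cell: len(spp) for cell, spp in species_per_cell.items()}
print(max(richness.values()), min(richness.values()))
```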

However, there was another problem to overcome: sampling bias.

"Some areas of the world that we expected to be centers of diversity were not showing up on our map, but ants in these regions were not well-studied," explained co-first author Jamie Kass, a postdoctoral fellow at the Okinawa Institute of Science and Technology. "Other areas were extremely well-sampled, for example parts of the USA and Europe, and this difference in sampling can impact our estimates of global diversity."

So, the researchers utilized machine learning to predict how their diversity estimates would change if they sampled all areas around the world equally, and in doing so, identified areas where they estimate many unknown, unsampled species exist.

"This gives us a kind of treasure map, which can guide us to where we should explore next and look for new species with restricted ranges," said senior author Evan Economo, a professor at the Okinawa Institute of Science and Technology.

When the researchers compared the rarity and richness of ant distributions to the comparatively well-studied amphibians, birds, mammals and reptiles, they found that ants were about as different from these vertebrate groups as the vertebrate groups were from each other.

This was unexpected given that ants are evolutionarily highly distant from vertebrates, and it suggests that priority areas for vertebrate diversity may also have a high diversity of invertebrate species. The authors caution, however, that ant biodiversity patterns have unique features. For example, the Mediterranean and East Asia show up as diversity centers for ants more than the vertebrates.

Finally, the researchers looked at how well-protected these areas of high ant diversity are. They found that it was a low percentage: only 15% of the top 10% of ant rarity centers had some sort of legal protection, such as a national park or reserve, which is less than the existing protection for vertebrates.

"Clearly, we have a lot of work to do to protect these critical areas," Economo concluded.

The global distribution of known and undiscovered ant biodiversity

3-Aug-2022

Excerpt from:

Researchers use artificial intelligence to create a treasure map of undiscovered ant species - EurekAlert

Elon Musk and Silicon Valley’s Overreliance on Artificial Intelligence – The Wire

When the richest man in the world is being sued by one of the most popular social media companies, it's news. But while most of the conversation about Elon Musk's attempt to cancel his $44 billion contract to buy Twitter is focusing on the legal, social, and business components, we need to keep an eye on how the discussion relates to one of the tech industry's most buzzy products: artificial intelligence.

The lawsuit shines a light on one of the most essential issues for the industry to tackle: What can and can't AI do, and what should and shouldn't AI do? The Twitter v Musk contretemps reveals a lot about the thinking about AI in tech and startup land and raises issues about how we understand the deployment of the technology in areas ranging from credit checks to policing.

At the core of Musk's claim for why he should be allowed out of his contract with Twitter is an allegation that the platform has done a poor job of identifying and removing spam accounts. Twitter has consistently claimed in quarterly filings that less than 5% of its active accounts are spam; Musk thinks it's much higher than that. From a legal standpoint, it probably doesn't really matter if Twitter's spam estimate is off by a few percent, and Twitter's been clear that its estimate is subjective and that others could come to different estimates with the same data. That's presumably why Musk's legal team lost in a hearing on July 19 when they asked for more time to perform detailed discovery on Twitter's spam-fighting efforts, suggesting that likely isn't the question on which the trial will turn.

Regardless of the legal merits, it's important to scrutinise the statistical and technical thinking from Musk and his allies. Musk's position is best summarised in his filing from July 15, which states: "In a May 6 meeting with Twitter executives, Musk was flabbergasted to learn just how meager Twitter's process was. Namely: Human reviewers randomly sampled 100 accounts per day (less than 0.00005% of daily users) and applied unidentified standards to somehow conclude every quarter for nearly three years that fewer than 5% of Twitter users were false or spam." The filing goes on to express the flabbergastedness of Musk by adding, "That's it. No automation, no AI, no machine learning."

Perhaps the most prominent endorsement of Musk's argument here came from venture capitalist David Sacks, who quoted it while declaring, "Twitter is toast." But there's an irony in Musk's complaint here: If Twitter were using machine learning for the audit as he seems to think they should, and only labeling spam that was similar to old spam, it would actually produce a lower, less-accurate estimate than it has now.

There are three components to Musk's assertion that deserve examination: his basic statistical claim about what a representative sample looks like, his claim that the spam-level auditing process should be automated or use AI or machine learning, and an implicit claim about what AI can actually do.

On the statistical question, this is something any professional anywhere near the machine learning space should be able to answer (so can many high school students). Twitter uses a daily sampling of accounts to scrutinise a total of 9,000 accounts per quarter (averaging about 100 per calendar day) to arrive at its under-5% spam estimate. Though that sample of 9,000 users per quarter is, as Musk notes, a very small portion of the 229 million active users the company reported in early 2022, a statistics professor (or student) would tell you that that's very much not the point. Statistical significance isn't determined by what percentage of the population is sampled but simply by the actual size of the sample in question. As Facebook whistleblower Sophie Zhang put it, you can make the comparison to soup: "It doesn't matter if you have a small or giant pot of soup, if it's evenly mixed you just need a spoonful to taste-test."

The whole point of statistical sampling is that you can learn most of what you need to know about the variety of a larger population by studying a much smaller but decently sized portion of it. Whether the person drawing the sample is a scientist studying bacteria, or a factory quality inspector checking canned vegetables, or a pollster asking about political preferences, the question isn't "what percentage of the overall whole am I checking?" but rather "how much should I expect my sample to look like the overall population for the characteristics I'm studying?" If you had to crack open a large percentage of your cans of tomatoes to check for their quality, you'd have a hard time making a profit, so you want to check the fewest possible to get within a reasonable range of confidence in your findings.

While this thinking does go against the grain of certain impulses (there's a reason why many people make this mistake), there is also a way to make this approach to sampling more intuitive. Think of the goal in setting sample size as getting a reasonable answer to the question: if I draw another sample of the same size, how different would I expect it to be? A classic approach to explaining this problem is to imagine you've bought a great mass of marbles that are supposed to come in a specific ratio: 95% purple marbles and 5% yellow marbles. You want to do a quality inspection to ensure the delivery is good, so you load them into one of those bingo game hoppers, turn the crank, and start counting the marbles you draw in each color. Let's say your first sample of 20 marbles has 19 purple and one yellow; should you be confident that you got the right mix from your vendor? You can probably intuitively understand that the next 20 random marbles you draw could end up being very different, with zero yellows or seven. But what if you draw 1,000 marbles, around the same as the typical political poll? What if you draw 9,000 marbles? The more marbles you draw, the more you'd expect the next drawing to look similar, because it's harder to hide random fluctuations in larger samples.
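
A quick simulation makes the marble intuition tangible: repeat the "draw n marbles, count the yellows" experiment many times at each sample size and watch the spread of the estimates shrink. This is an illustrative sketch, not anything drawn from Twitter's audit.

```python
import numpy as np

rng = np.random.default_rng(42)
TRUE_YELLOW_RATE = 0.05  # the ratio the vendor promised

for n in (20, 1_000, 9_000):
    # Repeat the "draw n marbles, count the yellows" experiment 10,000 times.
    yellows = rng.binomial(n, TRUE_YELLOW_RATE, size=10_000)
    estimates = yellows / n
    lo, hi = np.percentile(estimates, [2.5, 97.5])
    print(f"n={n:>5}: middle 95% of estimates spans {lo:.3f} to {hi:.3f}")
# The spread shrinks as the draw gets bigger -- regardless of how many
# marbles are in the hopper overall.
```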

There are online calculators that can let you run the numbers yourself. If you only draw 20 marbles and get one yellow, you can have 95% confidence that the yellows would be between 0.13% and 24.9% of the total, which is not very exact. If you draw 1,000 marbles and get 50 yellows, you can have 95% confidence that yellows would be between 3.7% and 6.5% of the total; closer, but perhaps not something you'd sign your name to in a quarterly filing. At 9,000 marbles with 450 yellow, you can have 95% confidence the yellows are between 4.56% and 5.47%; you're now accurate to within a range of less than half a percent, and at that point Twitter's lawyers presumably told them they'd done enough for their public disclosure.
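
Those quoted ranges are consistent with an exact (Clopper-Pearson) binomial confidence interval; assuming that is what the calculators compute, a few lines of Python reproduce them.

```python
from scipy.stats import beta

def clopper_pearson(successes, n, conf=0.95):
    """Exact (Clopper-Pearson) binomial confidence interval for a proportion."""
    alpha = 1 - conf
    lo = beta.ppf(alpha / 2, successes, n - successes + 1) if successes > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, successes + 1, n - successes) if successes < n else 1.0
    return lo, hi

for yellows, n in [(1, 20), (50, 1_000), (450, 9_000)]:
    lo, hi = clopper_pearson(yellows, n)
    print(f"{yellows}/{n}: {lo:.2%} to {hi:.2%}")
# Roughly 0.13%-24.9%, 3.7%-6.5% and 4.6%-5.5%, matching the figures above.
```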

This reality, that statistical sampling works to tell us about large populations based on much smaller samples, underpins every area where statistics is used, from checking the quality of the concrete used to make the building you're currently sitting in, to ensuring the reliable flow of internet traffic to the screen you're reading this on.

It's also what drives all current approaches to artificial intelligence today. Specialists in the field almost never use the term artificial intelligence to describe their work, preferring to use machine learning. But another common way to describe the entire field as it currently stands is "applied statistics". Machine learning today isn't really computers thinking in anything like what we assume humans do (to the degree we even understand how humans think, which isn't a great degree); it's mostly pattern-matching and -identification, based on statistical optimisation. If you feed a convolutional neural network thousands of images of dogs and cats and then ask the resulting model to determine if the next image is of a dog or a cat, it'll probably do a good job, but you can't ask it to explain what makes a cat different from a dog on any broader level; it's just recognising the patterns in pictures, using a layering of statistical formulas.

Stack up statistical formulas in specific ways, and you can build a machine learning algorithm that, fed enough pictures, will gradually build up a statistical representation of edges, shapes, and larger forms until it recognises a cat, based on the similarity to thousands of other images of cats it was fed. There's also a way in which statistical sampling plays a role: You don't need pictures of all the dogs and cats, just enough to get a representative sample, and then your algorithm can infer what it needs to about all the other pictures of dogs and cats in the world. And the same goes for every other machine learning effort, whether it's an attempt to predict someone's salary using everything else you know about them with a boosted random forests algorithm, or to break down a list of customers into distinct groups with a clustering algorithm such as k-means.
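
To ground the "applied statistics" framing, here is a hedged sketch using entirely synthetic, invented data: a gradient-boosted ensemble standing in for the salary-prediction example and k-means for the customer-grouping example. It shows the mechanics only, not any particular company's model.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Invented features: years of experience, weekly hours, city-size index.
X = rng.normal(size=(500, 3))
salary = 50_000 + 8_000 * X[:, 0] + 2_000 * X[:, 1] + rng.normal(scale=5_000, size=500)

# "Predict someone's salary using everything else you know about them":
# the model learns a statistical mapping from features to target, nothing more.
model = GradientBoostingRegressor().fit(X, salary)
print(model.predict(X[:3]))

# "Break down a list of customers into distinct groups": unsupervised pattern-finding.
customers = rng.normal(size=(300, 4))          # hypothetical behavioural features
groups = KMeans(n_clusters=3, n_init=10).fit_predict(customers)
print(np.bincount(groups))
```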

You don't absolutely have to understand statistics as well as a student who's recently taken a class in order to understand machine learning, but it helps. Which is why the statistical illiteracy paraded by Musk and his acolytes here is at least somewhat surprising.

But more important, in order to have any basis for overseeing the creation of a machine-learning product, or to have a rationale for investing in a machine-learning company, it's hard to see how one could be successful without a decent grounding in the rudiments of machine learning, and where and how it is best applied to solve a problem. And yet, team Musk here is suggesting they do lack that knowledge.

Once you understand that all machine learning today is essentially pattern-matching, it becomes clear why you wouldn't rely on it to conduct an audit such as the one Twitter performs to check for the proportion of spam accounts. "They're hand-validating so that they ensure it's high-quality data," explained security professional Leigh Honeywell, who's been a leader at firms like Slack and Heroku, in an interview. She added that any data you pull from your machine learning efforts will by necessity be not as validated as those efforts. If you only rely on patterns of spam you've already identified in the past and already engineered into your spam-detection tools in order to find out how much spam there is on your platform, you'll only recognise old spam patterns, and fail to uncover new ones.

Where Twitter should be using automation and machine learning to identify and remove spam is outside of this audit function, which the company seems to do. It wouldn't otherwise be possible to suspend half a million accounts every day and lock millions of accounts each week, as CEO Parag Agrawal claims. In conversations I've had with cybersecurity workers in the field, it's quite clear that large amounts of automation are used at Twitter (though machine learning specifically is actually relatively rare in the field because the results often aren't as good as other methods, marketing claims by allegedly AI-based security firms to the contrary).

At least in public claims related to this lawsuit, prominent Silicon Valley figures are suggesting they have a different understanding of what machine learning can do, and when it is and isn't useful. This disconnect between how many nontechnical leaders in that world talk about AI, and what it actually is, has significant implications for how we will ultimately come to understand and use the technology.

The general disconnect between the actual work of machine learning and how it's touted by many company and industry leaders is something data scientists often chalk up to marketing. It's very common to hear data scientists in conversation among themselves declare that "AI is just a marketing term". It's also quite common to have companies using no machine learning at all describe their work as AI to investors and customers, who rarely know the difference or even seem to care.

This is a basic reality in the world of tech. In my own experience talking with investors who make investments in AI technology, it's often quite clear that they know almost nothing about these basic aspects of how machine learning works. I've even spoken to CEOs of rather large companies that rely at their core on novel machine learning efforts to drive their product, who also clearly have no understanding of how the work actually gets done.

Not knowing or caring how machine learning works, what it can or can't do, and where its application can be problematic could lead society to significant peril. If we don't understand the way machine learning actually works (most often by identifying a pattern in some dataset and applying that pattern to new data), we can be led deep down a path in which machine learning wrongly claims, for example, to measure someone's face for trustworthiness (when this is entirely based on surveys in which people reveal their own prejudices), or that crime can be predicted (when many hyperlocal crime numbers are highly correlated with more police officers being present in a given area, who then make more arrests there), based almost entirely on a set of biased data or wrong-headed claims.

If we're going to properly manage the influence of machine learning on our society, on our systems and organisations and our government, we need to make sure these distinctions are clear. It starts with establishing a basic level of statistical literacy, and moves on to recognising that machine learning isn't magic, and that it isn't, in any traditional sense of the word, intelligent: that it works by pattern-matching to data, that the data has various biases, and that the overall project can produce many misleading and/or damaging outcomes.

It's an understanding one might have expected, or at least hoped, to find among some of those investing most of their life, effort, and money into machine-learning-related projects. If even people that deep aren't making those efforts to sort fact from fiction, it's a poor omen for the rest of us, and for the regulators and other officials who might be charged with keeping them in check.

This article was originally published on Future Tense, a partnership between Slate magazine, Arizona State University, and New America.

Read more here:

Elon Musk and Silicon Valley's Overreliance on Artificial Intelligence - The Wire

High Five: Artificial Intelligence-Generated Campaigns and Experiments | LBBOnline – Little Black Book – LBBonline

I can't stop playing with Midjourney. It may signal the end of human creativity or the start of an exciting new era, but here's me, like a monkey at a typewriter, chucking random words into the algorithm for an instant hit of this-shouldn't-be-as-good-as-it-is art.

For those who don't know, Midjourney is one of a number of image-generating AI algorithms that can turn written prompts into unworldly pictures. It, along with OpenAI's DALL-E 2, has been having something of a moment in the last month as people get their hands on them and try to push them to their limits. Craiyon - formerly DALL-E mini - is an older, less refined and very much wobblier platform to try too. It's worth having a go just to get a feel for what these algorithms can and can't do - though be warned, the dopamine hit of seeing some silly words turn into something strange, beautiful, terrifying or cool within seconds is quite addictive. A confused dragon playing chess. A happy apple. A rat transcends and perceives the oneness of the universe, pulsing with life. Yes Sir, I can boogie.

Within the LBB editorial team, we've been having lots of discussions about the implications of these art-generating algorithms. What are the legal and IP ramifications for those artists whose works are mined and drawn into the data set? (On my Midjourney server, Klimt and HR Giger seem to be the most popular artists to replicate - but what of more contemporary artists?) Will the industry use this to find unexpected new looks that go beyond the human creative habits and rules - or will we see content pulled directly from the algorithm? How long will it take for the algorithms to iron out the wonky weirdness that can sometimes take the human face way beyond the uncanny valley to a nightmarish, distorted abyss? What are the keys to writing prompts when you are after something very specific? Why does the algorithm seem to struggle when two different objects are requested in the same image?

Unlike other technologies that have shaken up the advertising industry, these image-generating algorithms are relatively accessible and easy to use (DALL-E 2's waitlist aside). The results are almost instant - and the possibilities, for now, seem limitless. We've already seen a couple of brands have a go with campaigns that are definitely playing on the novelty and PR angle of this new technology - and also a few really intriguing art projects too...

Agency: Rethink

The highest profile commercial campaign of the bunch is Rethink's new Heinz campaign. It's a follow-up to a previous campaign, in which humans were asked to draw a bottle of ketchup and ended up all drawing a bottle of Heinz. This time around, the team asked DALL-E 2 - and the algorithm, like its human predecessors, couldn't help but create images that looked like Heinz-branded bottles (albeit with a funky AI spin). In this case, the AI is used to reinforce and revisit the original idea - but how long will it take before we're using AIs to generate ideas for boards or pitch images?

Agency: 10 Days

Animation: Jeremy Higgins

This artsy animated short by art director and designer Jeremy Higgins is a delight and shows how a sequence of similar AI-generated images can serve as frames in a film. The flickering effect ironically gives the animation a very hand-made, stop-motion style, reminding me of films that use individual oil paintings as frames. It's a really vivid encapsulation of what it feels like to be sucked into a Midjourney rabbit hole too... I also have to tip my hat to Stefan Sagmeister, who shared this film on his Instagram account.

For the latest issue of Cosmopolitan, creative Karen X Cheng used DALL-E 2 to create a dramatic and imposing cover - using the prompt: 'a strong female president astronaut warrior walking on the planet Mars, digital art synthwave'. There's a deep dive into the creative process, which also examines some of the potential ramifications of the technology, on the Cosmopolitan website that's well worth a read.

Studio:T&DA

Here's a cheeky sixth entry to High Five. This execution is part of a wider summer platform for BT Sport, centred around belief - in this case, football pundit Robbie Savage is served up a DALL-E 2 image of striker Aleksandar Mitrović lifting the golden boot. Fulham has just been promoted to the Premier League - but though Robbie can see it, he can't quite believe it.

Read more from the original source:

High Five: Artificial Intelligence-Generated Campaigns and Experiments | LBBOnline - Little Black Book - LBBonline

Artificial Intelligence In Insurtech Market Is Expected to Boom- Cognizant, Next IT Corp, Kasisto – Digital Journal

New Jersey, N.J., Aug 04, 2022: The Artificial Intelligence In Insurtech Market research report provides all the information related to the industry. It gives an outlook of the market by giving authentic data to its clients, which helps them make essential decisions. It gives an overview of the market, including its definition, applications and developments, and manufacturing technology. This Artificial Intelligence In Insurtech market research report tracks all the recent developments and innovations in the market. It gives data on the obstacles encountered while establishing the business and offers guidance for overcoming the upcoming challenges and obstacles.

Artificial Intelligence (AI) can help insurers assess risk, detect fraud, and reduce human error in the claim process. As a result, insurers are better equipped to sell the most appropriate plans to their customers. Customers benefit from the improved claims handling and processing provided by Artificial Intelligence.

Increased investment by insurance companies in artificial intelligence and machine learning, as well as a growing preference for personalized insurance services, is conducive to the growth of the global artificial intelligence in insurance market. In addition, increasing cooperation between insurance companies and companies dealing in AI and machine learning solutions positively influences the development of AI in the insurance market.

Get the PDF Sample Copy (Including FULL TOC, Graphs, and Tables) of this report @:

https://www.a2zmarketresearch.com/sample-request/670659

Competitive landscape:

This Artificial Intelligence In Insurtech research report throws light on the major market players thriving in the market; it tracks their business strategies, financial status, and upcoming products.

Some of the top companies influencing this market include: Cognizant, Next IT Corp, Kasisto, Cape Analytics Inc., Microsoft, Google, Salesforce, Amazon Web Services, Lemonade, Lexalytics and H2O.ai.

Market Scenario:

Firstly, this Artificial Intelligence In Insurtech research report introduces the market by providing an overview that includes definitions, applications, product launches, developments, challenges, and regions. The market is forecast to reveal strong development, driven by consumption in various markets. An analysis of the current market designs and other basic characteristics is provided in the Artificial Intelligence In Insurtech report.

Regional Coverage:

The region-wise coverage of the market is mentioned in the report, mainly focusing on the regions:

Segmentation Analysis of the market

The market is segmented on the basis of type, product, end users, raw materials, etc. The segmentation helps to deliver a precise explanation of the market.

Market Segmentation: By Type

Service, Product,

Market Segmentation: By Application

Automotive, Healthcare, Information Technology, Others,

For Any Query or Customization: https://a2zmarketresearch.com/ask-for-customization/670659

An assessment of the market attractiveness with regard to the competition that new players and products are likely to present to older ones has been provided in the publication. The research report also mentions the innovations, new developments, marketing strategies, branding techniques, and products of the key participants present in the global Artificial Intelligence In Insurtech market. To present a clear vision of the market the competitive landscape has been thoroughly analyzed utilizing the value chain analysis. The opportunities and threats present in the future for the key market players have also been emphasized in the publication.

This report aims to provide:

Table of Contents

Global Artificial Intelligence In Insurtech Market Research Report 2022-2029

Chapter 1 Artificial Intelligence In Insurtech Market Overview

Chapter 2 Global Economic Impact on Industry

Chapter 3 Global Market Competition by Manufacturers

Chapter 4 Global Production, Revenue (Value) by Region

Chapter 5 Global Supply (Production), Consumption, Export, Import by Regions

Chapter 6 Global Production, Revenue (Value), Price Trend by Type

Chapter 7 Global Market Analysis by Application

Chapter 8 Manufacturing Cost Analysis

Chapter 9 Industrial Chain, Sourcing Strategy and Downstream Buyers

Chapter 10 Marketing Strategy Analysis, Distributors/Traders

Chapter 11 Market Effect Factors Analysis

Chapter 12 Global Artificial Intelligence In Insurtech Market Forecast

Buy Exclusive Report @: https://www.a2zmarketresearch.com/checkout

Contact Us:

Roger Smith

1887 WHITNEY MESA DR HENDERSON, NV 89014

[emailprotected]

+1 775 237 4157

Link:

Artificial Intelligence In Insurtech Market Is Expected to Boom- Cognizant, Next IT Corp, Kasisto - Digital Journal