
Category Archives: Ai

AI Weekly: How power and transformative tech reshape our world – VentureBeat

Posted: November 17, 2019 at 2:33 pm

This week, VentureBeat launched a quarterly magazine. Like the AI Weekly, the special issue gives our editorial team a chance to reflect on important transformative technology influencing business, technology, and society.

The first issue focuses on the relationship between power and AI. Power can shape AI, from how we define the ethical use of artificial intelligence and protect personal data, to how AI may change how we define inventions, to how AI may be used as both a tool and a weapon.

By design, the special issue drew upon topics that dwell in our lives and shape our collective future. The articles are created to tackle issues that linger in the news cycle.

While the special issue began to roll out Monday, the world heard from Jamie Heinemeier Hansson. When she and her husband, Ruby on Rails creator David Heinemeier Hansson, applied for an Apple Card, he was given a credit limit offer 20 times higher than hers, in what was widely seen as a demonstration of algorithmic bias. It took no more than two days for a series of David's tweets complaining about it to trigger Wall Street regulators to open an investigation. Apple cofounder Steve Wozniak also complained about the credit limit that Apple Card extended to his wife.

The fact that two powerful white men complaining in tweets led to swift government action did not go unnoticed by AI ethicists or people of color who routinely document, witness, or experience algorithmic bias, nor by Jamie Heinemeier Hansson herself.

"This is not merely a story about sexism and credit algorithm black boxes, but about how rich people nearly always get their way. Justice for another rich white woman is not justice at all," she said in a blog post.

The world also got its latest dose this week of Elon Musk saying outlandish things about AI that doesn't exist. In an interview on an AI podcast, the Tesla and SpaceX CEO said he believes AI equipped with his Neuralink brain hardware will be able to solve autism and schizophrenia, but schizophrenia is a mental disorder and autism is a developmental disability. People with the luxury to focus on things AI cannot do are missing or ignoring the growing number of ways AI is impacting human lives today. It's an expression of the relation between AI ethics and power.

Both of these stories reflect themes seen throughout the special issue's cover story about how power lies just under the surface of all AI ethics conversations.

The power in AI theme could also be seen this week in news reports that asserted automated bots attempted to sway elections held in the U.S. last week, and in Chris O'Brien's work that lays out the case that deepfakes are not only a threat to the future of democracy but could also fuel a virtual arms race.

Power in AI also came up this week when Portland became the latest major city to propose a ban on facial recognition use and when Fight for the Future activists made the ethically questionable choice to use Amazon's Rekognition on thousands of Washington D.C. residents to prove the point that Congress needs to take action on facial recognition regulation now.

Other topics in the special issue will continue to percolate in ongoing conversations, like the need to have a human in the loop to avoid an AI-driven catastrophe and the ethics game developers should consider when creating humanlike AI in virtual worlds.

We want each special issue to strike at the heart of conversations about issues transforming the world happening among business executives, tech workers, the AI ecosystem, and society at large. We're here to convene important conversations for you, so if you have an idea for the focus of a future special issue, fill out this form and let us know.

Watch out for the second special issue in early 2020.

For AI coverage, send news tips to Khari Johnson, Kyle Wiggers, or AI editor Seth Colaner and be sure to bookmark our AI Channel.

Thanks for reading,

Khari Johnson

Senior AI Staff Writer

Infographic: The Emergence of AI in Retail – Robotics Business Review

Posted: at 2:33 pm

November 15, 2019, by Demetrius Harrison

Artificial intelligence, through the use of software and robotics, is entering the customer service space across many retail locations. As of 2018, 28% of retailers are utilizing AI in some form, a 600% increase compared to 2016. Retailers in food/grocery, multi-category department stores, apparel, footwear, and home improvement stores have all embraced the adoption of AI.

About 26% of AI technology in retail directly interacts with customers; the remaining 74% is used for back-of-house operations tasks, boosting operational efficiency for existing workers or taking over dull tasks.

The future of customer service appears to be an increase in chatbots and virtual assistants. For example, H&M is using AI to style outfits for customers who send a link to their favorite item. Several retailers are using robots to roam store aisles and take inventory in order to make sure products are available for customers. Some locations are trialing the use of customer-service robots, while others are using AI to make personalized recommendations for consumers. In a recent survey, 73% of consumers said they prefer brands that use their personal data to improve the shopping experience.

By 2023, an estimated 95% of supply chain vendors will rely on AI learning. This is predicted to improve day-to-day, bottom-line operations by forecasting error rates, structuring planning for productivity, and reducing costs by tracking shipment data.

The infographic below can paint a better picture of how AI will be used in retail.

Why the UK needs a national strategy around AI and work – NS Tech

Posted: at 2:33 pm

News this week of the Ministry of Justice's AI tool for prisoners delivering discriminatory outcomes will come as no surprise to many, but precious few people seem to be suggesting solutions to what is an emerging but potentially widespread problem. As the failures and inbuilt bias of AI tools hit the headlines once again, it is important to look not only at the problems with new tech, but also at how we utilise it to the benefit of all.

The backlash against facial recognition software, for example, is only the latest symptom of how a failure to include people in discussions and decisions about new technologies is threatening their legitimacy and the positive opportunities they could offer.

Nowhere is the risk of disconnection and disempowerment greater than in the workplace. Employers are increasingly looking to tech as the solution to their problems and the future of their business, but with little thought about how to engage or involve their workforce.

Challenges to this trajectory are now erupting within the tech sector itself, with an increase in organising activity among tech developers themselves, from Google to the gaming industry, and among workers subject to surveillance or pushed into precarious employment by tech-enabled businesses like Uber or Amazon.

Research we commissioned from YouGov earlier in the year found that 58 per cent of UK workers felt they would be locked out of any discussion about how technology would affect their jobs. No wonder many see AI as a threat, rather than something with possibilities for improving their working lives. This is why Prospect is a partner to this year's Women Leading in AI conference on accountability and trust in Artificial Intelligence. Our members are optimistic about the future of work, but concerned about the rules that will govern it. We need to get serious about how we fix the culture of tech before it extends even further into our way of life. Ignoring workers in this debate is a sure-fire way of entrenching distrust and provoking opposition.

The real issue is not the technology but the power relationships behind it. These issues are familiar to unions like Prospect, but the speed of change means we urgently need to keep updating our ways of addressing them. In the last century, collective bargaining and campaigning focused on regulating human relationships and physical working conditions. We now need to understand a future in which critical relationships will be between humans, computer programs and data. The danger facing us is that AI and related technologies build in existing inequalities and insecurity, and in some cases make them worse.

A new agenda is already emerging. DIY unions and tech activism are spreading in America. Precarious workers are fighting back against exploitation by platform companies. Our friends in the GMB are working with unions worldwide to organise Amazon warehouse workers. At an international level we have been working through Uni Global Union, our international federation, on privacy and worker-focussed AI rules, as well as using new tech to empower employees. This month, as an alternative to employer-controlled surveillance and monitoring, we are piloting a new app, a bit like a FitBit for workers, that allows employees to collect their own data on working patterns and pressures. In the UK we are working with the Institute on the Future of Work to look at how the Equality Act can be used to tackle discrimination in algorithms and machine learning, and with the Fabian Commission on Workers and Technology on how we ensure automation is used to benefit everyone.

The UK has an opportunity to benefit from early adoption of technologies like AI. But if we don't talk about power and the imbalances it creates in work and society, then we won't get ethics right or start to deal with distrust. There are four principles that should define our approach.

First, worker voice and co-operation so that those developing, using and impacted by new technologies have a real say on their purpose, design, and implementation. AI ethics need to extend beyond the boardroom and actively engage and use the experiences of workers. Unions are leading the way with New Technology Agreements and increasing attention paid to issues such as transparency and data ownership in their bargaining agendas.

Second, a new focus on the social benefits of technology because a narrow focus on the technical intricacies (or commercial applications) will feel aloof and alienate people from the solutions technology can bring.

Third, we need to hear much, much more about job transformation so that workers are at the centre of the debate about the transition to a new economy. The government's Industrial Strategy singularly fails to include workers in its plans. But nearly two thirds of CEOs recently surveyed by PwC recognised that we need a national strategy around AI and work, which the state needs to play a key role in developing.

Finally, we need a national framework that all social partners can buy into on what national policy we need to develop innovative, transformative technology that is ethically responsible and socially beneficial. That should include employee and trade union representation on the board of the AI Council and ensuring worker voice is part of the work of bodies like the ICO and Centre for Data Ethics. New EU Commissioner Margrethe Vestager is already talking of plans for tougher regulation of big tech and ethical rules for AI. The EU social partnership model will mean that workers will be involved in these policy discussions. If the UK is to leave the EU, we must look to at least match this commitment, not think we can win by cutting workers out of the conversation.

There is a saying in the equality movement that is apt here: nothing about us, without us. Imposing change rarely gets the best outcomes. Taking people with you always gets you further.

Andrew Pakes is Research Director at Prospect Union. He tweets @andrew4mk

Company trains AI to find wildfires faster than humans can – Roanoke Times

Posted: at 2:33 pm

Multiple factors often align to make California wildfires unusually hard to contain: hurricane-force winds that sweep toward the coastline, steep and often rough terrain, drought conditions exacerbated by climate change and finite resources spread thin by a vast landscape covered in wilderness.

As if that weren't bad enough, many wildfires are fueled by another accelerant, something that almost ensures destruction on a mass scale: time.

California has 33 million acres of forest, far too much land for state agencies to monitor simultaneously. By the time reports of fires reach authorities and resources are mobilized, many hours, and sometimes days, can pass.

A San Francisco-based technology company called Chooch AI is trying to narrow that gap with the help of artificial intelligence, reducing the time between a fire's eruption and the moment it's spotted by people. The company, which is working with state agencies, researchers and technologists, is working to develop an AI tool that would scour hyper-detailed imagery from satellites for evidence of wildfires largely invisible to the naked eye. If successfully refined, experts believe, the tool could lead to earlier wildfire detection that would almost certainly save more people and property from destruction.

Using the same imagery, the AI could be used to identify vegetation that is dangerously close to power lines and locate spots where fires are likely to erupt.

The more specific goal, according to CEO Emrah Gultekin, is an ambitious one: monitoring every acre of forest in California every 10 minutes and alerting authorities almost immediately when a problem arises. He believes the AI may be operational as early as next fire season, which traditionally occurs in the fall, after the state's brushland has been dried out by months of summer heat.

"If there's a small brush fire, you want to get there as fast as possible," Gultekin said. "Oftentimes, it's out of control by the time people realize there's a fire and firefighters can be dispatched to the area, which is often hard to reach."

"Ultimately, we want to stop these fires from happening entirely," he added, noting that early damage estimates from fires tearing through California in recent weeks have already exceeded $25 billion. "It's a huge loss to the economy and a huge loss to human life and wildlife, and it's seriously costing families. There's a lot of suffering going on."

Even today, with satellites taking detailed pictures of the Earth multiple times a day, most fires are reported by people on the ground or in planes overhead. To locate fires, Chooch AI is training its AI to rely on infrared technology to identify heat and smoke, the latter of which can be pinpointed on an acre-by-acre basis.
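To make the idea concrete, here is a minimal sketch, not Chooch AI's actual model, of how hotspot candidates might be flagged in a grid of infrared brightness temperatures; the grid size, background values and threshold are all assumptions made for illustration.

```python
# Minimal sketch (assumed approach, not Chooch AI's model): flag grid cells in
# an infrared brightness-temperature scene that exceed an absolute threshold,
# reporting their locations roughly "acre by acre".
import numpy as np

np.random.seed(1)
# Hypothetical 100x100 grid of brightness temperatures in Kelvin (~300 K background).
scene = np.random.normal(loc=300.0, scale=2.0, size=(100, 100))
scene[42, 17] = 360.0  # one injected hot cell standing in for a small brush fire

def hotspot_cells(ir_grid, threshold_k=330.0):
    """Return (row, col) indices of cells hotter than the threshold."""
    rows, cols = np.where(ir_grid > threshold_k)
    return list(zip(rows.tolist(), cols.tolist()))

print(hotspot_cells(scene))  # [(42, 17)] -> the injected hot spot
```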

For a hypothetical comparison, Gultekin estimated you'd need thousands of people to pore over millions of acres of rural land with a similar degree of precision. By tracking traffic patterns and campfires, the AI can also identify where people have massed on the ground, creating a higher risk for wildfires. The overwhelming majority of wildfires, about 85% according to some estimates, are caused by humans, experts say.

Gultekin said Chooch AI's system is similar to models that are being used to pilot autonomous vehicles and operate facial-recognition technology at airports. Unlike similar systems, he said, Chooch AI has no plan to profit off the technology.

"The good thing about this AI is that it's agnostic to the weather conditions," Gultekin noted. "The AI can see through clouds and it can see through smoke, so it sees everything."

So far, the company's main obstacles have not revolved around training its AI, but instead getting access to useful satellite imagery. Because high-resolution satellites only provide data every 12 hours, Chooch AI's team is training its AI to interpret lower-resolution imagery, known as geostationary satellite imagery, which is updated more frequently.

Nirav Patel, a geospatial data scientist and an AI expert with California's Defense Innovation Unit, or DIU, who has been in touch with Chooch AI, said he believes their tool, when finished, has the potential to benefit firefighters on the ground. Patel is also a noncommissioned officer with California's Air National Guard and was part of recent efforts to extinguish the Kincade Fire in Sonoma County. He said he has seen how unpredictable raging fires can become when early warnings fail.

"That first six to 18 hours is critical to understand how the fire is moving so people can determine what physical or man-made obstacles can be placed in its path," he said, noting that a satellite-based AI platform could be used alongside planes outfitted with sophisticated sensors and human assets on the ground, such as fire lookouts.

There are other efforts to harness the power of AI to combat wildfires. Last year, James MacKinnon, a NASA computer engineer, told Fast Company he'd developed an algorithm that could quickly process imagery from onboard satellites, a much faster process than sending images to Earth to be processed by a supercomputer. MacKinnon said his algorithm recognized fires with 98% accuracy.

"The fires stick out like a sore thumb," he told the publication.

As wildfires ravage California, Patel said DIU is interested in partnering with companies that can offer solutions but want to maintain their intellectual property. To encourage commercial companies such as Chooch AI to develop innovative methods for combating natural disasters such as wildfires and doing damage assessments, DIU has created a competition for members of the AI community called the xView2 Challenge.

When governments are faced with crises, whether it's war or natural disasters, those challenges often produce technological innovation. Gultekin, whose company is participating in the xView2 Challenge, said California's wildfires are no different, particularly when it comes to AI.

"Five or 10 years ago, no way we could've used AI to interpret satellite data in a meaningful way," he said. "Right now, maybe we can. We're working on it and lots of other companies are as well. Expect a boom in this type of technology over the next five years."

The Apple Card algo issue: What you need to know about A.I. in everyday life – CNBC

Posted: at 2:33 pm

Apple CEO Tim Cook introduces Apple Card during a launch event at Apple headquarters on Monday, March 25, 2019, in Cupertino, California.

Noah Berger | AFP | Getty Images

When tech entrepreneur David Heinemeier Hansson recently took to Twitter saying the Apple Card gave him a credit limit that was 20 times higher than his wife's, despite the fact that she had a higher credit score, it may have been the first major headline about algorithmic bias you read in your everyday life. It was not the first, though; there have been major stories about potential algorithmic bias in child care and insurance, and it won't be the last.

The chief technology officer of project management software firm Basecamp, Heinemeier Hansson was not the only tech figure speaking out about algorithmic bias and the Apple Card. In fact, Apple's own co-founder Steve Wozniak had a similar experience. Presidential candidate Elizabeth Warren even got in on the action, bashing Apple and Goldman, and regulators said they are launching a probe.

Goldman Sachs, which administers the card for Apple, has denied the allegations of algorithmic gender bias, and has also said it will examine credit evaluations on a case-by-case basis when applicants feel the card's determination is unfair.

Goldman spokesman Patrick Lenihan said algorithmic bias is an important issue, but the Apple Card is not an example of it. "Goldman Sachs has not and will never make decisions based on factors like gender, race, age, sexual orientation or any other legally prohibited factors when determining credit worthiness. There is no 'black box,'" he said, referring to a term often used to describe algorithms. "For credit decisions we make, we can identify which factors from an individual's credit bureau-issued credit report or stated income contribute to the outcome. We welcome a discussion of this topic with policymakers and regulators."

As AI and the algorithms that underlie technology become an increasingly large part of everyday life, it's important to know more about the technology. One of the major claims made by technology firms using algorithms in decisions like credit scoring is that algorithms are less biased than human beings. That's being used in areas like job hiring: The state of California recently passed a rule to encourage the development of more job-based algorithms to remove human bias from the hiring process. But it is far from 100% scientifically proven that an AI that relies on code written by humans, as well as data fed into it as a learning mechanism, will not reflect the existing biases of our world.

Here are key points about AI algorithms that will factor in future headlines.

As Hansson and his wife found out, AI systems are becoming more commonplace in areas that everyday people rely on.

This technology is not only being introduced in credit and job hiring but insurance, mortgages and child welfare.

In 2016, Allegheny County, Pennsylvania, introduced a tool called the Allegheny Family Screening Tool. It is a predictive-risk modeling tool that is used to help with child welfare call-screening decisions when concerns of child maltreatment are raised to the county's department of human services.

The system collects data on each person in the referral and uses it to create an "overall family score." That score predicts the likelihood of a future event.

Allegheny did face some backlash, but one conclusion was that it created "less bad bias." Other places, including Los Angeles, have used similar technology in an attempt to improve child welfare, and it is an example of how AI systems will be used in ways that can affect people in large ways, and as a result, it is important to know how those systems can be flawed.

Most AI is created through a process called machine learning: teaching a computer by feeding it thousands of pieces of data so that it learns the information in the data set by itself.

An example would be giving an AI system thousands of pictures of dogs, with the purpose of teaching the system what a dog is. From there the system would be able to look at a photo and decide whether it is a dog or not based on that past data.

So what if the data you are feeding a system is 75% golden retrievers and 25% Dalmatians?
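For illustration only, here is a minimal sketch (not from the article) of why that imbalance matters: a model that learns little more than the label distribution will default to the majority breed.

```python
# Minimal sketch: a training set that is 75% golden retrievers and 25%
# dalmatians teaches a naive model to favor the majority class.
from collections import Counter

# Labels standing in for thousands of labeled dog photos (hypothetical data).
training_labels = ["golden_retriever"] * 750 + ["dalmatian"] * 250

counts = Counter(training_labels)
prior = {breed: n / len(training_labels) for breed, n in counts.items()}
print(prior)  # {'golden_retriever': 0.75, 'dalmatian': 0.25}

def naive_predict(_image=None):
    """A 'zero-information' classifier that only learned the class balance."""
    return max(prior, key=prior.get)

print(naive_predict())  # 'golden_retriever', whatever the input looks like
```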

Postdoctoral researcher at the AI Now Institute, Dr. Sarah Myers West, says these systems are built to reflect the data they are fed, and that data can be built on bias.

"These systems are being trained on data that's reflective of our wider society," West said. "Thus, AI is going to reflect and really amplify back past forms of inequality and discrimination."

One real-world example: While the human manager-based hiring process can undoubtedly be biased, debate remains over whether algorithmic job application technology actually removes human bias. The AI learning process could incorporate the biases of the data it is fed, for example, the resumes of top-performing candidates at top firms.

The AI Now Institute has also found biases in the people who are creating AI systems. In an April 2019 study, they found that only 15% of the AI staff at Facebook are women, and only 4% of their total workforce are black. Google's workforce is even less diverse, with only 10% of their AI staff being women and 2.5% of their workers black.

Joy Buolamwini, a computer scientist at MIT, found during her research on a project that would project digital masks onto a mirror that the generic facial recognition software she was using would not identify her face unless she wore a white-colored mask.

She found that the system could not identify the face of a black woman because the data set it was running on was overwhelmingly lighter-skinned.

"Quite clearly, it's not a solved problem," West said. "It's actually a very real problem that keeps resurfacing in AI systems on a weekly, almost daily basis."

AI algorithms are completely proprietary to the company that created them.

"Researchers face really significant challenges understanding where there's algorithmic bias because so many of them are opaque," West said.

"Even if we could see them, it doesn't mean we would understand," says Dipayan Ghosh, co-director of the Digital Platforms and Democracy Project and Shorenstein Fellow at Harvard University.

"It's difficult to draw any conclusions based on source code," Ghosh said. "Apple's proprietary creditworthiness algorithm is something that not even Apple can easily pin down, and say, 'Okay, here is the code for this,' because it probably involves a lot of different sources of data and a lot of different implementations of code to analyze that data in different siloed areas of the company."

To take things a step further, companies like Apple write their code to be legible to Apple employees, and it may not make sense to those outside of the company.

Right now there is little government oversight of AI systems.

"When AI systems are being used in areas that are of incredible social, political and economic importance, we have a stake in understanding how they are affecting our lives," West said. "We currently don't really have the avenues for the kind of transparency we would need for accountability."

One presidential candidate is trying to change that. New Jersey Senator Cory Booker sponsored a bill earlier this year called "The Algorithmic Accountability Act."

The bill requires companies to look at flawed algorithms that could create unfair or discriminatory situations for Americans. Under the bill, the Federal Trade Commission would be able to create regulations to 'conduct impact assessments of highly sensitive automated decision systems.' That requirement would impact systems under the FTC's jurisdiction, new or existing.

Cory Booker's website's description of the bill directly cites algorithmic malpractice from Facebook and Amazon in the past years.

Booker isn't the first politician to call for better regulation of AI. In 2016, the Obama administration called for development within the industry of algorithmic auditing and external testing of big data systems.

While government oversight is rare, an increasing practice is third-party auditing of algorithms.

The process involves an outside entity coming in and analyzing how the algorithm is made without revealing trade secrets, which is a large reason why algorithms are private.

Ghosh says this is happening more frequently, but not all of the time.

"It happens when companies feel compelled by public opinion or public sway to do something because they don't want to be called out having had no audits whatsoever," Ghosh said.

Ghosh also said that regulatory action can happen, as seen in the FTC's numerous investigations into Google and Facebook. "If a company is shown to harmfully discriminate, then you could have a regulatory agency come in and say 'Hey, we're either going to sue you in court, or you're going to do X,Y and Z. Which one do you want to do?'"

This story has been updated to include a comment from Goldman Sachs that it has not and will never make decisions based on factors like gender, race, age, sexual orientation or any other legally prohibited factors when determining credit worthiness.

What AI startups need to achieve before VCs will invest – TechCrunch

Posted: at 2:33 pm

David Blumberg, Contributor

Funding of artificial intelligence-focused companies reached approximately $9.3 billion in the U.S. in 2018, an amount that will continue to rise as the transformative impact of AI is realized. That said, not every AI startup has what it takes to secure an investment and scale to success.

So, what do venture capitalists look for when considering an investment in an AI company?

What we look for in all startups

Some fundamentals are important in any of our investments, AI or otherwise. First, entrepreneurs need to articulate that they are solving a large and important problem. It may sound strange, but finding the right problem can be more difficult than finding the right solution. Entrepreneurs need to demonstrate that customers will be willing to switch from what they're currently using and pay for the new solution.

The team must demonstrate their competence in the domain, their functional skills and, above all, their persistence and commitment. The best ideas likely won't succeed if the team isn't able to execute. Setting and achieving realistic milestones is a good way to keep operators and investors aligned. Successful entrepreneurs need to show why their solution offers superior value to competitors in the market or, in the minority of cases where there is an unresolved need, why they're in the best position to solve it.

In addition, the team must clearly explain how their technology works, how it differs and is advantageous relative to existing competitors and must explain to investors how that competitive advantage can be sustained.

For AI entrepreneurs, there are additional factors that must be addressed. Why? It is fairly clear that we're in the early stages of this burgeoning industry, which stands to revolutionize sectors from healthcare to fintech, logistics to transportation and beyond. Standards have not been settled, there is a shortage of personnel, large companies are still struggling with deployment, and much of the talent is concentrated in a few large companies and academic institutions. In addition, there are regulatory challenges that are complex and growing due to the evolutionary nature of the technology.

Here are five things we like to see AI entrepreneurs demonstrate before making an investment:

Demonstrate mastery over their data and its value: AI needs big data to succeed. There are two models: companies can either help customers add value to their data or build a data business using AI. In either case, startups must demonstrate that the data is reliable, secure and compliant with all regulatory rules. They must also demonstrate that AI is adding value to their own data: it must explain something, derive an explanation, identify important trends, optimize or otherwise deliver value.

With the sheer abundance of data available for companies to collect today, it's imperative that startups have an agile infrastructure in place that allows them to store, access and analyze this data efficiently. A data-driven startup must become ever more responsive, proactive and consistent over time.

AI entrepreneurs should know that while machine learning can be applied to many problems, it may not always yield accurate predictions in every situation. Models may fail for a variety of reasons, one of which is inadequate, inconsistent or variable data. Successful mastery of the data demonstrates to customers that the data stream is robust, consistent and that the model can adapt if the data sources change.

Entrepreneurs can better address their customer needs if they can demonstrate a fast, efficient way to normalize and label the data using meta tagging and other techniques.

Remember that transparency is a virtue: There is an increased need in certain industries such as financial services to explain to regulators how the sausage is made, so to speak. As a result, entrepreneurs must be able to demonstrate explainability to show how the model arrived at the result (for example, a credit score). This brings us to an additional issue about accounting for bias in models and, here again, the entrepreneur must show the ability to detect and correct bias as soon as it is found.
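As a toy illustration of what such explainability can look like, consider a linear scoring model whose per-feature contributions can be listed explicitly; the features, weights and base score below are invented for the example and are not any lender's actual model.

```python
# Minimal sketch (illustrative weights, not a real credit model): with a
# linear score, each input's contribution to the result can be reported,
# which is the kind of traceability regulators ask for.
applicant = {"payment_history": 0.9, "utilization": 0.35, "account_age_years": 6}
weights   = {"payment_history": 300, "utilization": -200, "account_age_years": 10}
base_score = 500

contributions = {name: weights[name] * value for name, value in applicant.items()}
score = base_score + sum(contributions.values())

print(round(score))   # 760
print(contributions)  # shows how much each factor moved the score up or down
```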

How AI and Facial Recognition Are Impacting the Future of Banking – Observer

Posted: at 2:33 pm

A woman uses an ATM with facial recognition technology during the presentation of the new service by CaixaBank in Barcelona on February 14, 2019. LLUIS GENE/AFP via Getty Images

So, I just got the new iPhone 11 Pro. I have to say, I pretty much love the facial recognition unlock feature. And no, Apple is not paying me to say that. Prior to this, I was a facial recognition skeptic. But now, I can unlock my phone with my face! I love it, but I'm also slightly scared at the possibilities of what other people could do if they get access to my face without my knowledge. Better keep my face to myself.

It was only a matter of time before we heard about the financial services industry adopting innovative biometrics technology for access management of private information. In other words: Banks are using facial recognition.

Sounds practical. Sounds scary. Sounds both practical and scary. I've seen the John Woo movie Face/Off, and I'm well aware of how this could all go horribly wrong.

"The financial sector understands the constant need for new and ever-improving security measures better than most industries, because of the implicit risk of being a bank," Shaun Moore, co-founder and CEO of Trueface, told Observer. "There are people trying to hack, rob or defraud this industry every single day."

Moore's company is working with some of the top global banks to infuse facial recognition into existing security and access management infrastructure.

"We are seeing the financial services sector test face recognition as a part of multi-factor authentication for ATM withdrawals, mobile banking enrollment and mobile account access and transactions," said Moore. "By implementing face recognition as the key step in multi-factor authentication, banks are able to mitigate their exposure to risk and fraud, saving themselves millions of dollars in the process."

Good point. Don't we all like saving millions of dollars in the process? I know I do.

What we can expect from our sci-fi financial transactions in the next five to 10 years is a federated identity across the digital and physical banking world, where your face will be the key to accessing your banking information, transacting and securing your account. The aim is to reduce fraud and lead to more secure financial data. Mexico has already adopted a biometric security mandate, which Moore sees as a trend that will spread first to South America and eventually to the U.S.

"Whether you are withdrawing money from an ATM or you enter a bank's physical branch, our goal is to provide an extremely frictionless, personalized experience with a focus on security," he said.

Moore sees the adoption of facial recognition repositioning the financial sector as a leader in service and security. The tech nuts and bolts on how this works? "Trueface has developed a suite of SDKs (software development kits) and a dockerized container solution that harness the power of machine learning and artificial intelligence to transform your camera data into actionable intelligence," Moore explained. "Computer vision will be used for automated account registration, recognizing VIPs to enhance service at brick and mortar locations, recognizing known criminals in branches and alerting authorities, access control for vaults and even employee timekeeping."

The whole VIP banking system does raise some flags about secret consumer scores, which allow companies to sell and profit from our data. As Edward Snowden said, there is no good reason for companies to hold onto our data except when they see value and profit from it.

But according to Moore, "We provide the solutions to run on our client infrastructure so that no data ever leaves the client's site/servers, ensuring performance but also data privacy and security."

Still, the city of San Francisco has banned facial recognition technology used by local law enforcement agencies. One slight problem is that facial recognition has trouble identifying people of color.

So, how is that being combatted with financial security?

"The city, which was not using face recognition to begin with, created a legal process for using face recognition, not an outright ban," said Moore. "This is something that we are in favor of. The bias discussed around face recognition has to do with the underlying data the algorithms are trained with. If the data is disproportionate, then the results will also be skewed in one direction. The industry as a whole recognizes this and has been actively working towards mitigating data bias risk."

Moore said the problem with facial recognition bias is shrinking and will cease to exist in the very near future.

"The impact of this hurdle plays more of a role when it comes to recognizing one person out of many, thousands or millions," he stated. "Typically with account authentication, the database we scan is few or one-to-one, making this a non-issue."

The skeptics of facial recognition, Moore finds, are largely siloed in a surveillance use, not access control. Still, there are other possible failures and downsides with facial recognition and security.

"The biggest concern is the ability to spoof or falsify identity when enrolling in an account remotely," said Moore. "The solution to this problem is to ensure liveness and/or to pair biometrics with other forms of verification."

Plus, with artificial intelligence as part of the facial recognition formula, there is the classic quote from Elon Musk that we need to fear AI more than nukes. Is the same fear justified for AI use in the banking sector?

"AI is still in its infancy, but what I believe Elon is referring to here is that once something is created, it's hard to reverse progress if we don't like the results," Moore said. "The computing power required to reach this type of AI-driven world is still a decade or more away, so it is more important that we recognize the potential risks and re-adjust our path accordingly."

Moore's takeaway is that face recognition is a tool that can be used to significantly improve security and efficiency. In the meantime, I'm going to be locking and unlocking my new iPhone with my face until it's time for my next banking expedition.

Intel unveils its first chips built for AI in the cloud – Engadget

Posted: at 2:33 pm

The chipmaker also unveiled a next-gen Movidius Vision Processing Unit whose updated computer vision architecture promises over 10 times the inference performance while reportedly managing efficiency six times better than rivals. Those claims have yet to pan out in the real world, but it's safe to presume that anyone relying on Intel tech for visual AI work will want to give this a look.

You'll have to be patient for the Movidius chip, as it won't ship until sometime in the first half of 2020. This could nonetheless represent a big leap for AI performance, at least among companies that aren't relying on rivals like NVIDIA. Intel warned that bleeding-edge uses of AI could require performance to double every 3.5 months -- that's not going to happen if companies simply rely on conventional CPUs. And when internet giants like Facebook and Baidu lean heavily on Intel for AI, you might see practical benefits like faster site loads or more advanced AI features.

Plum, the AI money management app, raises $3M more and comes to Android – TechCrunch

Posted: at 2:33 pm

Plum, the U.K.-based AI assistant that helps you manage your money and save more, has raised $3 million in additional funding, money it plans to use for further growth, including European expansion.

The London company has also quietly launched its app for Android phones, adding to an existing iOS app and Facebook Messenger chatbot.

Backing this round, which is essentially a second tranche to Plum's earlier $4.5 million raise in the summer, are EBRD and VentureFriends, both existing investors. Christian Faes, founder and CEO of LendInvest, has also participated.

It brings the fintech startup's total funding to $9.3 million since being founded by early TransferWise employee Victor Trokoudes and Alex Michael in 2016.

The new investment is said to come at the end of a year of rapid expansion for Plum in both London and Athens, including growing the team to 31 employees. Senior hires include Max Mawby, Plum's head of Behavioural Science, who previously worked for the U.K. government and ran the fintech sector-focused Behavioural Insights Team.

In a call, Trokoudes told me that take-up for Plum's iOS app has been high and Android is following a similar trajectory, proof that the startup's AI assistant has perhaps outgrown its chatbot and Facebook Messenger beginnings (competitor Cleo has also released dedicated iOS and Android apps as an alternative to Facebook Messenger).

He also says Plum now has 650,000 registered users, of which around 70% are active monthly. In recent user feedback sessions conducted by the startup, the biggest draw to the app is that its aim of changing financial behaviour to help people save more appears to be working.

When users stick around using Plum for long enough, Trokoudes says they are surprised (and delighted) that it actually works.

Like similar apps, Plum's artificial intelligence determines what you can afford to save by analysing your bank transactions. It then puts money away each month in the form of round-ups and/or regular savings.

You can open an ISA investment account and invest based on themes, such as only in ethical companies or technology. Another related feature is Splitter, which, as the name suggests, lets you split your automatic savings between Plum savings and investments, selecting the percentage amounts to go into each pot from 0-100%.
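As an illustration of the mechanics, here is a minimal sketch of round-up saving with a Splitter-style percentage split; this is assumed logic for the example, not Plum's actual implementation.

```python
# Minimal sketch (assumed logic, not Plum's implementation): round each card
# payment up to the next whole pound, save the spare change, and split it
# between a savings pot and an investment pot.
import math

def round_up(amount):
    """Spare change left when a debit is rounded up to the next whole pound."""
    return round(math.ceil(amount) - amount, 2)

def split_savings(transactions, invest_pct=30):
    """Total round-ups, split between savings and investments by percentage."""
    total = round(sum(round_up(t) for t in transactions), 2)
    invested = round(total * invest_pct / 100, 2)
    return {"savings": round(total - invested, 2), "investments": invested}

# Three example card payments of 2.40, 7.10 and 3.99 (pounds)
print(split_savings([2.40, 7.10, 3.99], invest_pct=50))
# {'savings': 0.76, 'investments': 0.75}
```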

Trokoudes says that Plum recently launched two new intelligent saving rules: the 52-Week Challenge, which aims to help you save £1,367 over a year; and the Rainy Day Rule, which puts aside money whenever it rains (yes, really!).

"Saving rules use automation to help people save more effectively without overloading them with information," adds the Plum founder in a statement. "We have good evidence that this approach works: our automated round-ups feature, which we launched earlier this year, has become a firm favourite among Plum users, boosting their savings by 50% on average."

Meanwhile, another one of Plum's competitors, Chip, recently raised £3.8 million in equity crowdfunding on Crowdcube. It was part of a round targeting $7.3 million in total, although it isn't clear if all of that has closed yet (last time I checked, the company had so far secured $5 million). Noteworthy, the equity crowdfund gave Chip a pre-money valuation of £36.78 million based on over 153,000 accounts opened.

Tech Optimization: Getting the most out of AI – Healthcare IT News

Posted: November 9, 2019 at 8:42 am

Artificial intelligence is a highly complex technology that, once implemented, requires ongoing oversight to make sure it is doing what is expected of it and ensure it is operating at optimal levels.

Healthcare provider organizations using AI technologies also need to make sure they're getting the biggest bang for their buck. In other words, they need to optimize the AI so that the technologies are meeting the specific needs of their organizations.

We spoke with six artificial intelligence experts, each with extensive experience in healthcare deployments, who offered comprehensive advice on how CIOs and other health IT workers can optimize their AI systems and approaches to best work for their provider organizations.

Optimizing AI depends on the understanding of what AI is capable of and applying it to the right problem, said Joe Petro, chief technology officer at Nuance Communications, a vendor of AI technology for medical image interpretations.

"There is a lot of hype out there, and, unfortunately, the claims are somewhat ridiculous," he said. "To optimize AI, we all need to understand: the problem we are trying to solve; how AI can solve the problem; whether an existing capability can be augmented with AI; and when AI is not helpful."

Joe Petro, Nuance Communications

For example, is traceability important? AI has a well-known black box limitation: every fact or piece of evidence that contributed to a decision or conclusion made by the neural net is not always known.

"It is sometimes impossible to trace back through the bread crumb trail leading to the conclusion made by the neural net," Petro explained. "Therefore, if traceability is a requirement of the solution, you may need to retreat to a more traditional computational methodology, which is not always a bad thing."

Also, is the problem well-behaved and well-conditioned for AI? For example, he said, are there clear patterns in the solution to the problem that repeat, do not vary widely, and are essentially deterministic?

"For example, if you give the problem to a series of experts, will they all arrive at the same answer?" he posed. "If humans are given the same inputs and disagree on the answer, then AI may not be able to make sense of the data, and the neural nets may deliver results that do not agree with the opinions of certain experts. Rest assured that AI will find a pattern; the question is whether or not the pattern is repeatable and consistent."

"So in today's world of AI, the problems being solved by AI, especially in healthcare, are deliberately narrowly defined, thereby increasing the accuracy and applicability of AI," Petro explained. Choosing the right problem to solve and narrowing the scope of that problem is key to delivering a great outcome, he advised.

"Furthermore, training data needs to be readily available at the volume necessary to create dependable AI models that produce consistently verified results," he added. "Unfortunately, sometimes there is no data available in the form that is required to train the neural nets. For example, in some cases, AI requires marked-up and annotated data. This kind of markup is sometimes not available."

When a radiologist reads an image, they may or may not indicate exactly where in the image the diagnosis was made. No data markup makes training sometimes impossible. When a CDI specialist or care coordinator reads through an entire case, they most likely will not indicate every piece of evidence that prompted a query back to a physician.

"Again, no data markup makes training sometimes impossible," Petro stated. "Therefore, someone needs to go back over the data and potentially add the markup and annotations to train the initial models. Markup is not always necessary, but we need to realize that the data we need is not always available and may need to be expensively curated. The fact is that data is essentially the new software. Without the right data, AI cannot produce the wanted results."

Ken Kleinberg, practice lead, innovative technologies, at consulting firm Point-of-Care Partners, cautioned that AI is being promoted as being able to solve just about any problem that involves a decision.

"Many applications that were formerly addressed with proven rules-based or statistical approaches now are routinely being identified as AI targets," he explained. "Given the many extra considerations for AI involving model selection, training, validation, the expertise required, etc., this may be overkill. In addition to ROI concerns, using AI may expose organizations to problems unique or common to AI that simpler or alternative approaches are less susceptible to."

Ken Kleinberg, Point-of-Care Partners

Even basic machine learning approaches that may do little more than try out a bunch of different statistical models require some degree of expertise to use, he added.

"Considerations of which applications to pick for AI include how many possible variables are in play, known complexities and dependencies, data variability, historical knowledge, availability of content experts, transparency of decision requirements, liability concerns, and how often the system might need to be retrained and tested," Kleinberg advised.

Experience of the model builders and sophistication and investment with an AI platform also should be considered, but AI should not be an expensive hammer looking for a simple nail.

For example, it may be that only a handful of known variables are key to deciding whether an intervention with a patient suffering from a specific condition is needed: if the patient has these specific triggers, they are going to be brought in.

"Why attempt to train a system on what is already known?" he said. "Sure, if the goal is to discover unknown nuances or dependencies, or deal with rare conditions, AI could be used with a broader set of variables. For most organizations, they will be safer to go with basic rules-based models where every aspect of the decision can be reviewed and modified as new knowledge is accumulated, especially if there are a manageable number of rules, up to a few hundred. That could be a better initial step than going directly to an AI model."
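To show what that rules-based alternative can look like in practice, here is a minimal sketch with a handful of triggers; the variables and thresholds are invented for illustration and are not clinical guidance.

```python
# Minimal sketch (hypothetical triggers, not clinical guidance): a transparent
# rules-based check where every rule that fired can be reviewed and edited.
RULES = [
    ("missed_appointments", lambda p: p.get("missed_appointments", 0) >= 2),
    ("a1c_above_9",         lambda p: p.get("a1c", 0.0) > 9.0),
    ("recent_er_visit",     lambda p: p.get("days_since_er_visit", 999) < 30),
]

def needs_intervention(patient):
    """Return whether to bring the patient in, plus the exact rules that fired."""
    fired = [name for name, rule in RULES if rule(patient)]
    return bool(fired), fired

triggered, reasons = needs_intervention(
    {"missed_appointments": 3, "a1c": 8.1, "days_since_er_visit": 45}
)
print(triggered, reasons)  # True ['missed_appointments']
```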

In order to get the most out of an AI investment and optimize the technology for a specific healthcare provider organization, bring in members from across the organization, not just the IT team or clinical leadership, said Sanjeev Kumar, vice president of artificial intelligence and data engineering at HMS (Healthcare Management Systems).

"It's important to invest time to understand in detailed nuance the full workflow, from patient scheduling to check-in to clinical workflow to discharge and billing," he said.

Sanjeev Kumar, HMS

"Each member of the team will be able to communicate how AI technology will impact the patient experience from their perspective and how these new, rich insights will impact the everyday office workflow," Kumar said. "Without this insight at the beginning of the implementation, you risk investing a significant amount of money in technology that is not used by the staff, that negatively impacts the patient experience or, worst of all, gives inappropriate insights."

Collectively, incorporating staff early on may require additional investment in manpower, but will result in an output that can be used effectively throughout the organization, he added.

On another technology optimization front, healthcare provider organizations have to be very careful with their data.

"Data is precious, and healthcare data is at the outer extreme of sensitive information," said Petro of Nuance Communications. "In the process of optimizing AI technology, we need to make sure the AI vendor is a trusted partner that acts as a caretaker of the PHI. We have all heard the horror stories in the press about the misuse of data. This is unacceptable."

Partnering with AI vendors that are legitimate custodians of the data and only use the data within the limitations and constraints of the contract and HIPAA guidelines is a table-stakes governing dynamic, he added.

"Make sure to ask the hard questions," he advised. "Ask about the use of the data: what is the PHI data flow, how does it move, where does it come to rest, who has access to it, what is it used for, and how long does the vendor keep it? Healthcare AI companies need to be experts in the area of data usage and the limitations around data usage. If a vendor wobbles in these areas, move on."

Another very important consideration for providers optimizing AI technology is the amount of variability in the processes and data they are working with, said Michael Neff, vice president of professional services at Recondo Technology.

"For example, clinical AI models created for a population of patients with similar ethnic backgrounds and a small range of ages are most likely simpler than the same model created for an ethnically diverse population," he explained. "In the latter population, there will probably be a lot more edge cases, which will either require more training data or will need to be excluded from the model."

Michael Neff, Recondo Technology

"If the decision is made to exclude those cases, or if a model is built from a more cohesive data set, it will be very important to limit the use of the AI model to the cases where its predictions are valid," he continued.

The same argument, he said, holds for business variability: A model trained with data sent from a specific payer may not be valid for other payers that a provider works with.

When using AI approaches, and especially natural language processing, it is key to provide an audit trail to justify recommendations and findings, advised Dr. Elizabeth Marshall, associate director of clinical analytics at Linguamatics IQVIA.

"Any insights or features taken from clinical notes and used in AI algorithms need to be easily traced to the exact place in the document they came from," she cautioned. "This enables clinical staff to validate the findings and build confidence in AI."

Dr. Elizabeth Marshall, Linguamatics IQVIA

For example, consider ensuring a hospital is receiving the right repayment for chronic condition comorbidities such as hepatitis (chronic viral B and C) and HIV/AIDS. "It is not only important to capture the data but also to ensure one is able to link the data back to the patient's EHR encounter where the information was obtained," she said.
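A minimal sketch of that kind of audit trail, using an assumed data model rather than Linguamatics' actual API, is to store each extracted finding with the encounter, document and character offsets it came from:

```python
# Minimal sketch (assumed data model, not Linguamatics' API): each extracted
# finding carries provenance so staff can verify it in the source note.
from dataclasses import dataclass

@dataclass
class Finding:
    concept: str       # e.g. "chronic hepatitis C"
    encounter_id: str  # EHR encounter the note belongs to
    document_id: str   # source clinical note
    start: int         # character offsets of the supporting text
    end: int

note = "Patient with long-standing chronic hepatitis C; follow-up scheduled."
span = "chronic hepatitis C"
finding = Finding(
    concept=span,
    encounter_id="enc-2019-0042",  # hypothetical identifiers
    document_id="note-7781",
    start=note.index(span),
    end=note.index(span) + len(span),
)
print(note[finding.start:finding.end])  # "chronic hepatitis C"
```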

Further, it's critical to consider how any insights will be made actionable and incorporated into clinical workflow; having a series of AI algorithms with no way to actually improve patient care is not impactful, Marshall said. For example, clinicians may want to improve the identification of patients who might be missed in a busy emergency department. Time is of the essence, and manually re-reviewing every radiology report to look for missed opportunities for follow-up wastes precious time.

Instead, they could use natural language processing to review unstructured sections of the reports for critical findings, such as identifying patients with incidental pulmonary nodules, she advised.

"When high-risk patients are identified, it's critical to have a process in place for appropriate follow-up," she said. "To actually improve care, the results need to be flagged in a risk register for follow-up by care coordinators after the patients are no longer in immediate danger."

On the AI spectrum, full replication of human thought is sometimes referred to as strong or full AI. This does not exist yet, certainly not in medicine.

"In healthcare, we primarily are focused on narrow or weak AI, which could be described as the use of software or algorithms to complete specific problem-solving or reasoning tasks, at various levels of complexity," said Dr. Ruben Amarasingham, president and CEO of Pieces Technologies. "These include specific focal tasks like reading a chest X-ray, interpreting the stage of a skin wound, reading a doctor's note and understanding the concerns."

Dr. Ruben Amarasingham, Pieces Technologies

One optimization best practice is to understand how the proposed AI technology is being inserted into the workflow, whether the insertion truly decreases friction in the workflow from the perspective of the stakeholder (provider, patient and family), and to effectively measure and evaluate that value-add immediately after go-live, he said.

"If the AI is not reducing stress and complexity of the workflow, it is either not working, not optimized or not worth it," he added.

AI models built by third parties may well serve the local needs of an organization, at least as a starting point, but that could be a risk and an opportunity for optimization, said Kleinberg of Point-of-Care Partners.

"As the number of prebuilt models proliferates (for example, sepsis, length-of-stay prediction, no-shows), it becomes more important to understand the quality of the training and test sets used and attempt to recognize what assumptions and biases the prebuilt model may contain," he said. "There has been a ton of recent research on how easily AI, particularly deep learning models, can be fooled, for example, by not paying enough attention to what's in the background of an image. Are the models validated by any independent parties?"

"Consider an application that recommends the most appropriate type of medical therapy management program for a patient with a goal of increasing medication adherence," Kleinberg advised. "To what degree might the test set have been chosen for individuals affected by certain environmental factors (warm versus cold climate), fitness levels, ethnic/economic background, number of medications taken, etc., and how does that compare to the local population to be analyzed?" he added.

Retraining and testing the model with a data set tuned to local demographics will be a key practice for achieving more optimized results, he advised.

Amarasingham of Pieces Technologies offered another AI optimization best practice: Health systems should set up governance systems for their AI technology.

"I am excited to see the development of AI committees at health systems, similar to the development of evidence-based medicine, data governance or clinical order set committees in the recent past," he said. "These groups should be involved with evaluating the performance of their AI systems in the healthcare workplace and not rely solely on vendors to oversee their products or services."

They also could be tasked with developing the AI roadmap for their institution over time and as new AI technologies emerge, he added. These committees could be a mix of clinicians, administrators, informaticists and information system team members, he suggested.

Implementing any artificial intelligence technology can require a little more investment than originally anticipated, but if a healthcare organization starts small and plans properly, it will see true returns on that capital and manpower, advised Kumar of HMS.

"All healthcare organizations, whether provider, payer or employer, will attest that AI has the ability to help transform healthcare operations," he stated. "However, AI by itself is not a silver bullet for revolutionizing the system; it requires people, process and technology planning, workflow transformation, and time to make sure that it is successful."

This means that to correctly optimize the technology, one needs to go slowly and make sure one considers all the factors that will impact the output, from the data going in to how the insights are reported back to providers for action, he said.

In order to ensure that you get the most out of your investment, he concluded, know that you will need to invest more and take longer to see the results.

Twitter: @SiwickiHealthIT. Email the writer: bill.siwicki@himssmedia.com. Healthcare IT News is a HIMSS Media publication.
