The Evolution of Artificial Intelligence and Future of National Security – The National Interest

Artificial intelligence is all the rage these days. In the popular media, regular cyber systems seem almost passé, as writers focus on AI and conjure up images of everything from real-life Terminator robots to more benign companions. In intelligence circles, China's uses of closed-circuit television, facial recognition technology, and other monitoring systems suggest the arrival of Big Brother, if not quite in 1984, then only about forty years later. At the Pentagon, legions of officers and analysts talk about the AI race with China, often with foreboding admonitions that the United States cannot afford to be second in class in this emerging realm of technology. In policy circles, people wonder about the ethics of AI, such as whether we can really delegate to robots the ability to use lethal force against America's enemies, however bad they may be. A new report by the Defense Innovation Board lays out broad principles for the future ethics of AI, but only in general terms that leave much further work still to be done.

What does it all really mean, and is AI likely to be all it's cracked up to be? We think the answer is complex and that a modest dose of cold water should be thrown on the subject. In fact, many of the AI systems being envisioned today will take decades to develop. Moreover, AI is often being confused with things it is not. Precision about the concept will be essential if we are to have intelligent discussions about how to research, develop, and regulate AI in the years ahead.

AI systems are basically computers that can learn how to do things through a process of trial and error, with some mechanism for telling them when they are right and when they are wrong, such as picking out missiles in photographs, or people in crowds, as with the Pentagon's "Project Maven," and then applying what they have learned to analyze future data. In other words, with AI, the software is built by the machine itself, in effect. The broad computational approach for a given problem is determined in advance by real, old-fashioned humans, but the actual algorithm is created through a process of trial and error by the computer as it ingests and processes huge amounts of data. The thought process of the machine is really not that sophisticated. It is developing artificial instincts more than intelligence: examining huge amounts of raw data and figuring out how to recognize a cat in a photo or a missile launcher on a crowded highway, rather than engaging in deep thought (at least for the foreseeable future).
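To make that trial-and-error loop concrete, here is a minimal sketch in Python. The "photo features" and labels are invented for illustration, and a real system like Project Maven would use far larger models on actual imagery; the point is only that the humans fix the broad approach, while the weights, the actual algorithm, emerge from repeated guessing and correction.

```python
# A toy version of machine learning by trial and error. All data is synthetic.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                    # 200 hypothetical "photo feature" vectors
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(float)  # label 1 = "missile present"

w = np.zeros(5)                                  # the algorithm the machine builds itself
for _ in range(1000):
    guess = 1 / (1 + np.exp(-X @ w))             # trial: score every training photo
    w -= 0.1 * X.T @ (guess - y) / len(y)        # error: nudge weights where it was wrong

new_photo = rng.normal(size=5)
print("missile probability:", 1 / (1 + np.exp(-new_photo @ w)))
```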

This definition allows us to quickly identify some types of computer systems that are not, in fact, AI. They may be important, impressive, and crucial to the warfighter, but they are not artificial intelligence because they do not create their own algorithms out of data and multiple iterations. There is no machine learning involved, to put it differently. As our colleague Tom Stefanick points out, there is a fundamental difference between advanced algorithms, which have been around for decades (though they are constantly improving, as computers get faster), and artificial intelligence. There is also a difference between an autonomous weapons system and AI-directed robotics.

For example, the computers that guide a cruise missile or a drone are not displaying AI. They follow an elaborate, but predetermined, script, using sensors to take in data and then putting it into computers, which then use software (developed by humans, in advance) to determine the right next move and the right place to detonate any weapons. This is autonomy. It is not AI.

Or, to use an example closer to home for most people, when your smartphone uses an app like Google Maps or Waze to recommend the fastest route between two points, this is not necessarily AI either. There are only so many possible routes between two places. Yes, there may be dozens or hundreds, but the number is finite. As such, the computer in your phone can essentially look at each reasonable possibility separately, taking in data from the broader network that many other people's phones contribute, to factor traffic conditions into the computation. But the way the math is actually done is straightforward and predetermined.
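For contrast with the learning loop above, here is what that kind of route computation looks like: a fixed, human-written search over a finite set of options. The road network and travel times below are invented; real mapping apps use the same family of shortest-path algorithms at vastly larger scale.

```python
# Deterministic fastest-route search (Dijkstra's algorithm). Nothing is learned:
# the rules are fixed in advance, and only the input data (traffic times) changes.
import heapq

def fastest_route(graph, start, goal):
    queue = [(0, start, [start])]   # (minutes so far, current node, path taken)
    seen = set()
    while queue:
        minutes, node, path = heapq.heappop(queue)
        if node == goal:
            return minutes, path
        if node in seen:
            continue
        seen.add(node)
        for neighbor, cost in graph.get(node, []):
            if neighbor not in seen:
                heapq.heappush(queue, (minutes + cost, neighbor, path + [neighbor]))
    return None

roads = {  # hypothetical travel times in minutes, as reported by other phones
    "home": [("highway", 10), ("back_road", 4)],
    "back_road": [("highway", 3), ("office", 15)],
    "highway": [("office", 8)],
}
print(fastest_route(roads, "home", "office"))  # (15, ['home', 'back_road', 'highway', 'office'])
```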

Why is this important? For one thing, it should make us less breathless about AI and help us see it as one element in a broader computer revolution that began in the second half of the twentieth century and picked up steam in this century. It should also help us see what may or may not be realistic and desirable to regulate in the realm of future warfare.

The former vice chairman of the joint chiefs of staff, Gen. Paul Selva, has recently argued that the United States could be about a decade away from having the capacity to build an autonomous robot that could decide when to shoot and whom to kill, though he also asserted that the United States had no plans actually to build such a creature. But if you think about it differently, in some ways we've already had autonomous killing machines for a generation. That cruise missile we discussed above has been deployed since the 1970s. It has instructions to fly a given route and then detonate its warhead without any human in the loop. And by the 1990s, we knew how to build things like skeet submunitions that could loiter over a battlefield and look for warm objects like tanks, using software to decide when to destroy them. So the killer machine was in effect already deciding for itself.

Even if General Selva's terminator is not built, robotics will in some cases likely be given greater decision-making authority over when to use force, since we have in effect already crossed this threshold. This highly fraught subject requires careful ethical and legal oversight, to be sure, and the associated risks are serious. Yet the speed at which military operations must occur will create incentives not to have a person in the decision-making loop in many tactical settings. Whatever the United States may prefer, restrictions on automated uses of violent force would also appear relatively difficult to negotiate (even if desirable), given likely opposition from Russia and perhaps from other nations, as well as huge problems with verification.

For example, small robots that can operate as swarms on land, in the air, or in the water may be given certain leeway to decide when to operate their lethal capabilities. By communicating with each other and processing information about the enemy in real time, they could concentrate attacks where defenses are weakest, in a form of combat that John Allen and Amir Husain call "hyperwar" because of its speed and intensity. Other types of swarms could attack parked aircraft; even small explosives, precisely detonated, could disable wings or engines or produce secondary and much larger explosions. Many countries will have the capacity to do such things in the coming twenty years. Even if the United States tries to avoid using such swarms for lethal and offensive purposes, it may elect to employ them as defensive shields (perhaps against North Korean artillery attack against Seoul) or as jamming aids to accompany penetrating aircraft. With UAVs that can fly ten hours and one hundred kilometers now costing only in the hundreds of thousands of dollars, and quadcopters with ranges of a kilometer more or less costing in the hundreds of dollars, the trendlines are clear, and the affordability of using many drones in an organized way is evident.

Where regulation may be possible, and ethically compelling, is in limiting the geographic and temporal space where weapons driven by AI or other complex algorithms can use lethal force. For example, the swarms noted above might only be enabled near a ship, or in the skies near the DMZ in Korea, or within a small distance of a military airfield. It may also be smart to ban letting machines decide when to kill people. It might be tempting to use facial recognition technology on future robots to have them hunt the next bin Laden, Baghdadi, or Soleimani in a huge Mideastern city. But the potential for mistakes, for hacking, and for many other malfunctions may be too great to allow this kind of thing. It probably also makes sense to ban the use of AI to attack the nuclear command and control infrastructure of a major nuclear power. Such attempts could give rise to "use them or lose them" fears in a future crisis and thereby increase the risks of nuclear war.

We are in the early days of AI. We can't yet begin to foresee where it's going and what it may make possible in ten or twenty or thirty years. But we can work harder to understand what it actually is, and also think hard about how to put ethical boundaries on its future development and use. The future of warfare, for better or for worse, is literally at stake.

Retired Air Force Gen. Lori Robinson is a nonresident senior fellow on the Security and Strategy team in the Foreign Policy program at Brookings. She was commander of all air forces in the Pacific.

Venture Capitalist Tim Draper: Bitcoin, decentralization and artificial intelligence could transform global industries – FXStreet

According to the renowned billionaire venture capitalist Tim Draper, decentralization is revolutionizing currency systems around the world through the largest cryptocurrency by market capitalization, Bitcoin. Bitcoin, riding on decentralization, is converging with another technology that is also going to have a big impact: artificial intelligence. Draper added:

"Those technologies now have the ability to transform the biggest industries in the world. It is not just currency. It is banking and finance, insurance, real estate, healthcare, government. All those industries, all in the trillions of dollars, they are hugely valuable, have the potential to be transformed by these new technologies."

Putting his comments into perspective in a new episode of "415 Stories," Draper said that he could develop an insurance firm featuring an AI to detect fraud and, by utilizing smart contracts and Bitcoin, allow it to run on a blockchain.

If we use it correctly, artificial intelligence could help us fight the next epidemic – Genetic Literacy Project

It was an AI that first saw it coming, or so the story goes. On December 30, an artificial-intelligence company called BlueDot, which uses machine learning to monitor outbreaks of infectious diseases around the world, alerted clients, including various governments, hospitals, and businesses, to an unusual bump in pneumonia cases in Wuhan, China. It would be another nine days before the World Health Organization officially flagged what we've all come to know as Covid-19.
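Stripped to its statistical core, the "unusual bump" that triggers such an alert can be as simple as a count that sits far outside the recent baseline. The sketch below shows that core idea with invented numbers; BlueDot's production system draws on many more signals than case counts alone.

```python
# Flag a daily case count that is an outlier against recent history (toy example).
daily_pneumonia_cases = [12, 9, 14, 11, 10, 13, 12, 11, 10, 41]  # invented data

baseline = daily_pneumonia_cases[:-1]
mean = sum(baseline) / len(baseline)
std = (sum((x - mean) ** 2 for x in baseline) / len(baseline)) ** 0.5
z = (daily_pneumonia_cases[-1] - mean) / std

if z > 3:  # more than three standard deviations above recent history
    print(f"ALERT: unusual bump in cases (z = {z:.1f})")
```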

That AI could spot an outbreak on the other side of the world is pretty amazing, and early warnings save lives. But how much has AI really helped in tackling the current outbreak?

The hype outstrips the reality. In fact, the narrative that has appeared in many news reports and breathless press releases, that AI is a powerful new weapon against diseases, is only partly true and risks becoming counterproductive. For example, too much confidence in AI's capabilities could lead to ill-informed decisions that funnel public money to unproven AI companies at the expense of proven interventions.

So here's a reality check: AI will not save us from the coronavirus, certainly not this time. But there's every chance it will play a bigger role in future epidemics, if we make some big changes.

IIT-M to reskill women in artificial intelligence – The Hindu

The Indian Institute of Technology-Madras is offering 150 hours of training to reskill women who have taken a break from their careers.

The certification course includes artificial intelligence, machine learning, cyber security, data science and big data.

The Career Back 2 Women is an initiative through the Institute's Digital Skills Academy. Candidates can choose the level of training.

The institute has tied up with Forensic Intelligence Surveillance and Security Technologies (FISST) to offer the programme.

IIT-M director Bhaskar Ramamurthi said, "In the IT field, the technology changes are so rapid that they [women who take a break] are unable to get back to their careers as their skills are probably outdated. Despite this, their industry experience and knowledge about IT are immense and can be useful to many IT companies if they can fit into current requirements immediately. IIT-Madras is happy to pioneer this programme to help them get back to work and retrieve their careers."

Women who complete the advanced module in select tracks would also receive assistance in job placement.

Digital Skills Academy, IIT-Madras, also plans to offer more courses at various levels for students and working professionals in association with NASSCOM and in partnership with training companies incubated at IIT Madras Research Park and industry partners.

K. Mangala Sunder, Head, Digital Skills Academy, said, "IIT-M works with NASSCOM IT-ITeS Sector Skill Council to ensure that right industry partners are involved in training. Faculty from premier institutions provide fundamental knowledge to all learners."

According to C. Mohan Ram, Chief Mission Integrator and Innovator, FISST, all participants will take a 20-hour programme, after which they can choose their area of specialisation. There are four tracks offered initially. Each track has basic and advanced modules.

Apple seeks patent on display-blurring technology using face detection and gaze tracking – Biometric Update

A recent patent application by Apple describes a system using face recognition and eye-tracking to prevent privacy invasion by people looking over the shoulder of an electronic device user.

The application for "Gaze-Dependent Display Encryption," published by the United States Patent and Trademark Office (USPTO), was filed in September 2019. It describes a system for obscuring images and other screen contents that are not being looked at in the moment by the rightful user. This means that an iPhone, iPad or Apple Watch could leave only a certain region of a display clear, and the patent suggests this can be done without disrupting the user's visual experience, by using visual encryption which retains the overall look and structure of that region.

"(A) single shift in ASCII codes can be used to hide the meaning of text content without changing the shape or the white space of the displayed text," the eight California-based researchers listed as inventors write in the application. Within-word shuffling of letters is another way of obscuring text that is not being read by the device owner without disrupting the overall look of the page. Color altering and image warping are also mentioned as possible techniques for visual encryption.
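To see why a shifted or shuffled page can still look right at a glance, consider the toy Python versions below of the two text tricks quoted above. These illustrate the idea only and are not Apple's implementation (note the naive shift shown here would run off the end of the alphabet for letters like "z").

```python
# Toy text-obscuring tricks: shift letter codes, or shuffle letters within words.
import random

def ascii_shift(text, shift=1):
    # Word shapes and white space survive, but the meaning does not.
    return "".join(chr(ord(c) + shift) if c.isalpha() else c for c in text)

def shuffle_within_words(text, rng=random.Random(42)):
    # Keep each word's first and last letters and its length; scramble the middle.
    def scramble(word):
        if len(word) < 4:
            return word
        middle = list(word[1:-1])
        rng.shuffle(middle)
        return word[0] + "".join(middle) + word[-1]
    return " ".join(scramble(w) for w in text.split(" "))

print(ascii_shift("meet me at noon"))            # nffu nf bu oppo
print(shuffle_within_words("confidential quarterly figures"))
```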

The primary user could be identified or authenticated with facial recognition, and display areas visually encrypted based on tracking the gaze location of the legitimate user, and possibly of onlookers.

Peripheral vision is limited compared to central vision, the researchers note, and visual clutter can impede human recognition, which enables the use of other techniques such as metamers for visual encryption. Metamers are visual stimuli that are perceptually indistinguishable, even though they are physically different from another stimulus present. The researchers note that metamers are challenging for rapidly changing display frames, such as in video.

The technology may also never see the light of day, as Apple applies for patents on many technologies that are not necessarily in development for commercial production. The company had a filing for biometric authentication to unlock multiple devices published by the USPTO in February.

With launch of COVID-19 data hub, the White House issues a call to action for AI researchers – TechCrunch

In a briefing on Monday, research leaders across tech, academia and the government joined the White House to announce an open data set full of scientific literature on the novel coronavirus. The COVID-19 Open Research Dataset, known as CORD-19, will also add relevant new research moving forward, compiling it into one centralized hub. The new data set is machine readable, making it easily parsed for machine learning purposes, a key advantage according to researchers involved in the ambitious project.

In a press conference, U.S. CTO Michael Kratsios called the new data set "the most extensive collection of machine readable coronavirus literature to date." Kratsios characterized the project as a "call to action" for the AI community, which can employ machine learning techniques to surface unique insights in the body of data. To provide guidance for researchers combing through the data, the National Academies of Sciences, Engineering, and Medicine collaborated with the World Health Organization to come up with high priority questions about the coronavirus related to genetics, incubation, treatment, symptoms and prevention.

The partnership, announced today by the White House Office of Science and Technology Policy, brings together the Chan Zuckerberg Initiative, Microsoft Research, the Allen Institute for Artificial Intelligence, the National Institutes of Health's National Library of Medicine, Georgetown University's Center for Security and Emerging Technology, Cold Spring Harbor Laboratory and the Kaggle AI platform, owned by Google.

The database brings together nearly 30,000 scientific articles about the virus known as SARS-CoV-2, as well as related viruses in the broader coronavirus group. Around half of those articles make the full text available. Critically, the database will include pre-publication research from resources like medRxiv and bioRxiv, open access archives for pre-print health sciences and biology research.
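For researchers taking up the call to action, getting started can be as simple as loading the data set's metadata file. The sketch below assumes the metadata.csv file distributed with CORD-19 and its 'title' and 'abstract' columns; column names may vary between releases, so check the version you download.

```python
# First pass over the CORD-19 metadata: count articles, then do a crude
# keyword triage before any heavier machine learning is applied.
import pandas as pd

meta = pd.read_csv("metadata.csv", low_memory=False)  # ships with the data set
print(len(meta), "articles indexed")

mask = meta["abstract"].fillna("").str.contains("incubation period", case=False)
print(meta.loc[mask, "title"].head(10))
```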

"Sharing vital information across scientific and medical communities is key to accelerating our ability to respond to the coronavirus pandemic," Chan Zuckerberg Initiative Head of Science Cori Bargmann said of the project.

The Chan Zuckerberg Initiative hopes that the global machine learning community will be able to help the science community connect the dots on some of the enduring mysteries about the novel coronavirus as scientists pursue knowledge around prevention, treatment and a vaccine.

For updates to the CORD-19 data set, the Chan Zuckerberg Initiative will track new research on a dedicated page on Meta, the research search engine the organization acquired in 2017.

The CORD-19 data set announcement is certain to roll out more smoothly than the White House's last attempt at a coronavirus-related partnership with the tech industry. The White House came under criticism last week for President Trump's announcement that Google would build a dedicated website for COVID-19 screening. In fact, the site was in development by Verily, Alphabet's life science research group, and intended to serve California residents, beginning with San Mateo and Santa Clara County. (Alphabet is the parent company of Google.)

The site, now live, offers risk screening through an online questionnaire to direct high-risk individuals toward local mobile testing sites. At this time, the project has no plans for a nationwide rollout.

Google later clarified that the company is undertaking its own efforts to bring crucial COVID-19 information to users across its products, but that may have become conflated with Verily's much more limited screening site rollout. On Twitter, Google's comms team noted that Google is indeed working with the government on a website, but not one intended to screen potential COVID-19 patients or refer them to local testing sites.

In a partial clarification over the weekend, Vice President Pence, one of the Trump administration's designated point people on the pandemic, indicated that the White House is working with Google but also with many other tech companies. It's not clear if that means a central site will indeed launch soon out of a White House collaboration with Silicon Valley, but Pence hinted that might be the case. Whether that centralized site will handle screening and testing-location referral is also not clear.

"Our best estimate is that some point early in the week we will have a website that goes up," Pence said.

AI and machine learning algorithms have made aptitude tests more accurate. Here’s how – EdexLive

The rapid advancements of technologies within the spheres of communication and education have enriched and streamlined career counselling services across the globe. One area that has gone from strength to strength is psychometric assessment. As a career coach, one is now able to gain profound insights into their clients' personalities. The most advanced psychometric assessments are able to map test takers across numerous dimensions, such as intellectual quotient, emotional quotient, and orientation style, just to name a few.

Powered by Artificial Intelligence and Machine Learning algorithms, psychometric and aptitude tests are now able to accurately gauge test takers' aptitudes and subsequently generate result reports that enable career counsellors to identify the best-suited career trajectories for their clients.

Technology has allowed professionals in the domain of career counselling to expand their horizons, reach larger audiences and find new ways to connect with their clients.

Don't let scepticism bog you down

With Artificial Intelligence and Machine Learning continuing to influence career counselling services, one may ponder the requirement for human intervention in the highly automated process. Are we required to partake in the process? Is our input important? Such questions might bother you. The simple answer to such nagging questions is YES!

Given the might of AI and ML, it is natural to grow sceptical about the nature of career counselling. However, be mindful that you, and only you, have the unique ability to empathize with other individuals. This is what gives us the upper hand over machines when it comes to counselling. Having said that, the intersection of advanced technologies and human thought is where career counselling thrives.

The best of both worlds

Leveraging this synergy, Mindler, an EdTech startup headquartered in New Delhi, is revolutionizing career counselling services and empowering individuals to enter this fulfilling line of work.

Their proprietary psychometric and aptitude assessment, which maps students across 56 dimensions and is being hailed as India's most advanced psychometric assessment, coupled with interactive career counselling sessions convened by eminent career coaches, makes for a nourishing package that guides students to their ideal careers. In a nutshell, Mindler has identified a sweet spot that harnesses powerful technologies and synthesizes that with expert advice from seasoned career counsellors. Therefore, the startup is ahead of its time and promises a bright future for the young learners of this nation.

(Eesha Bagga is the Director (Partnerships & Alliances) of Mindler, a career guidance and mapping platform)

Qeexo is making machine learning accessible to all – Stacey on IoT

A still from a Qeexo demonstration video for package monitoring.

Every now and then I see technology that's so impressive, I can't wait to write about it, even if no one else finds it cool. I had that experience last week while watching a demonstration of a machine learning platform built by Qeexo. In the demo, I watched CEO and Co-Founder Sang Won Lee spend roughly five minutes teaching Qeexo's AutoML software to distinguish between the gestures associated with playing the drums and playing a violin.

The technology is designed to take data from existing sensors, synthesize the information in the cloud, and then spit out a machine learning model that could run on a low-end microcontroller. It could enable normal developers to train some types of machine learning models quickly and then deploy them in the real world.

The demonstration consisted of the Qeexo software running on a laptop, an STMicroelectronics SensorTile.box acting as the sensor to gather the accelerometer and gyroscope data and send it to the computer, and Lee holding the SensorTile and playing the air drums or air violin. First, Lee left the sensor on the table to get background data, and saved that to the Qeexo software. Then he played the drums for 20 seconds to teach the software what that motion looked like, and saved that. Finally, he played the violin for 20 seconds to let the software learn that motion and saved that.

After a little bit of processing, the models were ready to test. (Lee turned off a few software settings that would result in better models for the sake of time, but noted that in a real-world setting these would add about 30 minutes to the learning process.) I watched as the model easily switched back and forth, identifying Lee's drumming hands or violin movements instantly.

When he stopped, the software identified the background setting. It's unclear how much subtlety the platform is capable of (drumming is very different from playing an imaginary violin), but even at relatively blunt settings, the opportunities for Qeexo are clear. You could use the technology to teach software to turn on a light with a series of knocks, as Qeexo did in this video. You could use it to train a device to recognize different gestures (Lee says the company is in talks with a toy company to create a personal wand for which people could build customized gestures to control items in their home). And in industrial settings, it could be used for anomaly detection developed in-house, which would be especially useful for older machines or in companies where data scientists are hard to find. Lee says that while Qeexo has raised $4.5 million in funding so far, it is already profitable from working with clients, so it's clear there is real demand for the platform.
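A rough approximation of the demo's train-then-classify flow might look like the Python sketch below, with invented accelerometer windows standing in for the SensorTile stream. Qeexo's actual AutoML pipeline is proprietary and certainly more sophisticated; this only shows the general shape of the task.

```python
# Train a gesture classifier on summary statistics of fake accelerometer windows.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def featurize(window):
    # Per-axis mean, spread and jerkiness over a window of (samples, 3) accel data
    return np.concatenate([window.mean(axis=0), window.std(axis=0),
                           np.abs(np.diff(window, axis=0)).mean(axis=0)])

rng = np.random.default_rng(1)
def fake_windows(scale, n=60):  # stand-ins for background / drumming / bowing motion
    return [rng.normal(0, scale, size=(50, 3)) for _ in range(n)]

X = [featurize(w) for scale in (0.05, 1.0, 0.4) for w in fake_windows(scale)]
y = [label for label in ("background", "drums", "violin") for _ in range(60)]

model = RandomForestClassifier(n_estimators=50).fit(X, y)
print(model.predict([featurize(rng.normal(0, 1.0, size=(50, 3)))]))  # likely 'drums'
```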

The company started out trying to provide machine learning for companies, but quickly realized that the way it was trying to solve client problems wasn't scalable, so it transitioned to building a platform that could learn. It has been active since 2016, providing software that tracks various types of finger touch on phone screens for Huawei. One of its competitive advantages is that the software takes what it learns and recompiles the Python code generated by the original models into C code, which is smaller and can run on constrained devices.
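The recompilation step can be pictured as freezing the learned parameters and emitting them as C source that a microcontroller toolchain compiles in. The generator below is a guess at the general shape of such a tool for a tiny logistic-regression model, not Qeexo's actual code generator.

```python
# Emit a learned model as a self-contained C function for a constrained device.
def export_logistic_model_to_c(weights, bias):
    n = len(weights)
    return "\n".join([
        "#include <math.h>",
        f"static const float W[{n}] = {{{', '.join(f'{w:.6f}f' for w in weights)}}};",
        f"static const float BIAS = {bias:.6f}f;",
        "float predict(const float *x) {",
        "    float z = BIAS;",
        f"    for (int i = 0; i < {n}; i++) z += W[i] * x[i];",
        "    return 1.0f / (1.0f + expf(-z));",
        "}",
    ])

print(export_logistic_model_to_c([0.82, -1.4, 0.07], bias=0.3))
```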

Lee says the models are designed to run on chips that have as little as 100 kilobytes of memory. Today those chips are only handling inference, or actually matching behavior against an existing model on the chip, but Lee says that the plan is to offer training on the chip itself later this year.

That's a pretty significant claim, as it would allow someone to place the software on a device and do away with sending data to the cloud, which reduces the need for connectivity and helps boost privacy. For the last few years, it has been the holy grail of machine learning at the edge, but so far it hasn't been done. It will be, though, and we'll see if Qeexo is the one that will make it happen.

Hey, Sparky: Confused by data science governance and security in the cloud? Databricks promises to ease machine learning pipelines – The Register

Databricks, the company behind analytics tool Spark, is introducing new features to ease the management of security, governance and administration of its machine learning platform.

Security and data access rights have been fragmented between on-premises data, cloud instances and data platforms, Databricks told us. And the new approach allows tech teams to manage policies from a single environment and have them replicated in the cloud, it added.

David Meyer, senior veep of product management at Databricks, said:

"Cloud companies have inherent native security controls, but it can be a very confusing journey for these customers moving from an on-premise[s] world where they have their own governance in place, controlling who has access to what, and then they move this up to the cloud and suddenly all the rules are different."

The idea behind the new features is to allow users to employ the controls they are familiar with, for example, Active Directory to control data policies in Databricks. The firm then pushes those controls out into the cloud, he said.

The new features include user-owned revocable data encryption keys and customised private networks run in cloud clusters, allowing companies to tailor the security services to their enterprise and compliance requirements.

To ease administration, users can audit and analyse all the activity in their account, and set policies to administer users, control budget and manage infrastructure.

Meanwhile, the new features allow customers to deploy analytics and machine learning by offering APIs for everything from user management, workspace provisioning, and cluster policies to application and infrastructure monitoring, allowing data ops teams to automate the whole data and machine learning lifecycle, according to Databricks.
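As an illustration of what "programmatically" means here, the sketch below creates a cluster policy over the Databricks REST API. The endpoint and policy fields follow the public cluster-policies API as we understand it, but the workspace URL and token are placeholders, and readers should check the current API reference rather than treat this as definitive.

```python
# Create a Databricks cluster policy via the REST API (illustrative sketch).
import json
import requests

HOST = "https://example.cloud.databricks.com"  # placeholder workspace URL
TOKEN = "dapi-REDACTED"                        # placeholder personal access token

policy = {
    # Pin allowed instance types and cap autoscaling so budgets are enforced
    "node_type_id": {"type": "allowlist", "values": ["m5.large", "m5.xlarge"]},
    "autoscale.max_workers": {"type": "range", "maxValue": 10},
}

resp = requests.post(
    f"{HOST}/api/2.0/policies/clusters/create",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"name": "standard-team-policy", "definition": json.dumps(policy)},
)
print(resp.status_code, resp.json())
```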

Meyer added: "All the rules of the workspaces have to be done programmatically because that's the only way you can run things at scale in an organisation."

Databricks is currently available on AWS and Azure, and although plans are in place to launch on Google Cloud Platform, "it was a question of timing," the exec added.

Dutch ecommerce and banking group Wehkamp has been using Databricks since 2016. In the last two years it has introduced a training programme to help users from across the business - from IT operations to marketing - do their own machine learning projects on Spark.

The new security and governance feature will help in support of such a large volume of users without creating a commensurate administration burden, said Tom Mulder, lead data scientist at Wehkamp. "We introduced a new strategy which was about teaching data science to everybody in the company which actually means we have about 400 active users and 600 jobs running in Databricks," Mulder said.

Examples of use cases include onboarding products for resale, by using natural language processing to help the retailer parse data from suppliers into its own product management system, avoiding onerous re-keying and saving time.
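In its simplest form, that kind of parsing means lifting structured attributes out of free-text supplier descriptions. The toy sketch below uses regular expressions and invented fields purely to illustrate the idea; Wehkamp's production pipeline runs NLP models on Spark at a much larger scale.

```python
# Pull structured product attributes out of a free-form supplier description.
import re

description = "Levi's 501 jeans, dark blue, size W32/L34, 100% cotton"

patterns = {
    "color": r"\b(dark blue|black|white|red)\b",
    "size": r"\bW\d{2}/L\d{2}\b",
    "material": r"\b\d{1,3}% \w+\b",
}
record = {field: m.group(0)
          for field, pat in patterns.items()
          if (m := re.search(pat, description))}
print(record)  # {'color': 'dark blue', 'size': 'W32/L34', 'material': '100% cotton'}
```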

Mulder said he was looking forward to the new security and governance features helping to manage such a wide pool of users. "The way Databricks is working to introduce the enterprise features and all the management tools, that will help a lot."

Managing data and users in a secure way, which complies with company policy and regulations, is a challenge as data science scales up from a back-room activity led by a handful of data scientists to something in which a broader community of users can participate. Databricks is hoping its new features addressing data governance and security will ease punters along that path.

Insights into the E-Commerce Fraud Detection Solutions Market Overview – Machine Learning Tools Have Significantly Changed the Way Fraud is Detected -…

DUBLIN--(BUSINESS WIRE)--The "E-Commerce Fraud Detection Solutions: Market Overview" report has been added to ResearchAndMarkets.com's offering.

This report provides a foundational framework for evaluating fraud detection technologies in two categories. The first category includes 18 suppliers that have been identified as implementing more traditional systems that monitor e-commerce websites and payments, evaluating shopping, purchasing, shipping, payments, and disputes to detect fraud.

The second category includes 37 service providers that the publisher has identified as specializing in identity and authentication, often utilizing biometrics as well as behavioral biometric data collected across multiple websites, to establish risk scores and to detect account takeover attempts and bots. Note, however, that companies in both of these categories are adopting new technologies and their solutions are undergoing rapid change.

Machine learning tools have significantly changed the way fraud is detected. As machine learning technology advances at a dizzying rate, so do the models that fraud detection platforms deploy to recognize fraud. These models can now monitor and learn from activity across multiple sites operating the same platform, or even from data received directly from the payment networks.

This ability to model and detect fraud activity across multiple merchants, multiple geographies, and from the payment networks enables improved detection and inoculation from new types of fraud attack as soon as they are discovered. What is more important is that this technology starts to connect identity, authentication, behavior, and payments in ways never possible before.
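The cross-merchant idea reduces to training one model on pooled transactions from many merchants, so that a fraud pattern learned at one storefront protects the others. The sketch below invents all fields and data purely to show the shape of such a model; commercial platforms use far richer signals.

```python
# One fraud model over pooled multi-merchant transactions (synthetic data).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(7)
n = 3000
X = np.column_stack([
    rng.exponential(80, n),     # order amount
    rng.integers(0, 2000, n),   # account age in days
    rng.integers(0, 5, n),      # which of five merchants placed the order
    rng.integers(0, 2, n),      # shipping/billing country mismatch flag
])
# Synthetic ground truth: brand-new accounts with a geo mismatch are risky
fraud = ((X[:, 1] < 30) & (X[:, 3] == 1) & (rng.random(n) < 0.8)).astype(int)

model = GradientBoostingClassifier().fit(X[:2400], fraud[:2400])
scores = model.predict_proba(X[2400:])[:, 1]  # risk scores for held-out orders
print("orders flagged:", int((scores > 0.5).sum()), "of", len(scores))
```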

"E-commerce fraud rates continue to increase at a rapid rate, with synthetic fraud growing faster than other fraud types. It is time for merchants to reevaluate the tools they currently deploy to prevent fraud," commented Steve Murphy, Director, Commercial and Enterprise Payments Advisory Service, co-author of the report.

Highlights of the report include:

Key Topics Covered:

1. Introduction

2. Determining the Cost of Fraud

3. The Business of Fraud

4. A Framework for Evaluating E-Commerce Fraud Detection Solutions

5. Selecting the Appropriate Tools

6. The Fraud Prevention Landscape

7. Conclusions

8. References

Companies Mentioned

For more information about this report visit https://www.researchandmarkets.com/r/th2kms

Research and Markets also offers Custom Research services providing focused, comprehensive and tailored research.
