Chilmark Research: The Promise of AI & ML in Healthcare Report – HIT Consultant

What You Need to Know:

A new Chilmark Research report reveals that artificial intelligence and machine learning (AI/ML) technologies are capturing the imagination of investors and healthcare organizations and are poised to expand healthcare frontiers.

The latest report evaluates over 120 commercial AI/ML solutions in healthcare, explores future opportunities, and assesses obstacles to adoption at scale.

Interest and investment in healthcare AI/ML tools is booming, with approximately $4B in capital funding pouring into this healthcare sector in 2019. Such investment is spurring a vast array of AI/ML tools for providers, patients, and payers, accelerating the possibilities for new solutions to improve diagnostic accuracy, improve feedback mechanisms, and reduce clinical and administrative errors, according to Chilmark Research's latest report.

The Promise of AI & ML in Healthcare Report Background

The report, The Promise of AI & ML in Healthcare, is the most comprehensive report published on this rapidly evolving market, with nearly 120 vendors profiled. The report explores opportunities, trends, and the rapidly evolving landscape for vendors, tracing the evolution from early AI/ML use in medical imaging to today's rich array of vendor solutions in medical imaging, business operations, clinical decision support, research and drug development, patient-facing applications, and more. The report also reviews types and applications of AI/ML, explores the substantial challenges of health data collection and use, and considers issues of bias in algorithms, ethical and governance considerations, cybersecurity, and broader implications for business.

Health IT vendors, new start-up ventures, providers, payers, and pharma firms now offer (or are developing) a wide range of solutions for an equally wide range of industry challenges. Our extensive research for this report found that nearly 120 companies now offer AI-based healthcare solutions in four main categories: hospital operations, clinical support, research and drug development, and patient/consumer engagement.

Report Key Themes

This report features an overview of these major areas of AI/ML use in healthcare. Solutions for hospital operations include tools for revenue cycle management, applications to detect fraud and ensure payment integrity, administrative and supply chain applications to improve hospital operations, and algorithms to boost patient safety. Population health management is an area ripe for AI/ML innovation, with predictive analytics solutions devoted to risk stratification, care management, and patient engagement.

A significant development is underway in AI/ML solutions for clinical decision support, including NLP- and voice-enabled clinical documentation applications, sophisticated AI-based medical imaging and pathology tools, and electronic health records management tools to mitigate provider burnout. AI/ML-enabled tools are optimizing research and drug development by improving clinical trials and patient monitoring, modeling drug simulations, and enabling precision medicine advancement. A wealth of consumer-facing AI/ML applications, such as chatbots, wearables, and symptom checkers, are available and in development.

Provider organizations will find this report offers deep insight into current and forthcoming solutions that can help support business operations, population health management, and clinical decision support. Current and prospective vendors of AI/ML solutions and their investors will find this report's overview of the current market valuable in mapping their own product strategy. Researchers and drug developers will benefit from the discussion of current AI/ML applications and future possibilities in precision medicine, clinical trials, drug discovery, and basic research. Providers and patient advocates will gain valuable insight into patient-facing tools currently available and in development.

All stakeholders in healthcare technology, including providers, payers, pharmaceutical stakeholders, consultants, investors, patient advocates, and government representatives, will benefit from a thorough overview of current offerings as well as thoughtful discussions of bias in data collection and underlying algorithms, cybersecurity, governance, and ethical concerns.

For more information about the report, please visit https://www.chilmarkresearch.com/chilmark_report/the-promise-of-ai-and-ml-in-healthcare-opportunities-challenges-and-vendor-landscape/

See the original post here:
Chilmark Research: The Promise of AI & ML in Healthcare Report - HIT Consultant

Navigating the New Landscape of AI Platforms – Harvard Business Review

Executive Summary

What only insiders generally know is that data scientists, once hired, spend more time building and maintaining the tooling for AI systems than they do building the AI systems themselves. Now, though, new tools are emerging to ease the entry into this era of technological innovation. Unified platforms that bring the work of collecting, labelling, and feeding data into supervised learning models, or that help build the models themselves, promise to standardize workflows in the way that Salesforce and Hubspot have for managing customer relationships. Some of these platforms automate complex tasks using integrated machine-learning algorithms, making the work easier still. This frees up data scientists to spend time building the actual structures they were hired to create, and puts AI within reach of even small- and medium-sized companies.

Nearly two years ago, Seattle Sport Sciences, a company that provides data to soccer club executives, coaches, trainers and players to improve training, made a hard turn into AI. It began developing a system that tracks ball physics and player movements from video feeds. To build it, the company needed to label millions of video frames to teach computer algorithms what to look for. It started out by hiring a small team to sit in front of computer screens, identifying players and balls on each frame. But it quickly realized that it needed a software platform in order to scale. Soon, its expensive data science team was spending most of its time building a platform to handle massive amounts of data.

These are heady days when every CEO can see or at least sense opportunities for machine-learning systems to transform their business. Nearly every company has processes suited for machine learning, which is really just a way of teaching computers to recognize patterns and make decisions based on those patterns, often faster and more accurately than humans. Is that a dog on the road in front of me? Apply the brakes. Is that a tumor on that X-ray? Alert the doctor. Is that a weed in the field? Spray it with herbicide.
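At its core, that pattern-then-decision loop can be sketched in a few lines: train a classifier on labeled examples, then map its prediction to an action. The features, labels, and action names below are made-up placeholders for illustration, not anything from the article.

```python
# Toy "recognize a pattern, then act on it" loop (all values are made up).
from sklearn.ensemble import RandomForestClassifier

# Each row is a feature vector describing a scene; each label is a situation.
X_train = [[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8]]
y_train = ["obstacle", "obstacle", "clear", "clear"]

model = RandomForestClassifier(n_estimators=10, random_state=0).fit(X_train, y_train)

def decide(features):
    """Map the recognized pattern to an action, as in 'dog on the road -> brake'."""
    label = model.predict([features])[0]
    return "apply_brakes" if label == "obstacle" else "continue"

print(decide([0.85, 0.15]))  # expected: apply_brakes
```

A production system would of course work on images or sensor streams rather than two hand-picked numbers, but the loop is the same: recognize, then act.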

What only insiders generally know is that data scientists, once hired, spend more time building and maintaining the tools for AI systems than they do building the systems themselves. A recent survey of 500 companies by the firm Algorithmia found that expensive teams spend less than a quarter of their time training and iterating machine-learning models, which is their primary job function.

Now, though, new tools are emerging to ease the entry into this era of technological innovation. Unified platforms that bring the work of collecting, labelling and feeding data into supervised learning models, or that help build the models themselves, promise to standardize workflows in the way that Salesforce and Hubspot have for managing customer relationships. Some of these platforms automate complex tasks using integrated machine-learning algorithms, making the work easier still. This frees up data scientists to spend time building the actual structures they were hired to create, and puts AI within reach of even small- and medium-sized companies, like Seattle Sports Science.

Frustrated that its data science team was spinning its wheels, Seattle Sports Sciences' AI architect John Milton finally found a commercial solution that did the job. "I wish I had realized that we needed those tools," said Milton. He hadn't factored the infrastructure into the original budget, and having to go back to senior management and ask for it wasn't a pleasant experience for anyone.

The AI giants, Google, Amazon, Microsoft and Apple, among others, have steadily released tools to the public, many of them free, including vast libraries of code that engineers can compile into deep-learning models. Facebook's powerful object-recognition tool, Detectron, has become one of the most widely adopted open-source projects since its release in 2018. But using those tools can still be a challenge, because they don't necessarily work together. This means data science teams have to build connections between each tool to get them to do the job a company needs.

The newest leap on the horizon addresses this pain point. New platforms are now allowing engineers to plug in components without worrying about the connections.

For example, Determined AI and Paperspace sell platforms for managing the machine-learning workflow. Determined AI's platform includes automated elements to help data scientists find the best architecture for neural networks, while Paperspace comes with access to dedicated GPUs in the cloud.

"If companies don't have access to a unified platform, they're saying, 'Here's this open source thing that does hyperparameter tuning. Here's this other thing that does distributed training,' and they are literally gluing them all together," said Evan Sparks, cofounder of Determined AI. "The way they're doing it is really with duct tape."

Labelbox is a training data platform, or TDP, for managing the labeling of data so that data science teams can work efficiently with annotation teams across the globe. (The author of this article is the company's co-founder.) It gives companies the ability to track their data, spot and fix bias in the data, and optimize the quality of their training data before feeding it into their machine-learning models.

It's the solution that Seattle Sports Sciences uses. John Deere uses the platform to label images of individual plants, so that smart tractors can spot weeds and deliver pesticide precisely, saving money and sparing the environment unnecessary chemicals.

Meanwhile, companies no longer need to hire experienced researchers to write machine-learning algorithms, the steam engines of today. They can find them for free or license them from companies who have solved similar problems before.

Algorithmia, which helps companies deploy, serve and scale their machine-learning models, operates an algorithm marketplace so data science teams don't duplicate other people's efforts by building their own. Users can search through the 7,000 different algorithms on the company's platform and license one or upload their own.

Companies can even buy complete off-the-shelf deep learning models ready for implementation.

Fritz.ai, for example, offers a number of pre-trained models that can detect objects in videos or transfer artwork styles from one image to another, all of which run locally on mobile devices. The company's premium services include creating custom models and more automation features for managing and tweaking models.

And while companies can use a TDP to label training data, they can also find pre-labeled datasets, many for free, that are general enough to solve many problems.

Soon, companies will even offer machine-learning as a service: Customers will simply upload data and an objective and be able to access a trained model through an API.
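A client interaction with such a service might look something like the sketch below. The endpoint, payload fields, and workflow are hypothetical, invented purely to illustrate the upload-data-and-objective pattern; they are not any real vendor's API.

```python
# Hypothetical machine-learning-as-a-service client. The endpoint and payload
# fields are invented for illustration; no real provider's API is implied.
import requests

API = "https://ml-service.example.com/v1"  # placeholder URL

# 1. Point the service at the data and state the objective.
job = requests.post(f"{API}/jobs", json={
    "objective": "classify",                      # what should be predicted
    "target_column": "churned",
    "data_url": "s3://my-bucket/customers.csv",   # placeholder data location
}).json()

# 2. Once training finishes, query the hosted model through the same API.
prediction = requests.post(
    f"{API}/models/{job['model_id']}/predict",
    json={"rows": [{"age": 34, "plan": "basic"}]},
).json()
print(prediction)
```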

In the late 18th century, Maudslay's lathe led to standardized screw threads and, in turn, to interchangeable parts, which spread the industrial revolution far and wide. Machine-learning tools will do the same for AI, and, as a result of these advances, companies are able to implement machine learning with fewer data scientists and less senior data science teams. That's important given the looming machine-learning human-resources crunch: According to a 2019 Dun & Bradstreet report, 40 percent of respondents from Forbes Global 2000 organizations say they are adding more AI-related jobs. And the number of AI-related job listings on the recruitment portal Indeed.com jumped 29 percent from May 2018 to May 2019. Most of that demand is for supervised-learning engineers.

But C-suite executives need to understand the need for those tools and budget accordingly. Just as Seattle Sports Sciences learned, it's better to familiarize yourself with the full machine-learning workflow and identify necessary tooling before embarking on a project.

That tooling can be expensive, whether the decision is to build or to buy. As is often the case with key business infrastructure, there are hidden costs to building. Buying a solution might look more expensive up front, but it is often cheaper in the long run.

Once you've identified the necessary infrastructure, survey the market to see what solutions are out there and build the cost of that infrastructure into your budget. Don't fall for a hard sell. The industry is young, both in terms of the time that it's been around and the age of its entrepreneurs. The ones who are in it out of passion are idealistic and mission-driven. They believe they are democratizing an incredibly powerful new technology.

The AI tooling industry is facing more than enough demand. If you sense someone is chasing dollars, be wary. The serious players are eager to share their knowledge and help guide business leaders toward success. Successes benefit everyone.

See the rest here:
Navigating the New Landscape of AI Platforms - Harvard Business Review

How businesses and governments should embrace AI and Machine Learning – TechCabal

Leadership team of credit-as-a-service startup Migo, one of a growing number of businesses using AI to create consumer-facing products.

The ability to make good decisions is literally the reason people trust you with responsibilities. Whether you work for a government or lead a team at a private company, your decision-making process will affect lives in very real ways.

Organisations often make poor decisions because they fail to learn from the past. Wherever a data-collection reluctance exists, there is a fair chance that mistakes will be repeated. Bad policy goals will often be a consequence of faulty evidentiary support, a failure to sufficiently look ahead by not sufficiently looking back.

But as Daniel Kahneman, author of Thinking, Fast and Slow, says:

"The idea that the future is unpredictable is undermined every day by the ease with which the past is explained."

If governments and business leaders are to live up to their responsibilities, enthusiastically embracing methodical decision-making tools should be a no-brainer.

Mass media representations project artificial intelligence in futuristic, geeky terms. But nothing could be further from the truth.

While it is indeed scientific, AI can be applied in practical everyday life today. Basic interactions with AI include algorithms that recommend articles to you, friend suggestions on social media and smart voice assistants like Alexa and Siri.

In the same way, government agencies can integrate AI into regular processes necessary for society to function properly.

Managing money is an easy example to begin with. AI systems can be used to streamline data points required during budget preparations and other fiscal processes. Based on data collected from previous fiscal cycles, government agencies could reasonably forecast needs and expectations for future years.
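As a toy illustration of that kind of forecasting, a simple trend model fit to a handful of past fiscal cycles can project the next one. The figures below are invented for illustration; a real agency would work from its own historical budget data.

```python
# Toy illustration: project next year's spending need from past fiscal cycles
# with a simple linear trend (all figures are invented).
import numpy as np
from sklearn.linear_model import LinearRegression

years = np.array([[2015], [2016], [2017], [2018], [2019]])
spend = np.array([120.0, 131.0, 138.0, 150.0, 161.0])  # e.g. billions, local currency

model = LinearRegression().fit(years, spend)
print("2020 forecast:", round(float(model.predict([[2020]])[0]), 1))
```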

With their large troves of citizen data, governments could employ AI to effectively reduce inequalities in outcomes and opportunities. Big Data gives a bird's-eye view of the population, providing adequate tools for equitably distributing essential infrastructure.

Perhaps a more futuristic example is in drafting legislation. Though a young discipline, legimatics includes the use of artificial intelligence in legal and legislative problem-solving.

Democracies like Nigeria consider public input a crucial aspect of desirable law-making. While AI cannot yet be relied on to draft legislation without human involvement, an AI-based approach can produce tools for specific parts of legislative drafting or decision support systems for the application of legislation.

In Africa, businesses are already ahead of most governments in AI adoption. Credit scoring based on customer data has become popular in the digital lending space.

However, there is more for businesses to explore with the predictive powers of AI. A particularly exciting prospect is the potential for new discoveries based on unstructured data.

Machine learning can broadly be split into two categories: supervised and unsupervised learning. With supervised learning, a data analyst sets goals based on the labels and known classifications of the dataset. The resulting insights are useful but do not produce the sort of new knowledge that comes from unsupervised learning processes.
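A minimal sketch of the distinction, using scikit-learn on a standard toy dataset: the supervised model is given labels to predict, while the unsupervised model groups the same points without any labels. The dataset and model choices here are illustrative assumptions.

```python
# Supervised vs. unsupervised learning on the same toy data (illustrative sketch).
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

X, y = load_iris(return_X_y=True)

# Supervised: the analyst supplies labels (y) and the model learns to predict them.
clf = LogisticRegression(max_iter=1000).fit(X, y)
print("supervised accuracy:", clf.score(X, y))

# Unsupervised: no labels are given; the algorithm discovers its own grouping,
# which may reveal structure the analyst did not define in advance.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print("cluster sizes:", [int((clusters == k).sum()) for k in range(3)])
```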

In essence, AI can be a medium for market-creating innovations based on previously unknown insight buried in massive caches of data.

Digital lending became a market opportunity in Africa thanks to growing smartphone availability. However, customer data had to be available too for algorithms to do their magic.

This is why it is desirable for more data-sharing systems to be normalised on the continent to generate new consumer products. Fintech sandboxes that bring the public and private sectors together aiming to achieve open data standards should therefore be encouraged.

Artificial intelligence, like other technologies, is neutral. It can be used for social good but also can be diverted for malicious purposes. For both governments and businesses, there must be circumspection and a commitment to use AI responsibly.

China is a cautionary tale. The Communist state currently employs an all-watching system of cameras to enforce round-the-clock citizen surveillance.

By algorithmically rating citizens on a so-called social credit score, China's ultra-invasive AI effectively precludes individual freedom, compelling her 1.3 billion people to live strictly by the Politburo's ideas of ideal citizenship.

On the other hand, businesses must be ethical in providing transparency to customers about how data is harvested to create products. At the core of all exchange must be trust, and a verifiable, measurable commitment to do no harm.

Doing otherwise condemns modern society to those dystopian days everybody dreads.

How can businesses and governments use Artificial Intelligence to find solutions to challenges facing the continent? Join entrepreneurs, innovators, investors and policymakers in Africa's AI community at TechCabal's emerging tech townhall. At the event, stakeholders including telcos and financial institutions will examine how businesses, individuals and countries across the continent can maximize the benefits of emerging technologies, specifically AI and Blockchain. Learn more about the event and get tickets here.

Continue reading here:
How businesses and governments should embrace AI and Machine Learning - TechCabal

Cisco Enhances IoT Platform with 5G Readiness and Machine Learning – The Fast Mode

Cisco on Friday announced advancements to its IoT portfolio that enable service provider partners to offer optimized management of cellular IoT environments and new 5G use-cases.

Cisco IoT Control Center (formerly Jasper Control Center) is introducing new innovations to improve management and reduce deployment complexity. These include:

Using Machine Learning (ML) to improve management: With visibility into 3 billion events every day, Cisco IoT Control Center uses the industry's broadest visibility to enable machine learning models to quickly identify anomalies and address issues before they impact a customer. Service providers can also identify and alert customers of errant devices, allowing for greater endpoint security and control.

Smart billing to optimize rate plans: Service providers can improve customer satisfaction by enabling Smart billing to automatically optimize rate plans. Policies can also be created to proactively send customer notifications should usage change or rate plans need to be updated, to help save enterprises money.

Support for global supply chains: SIM portability is an enterprise requirement to support complex supply chains spanning multiple service providers and geographies. It is time-consuming and requires integrations between many different service providers and vendors, driving up costs for both. Cisco IoT Control Center now provides eSIM as a service, enabling a true turnkey SIM portability solution to deliver fast, reliable, cost-effective SIM handoffs between service providers.

Cisco IoT Control Center has taken steps towards 5G readiness to incubate and promote high value 5G business use cases that customers can easily adopt.

Vikas Butaney, VP Product Management IoT, Cisco: "Cellular IoT deployments are accelerating across the connected cars, utilities and transportation industries, and with 5G and Wi-Fi 6 on the horizon, IoT adoption will grow even faster. Cisco is investing in connectivity management, IoT networking, IoT security, and edge computing to accelerate the adoption of IoT use-cases."

Go here to read the rest:
Cisco Enhances IoT Platform with 5G Readiness and Machine Learning - The Fast Mode

PhD in Machine Learning and Computer Vision for Smart Maintenance of Road Infrastructure job with NORWEGIAN UNIVERSITY OF SCIENCE & TECHNOLOGY…

About the project

The vision of the Norwegian Public Roads Administration (NPRA, Norw.: Statens vegvesen) is to contribute to national goals for the transportation system. These goals are safety, promoting added value in society, and promoting change towards lower global emissions. The road system in Norway is large and complex, and the geography of Norway raises a range of challenges with respect to maintenance, which will be given priority over large new road investments in the coming years.

The Norwegian National Transport Plan is aimed towards promoting mobility, traffic safety, climatic and environmental conditions. To ensure a high-quality road infrastructure it is important to choose effective maintenance actions within the areas of operations, maintenance and rehabilitation. In particular, the development of new technology and new digital concepts is essential to enable more efficient monitoring and analysis of road traffic and road network conditions.

There is a technological shift taking place towards a more digitalized society. This technological shift has the potential to contribute to the overall goals of safety, low emissions and increased resource efficiency. NTNU's vision is Knowledge for a Better World, and the university actively pursues this goal across education, research and innovation. In the area of transportation, NTNU conducts extensive activity in several relevant engineering fields connected to infrastructure, maintenance and digitalization.

NPRA has established a research and development project with the title "Smarter maintenance". This project on road maintenance and infrastructure will involve close cooperation between the areas of research expertise in civil, transport and structural engineering, technology, digitalization and maintenance, and economics. This cooperation is organized within three thematic areas: (1) Condition registration, data analysis and modelling; (2) Big data, artificial intelligence, strategic analysis and planning; and (3) Maintenance, social economics and innovation. There is both a substantial need and many opportunities for innovation in this research program, which will bring together 7 PhD candidates across several engineering and cognate fields. Together, they will seek to solve specific challenges connected to the maintenance of transportation infrastructure.

These positions will be grouped into research clusters that will ensure close cooperation between PhD-candidates, supervisors, NPRA-experts and master/bachelor students.

We are seeking motivated candidates to work in a multidisciplinary and innovative setting of national and international importance.

About the position

We have a vacancy for a PhD position at the Department of Computer Science. The work will be carried out in close collaboration with domain experts from the Norwegian Public Roads Administration (NPRA) and the position will be affiliated with the Norwegian Open AI Lab (NAIL).

The candidate will perform research on next-generation AI and computer vision methods related to maintenance of road infrastructure. Key research topics that will be investigated in this PhD project are:

The position reports to Professor Frank Lindseth.

Main duties and responsibilities

Qualification requirements

Essential requirements:

The PhD position's main objective is to qualify for work in research positions. The qualification requirement is completion of a master's degree or second degree (equivalent to 120 credits) with a strong academic background in Computer Science or equivalent education, with a grade of B or better in terms of NTNU's grading scale. Applicants with no letter grades from previous studies must have an equally good academic foundation. Applicants who are unable to meet these criteria may be considered only if they can document that they are particularly suitable candidates for education leading to a PhD degree. Key qualifications are:

Candidates completing their MSc degree in Spring 2020 are encouraged to apply. The position is also open as an integrated PhD for NTNU students starting the final year of their Master's degree in Autumn 2020.

Desirable qualifications:

The appointment is to be made in accordance with the regulations in force concerning State Employees and Civil Servants and national guidelines for appointment as PhD candidate, postdoctoral fellow and research assistant.

NTNU is committed to following evaluation criteria for research quality according to The San Francisco Declaration on Research Assessment - DORA.

Personal characteristics

In the evaluation of which candidate is best qualified, emphasis will be placed on education, experience and personal suitability, as well as motivation, in terms of the qualification requirements specified in the advertisement.

We offer

Salary and conditions

PhD candidates are remunerated in code 1017, normally at a gross salary from NOK 479 600 per year before tax. From the salary, 2% is deducted as a contribution to the Norwegian Public Service Pension Fund.

The period of employment is 3 years without required duties. Appointment to a PhD position requires admission to the PhD programme in Computer Science.

As a PhD candidate, you undertake to participate in an organized PhD programme during the employment period. A condition of appointment is that you are in fact qualified for admission to the PhD programme within three months.

The engagement is to be made in accordance with the regulations in force concerning State Employees and Civil Servants, and the acts relating to Control of the Export of Strategic Goods, Services and Technology. Candidates who by assessment of the application and attachments are seen to conflict with the criteria in the latter act will be prohibited from recruitment to NTNU. After the appointment you must assume that there may be changes in the area of work.

General information

A good work environment is characterized by diversity. We encourage qualified candidates to apply, regardless of their gender, functional capacity or cultural background. Under the Freedom of Information Act (offentleglova), information about the applicant may be made public even if the applicant has requested not to have their name entered on the list of applicants.

The national labour force must reflect the composition of the population to the greatest possible extent, and NTNU wants to increase the proportion of women in its scientific posts. Women are encouraged to apply. Furthermore, Trondheim offers great opportunities for education (including international schools) and possibilities to enjoy nature, culture and family life (http://trondheim.com/). Having a population of 200,000, Trondheim is a small city by international standards with low crime rates and little pollution. It also has easy access to a beautiful countryside with mountains and a dramatic coastline.

Questions about the position can be directed to Professor Frank Lindseth, phone number +47 928 09 372, e-mail frankl@ntnu.no.

The application must contain:

Publications and other academic works that the applicant would like to be considered in the evaluation must accompany the application. Joint works will be considered. If it is difficult to identify the individual applicant's contribution to joint works, the applicant must include a brief description of his or her contribution.

Please submit your application electronically via jobbnorge.no with your CV, diplomas and certificates. Applications submitted elsewhere will not be considered. A Diploma Supplement must be attached for European master's diplomas obtained outside Norway. Chinese applicants are required to provide confirmation of their master's diploma from China Credentials Verification (CHSI: http://www.chsi.com.cn/en/).

Applicants invited for interview must include certified copies of transcripts and reference letters.

Please refer to the application number 2020/5928 when applying.

Application deadline: 07.03.2020

NTNU - knowledge for a better world

The Norwegian University of Science and Technology (NTNU) creates knowledge for a better world and solutions that can change everyday life.

Faculty of Information Technology and Electrical Engineering

The Faculty of Information Technology and Electrical Engineering is Norway's largest university environment in ICT, electrical engineering and mathematical sciences. Our aim is to contribute to a smart, secure and sustainable future. We emphasize high international quality in research, education, innovation, dissemination and outreach. The Faculty consists of seven departments and the Faculty Administration.

Deadline: 7th March 2020. Employer: NTNU - Norwegian University of Science and Technology. Municipality: Trondheim. Scope: Fulltime. Duration: Temporary. Place of service: Trondheim.

More:
PhD in Machine Learning and Computer Vision for Smart Maintenance of Road Infrastructure job with NORWEGIAN UNIVERSITY OF SCIENCE & TECHNOLOGY...

How to Train Your AI Soldier Robots (and the Humans Who Command Them) – War on the Rocks

Editor's Note: This article was submitted in response to the call for ideas issued by the co-chairs of the National Security Commission on Artificial Intelligence, Eric Schmidt and Robert Work. It addresses the third question (part a.), which asks how institutions, organizational structures, and infrastructure will affect AI development, and whether artificial intelligence will require the development of new institutions or changes to existing institutions.

Artificial intelligence (AI) is often portrayed as a single omnipotent force: the computer as God. Often the AI is evil, or at least misguided. According to Hollywood, humans can outwit the computer (2001: A Space Odyssey), reason with it (Wargames), blow it up (Star Wars: The Phantom Menace), or be defeated by it (Dr. Strangelove). Sometimes the AI is an automated version of a human, perhaps a human fighter's faithful companion (the robot R2-D2 in Star Wars).

These science fiction tropes are legitimate models for military discussion and many are being discussed. But there are other possibilities. In particular, machine learning may give rise to new forms of intelligence; not natural, but not really artificial if the term implies having been designed in detail by a person. Such new forms of intelligence may resemble that of humans or other animals, and we will discuss them using language associated with humans, but we are not discussing robots that have been deliberately programmed to emulate human intelligence. Through machine learning they have been programmed by their own experiences. We speculate that some of the characteristics that humans have evolved over millennia will also evolve in future AI, characteristics that have evolved purely for their success in a wide range of situations that are real, for humans, or simulated, for robots.

As the capabilities of AI-enabled robots increase, and in particular as behaviors emerge that are both complex and outside past human experience, how will we organize, train, and command them and the humans who will supervise and maintain them? Existing methods and structures, such as military ranks and doctrine, that have evolved over millennia to manage the complexity of human behavior will likely be necessary. But because robots will evolve new behaviors we cannot yet imagine, they are unlikely to be sufficient. Instead, the military and its partners will need to learn new types of organization and new approaches to training. It is impossible to predict what these will be but very possible they will differ greatly from approaches that have worked in the past. Ongoing experimentation will be essential.

How to Respond to AI Advances

The development of AI, especially machine learning, will lead to unpredictable new types of robots. Advances in AI suggest that humans will have the ability to create many types of robots, of different shapes, sizes, or degrees of independence or autonomy. It is conceivable that humans may one day be able to design tiny AI bullets to pierce only designated targets, automated aircraft to fly as loyal wingmen alongside human pilots, or thousands of AI fish to swim up an enemy's river. Or we could design AI not as a device but as a global grid that analyzes vast amounts of diverse data. Multiple programs funded by the Department of Defense are on their way to developing robots with varying degrees of autonomy.

In science fiction, robots are often depicted as behaving in groups (like the robot dogs in Metalhead). Researchers inspired by animal behaviors have developed AI concepts such as swarms, in which relatively simple rules for each robot can result in complex emergent phenomena on a larger scale. This is a legitimate and important area of investigation. Nevertheless, simply imitating the known behaviors of animals has its limits. After observing the genocidal nature of military operations among ants, biologists Bert Hölldobler and E. O. Wilson wrote, "If ants had nuclear weapons, they would probably end the world in a week." Nor would we want to limit AI to imitating human behavior. In any case, a major point of machine learning is the possibility of uncovering new behaviors or strategies. Some of these will be very different from all past experience: human, animal, and automated. We will likely encounter behaviors that, although not human, are so complex that some human language, such as personality, may seem appropriately descriptive. Robots with new, sophisticated patterns of behavior may require new forms of organization.
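To make the "simple rules, complex emergent behavior" point concrete, here is a minimal swarm sketch in which each agent follows only two local rules, cohesion and separation; group-level motion emerges without any central controller. The agent count and rule weights are arbitrary illustrative assumptions, not drawn from any military system.

```python
# Minimal swarm sketch: each agent follows two local rules (cohesion and
# separation); group-level behavior emerges without a central controller.
# All parameters are arbitrary and purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
pos = rng.uniform(0, 100, size=(30, 2))   # 30 agents in a 2-D area
vel = rng.uniform(-1, 1, size=(30, 2))

for step in range(200):
    center = pos.mean(axis=0)
    cohesion = (center - pos) * 0.01               # rule 1: drift toward the group
    separation = np.zeros_like(pos)
    for i in range(len(pos)):
        diff = pos[i] - pos
        dist = np.linalg.norm(diff, axis=1)
        close = (dist < 5) & (dist > 0)
        separation[i] = diff[close].sum(axis=0) * 0.05  # rule 2: avoid crowding
    vel = 0.9 * vel + cohesion + separation
    pos += vel

print("spread after 200 steps:", round(float(pos.std()), 2))
```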

Military structure and scheme of maneuver are key to victory. Groups often fight best when they don't simply swarm but execute sophisticated maneuvers in hierarchical structures. Modern military tactics were honed over centuries of experimentation and testing. This was a lengthy, expensive, and bloody process.

The development of appropriate organizations and tactics for AI systems will also likely be expensive, although one can hope that through the use of simulation it will not be bloody. But it may happen quickly. The competitive international environment creates pressure to use machine learning to develop AI organizational structure and tactics, techniques, and procedures as fast as possible.

Despite our considerable experience organizing humans, when dealing with robots with new, unfamiliar, and likely rapidly-evolving personalities we confront something of a blank slate. But we must think beyond established paradigms, beyond the computer as all-powerful or the computer as loyal sidekick.

Humans fight in a hierarchy of groups, each soldier in a squad or each battalion in a brigade exercising a combination of obedience and autonomy. Decisions are constantly made at all levels of the organization. Deciding what decisions can be made at what levels is itself an important decision. In an effective organization, decision-makers at all levels have a good idea of how others will act, even when direct communication is not possible.

Imagine an operation in which several hundred underwater robots are swimming up a river to accomplish a mission. They are spotted and attacked. A decision must be made: Should they retreat? Who decides? Communications will likely be imperfect. Some mid-level commander, likely one of the robot swimmers, will decide based on limited information. The decision will likely be difficult and depend on the intelligence, experience, and judgment of the robot commander. It is essential that the swimmers know who or what is issuing legitimate orders. That is, there will have to be some structure, some hierarchy.

The optimal unit structure will be worked out through experience. Achieving as much experience as possible in peacetime is essential. That means training.

Training Robot Warriors

Robots with AI-enabled technologies will have to be exercised regularly, partly to test them and understand their capabilities and partly to provide them with the opportunity to learn from recreating combat. This doesn't mean that each individual hardware item has to be trained, but that the software has to develop by learning from its mistakes in virtual testbeds and, to the extent that they are feasible, realistic field tests. People learn best from the most realistic training possible. There is no reason to expect machines to be any different in that regard. Furthermore, as capabilities, threats, and missions evolve, robots will need to be continuously trained and tested to maintain effectiveness.

Training may seem a strange word for machine learning in a simulated operational environment. But then, conventional training is human learning in a controlled environment. Robots, like humans, will need to learn what to expect from their comrades. And as they train and learn highly complex patterns, it may make sense to think of such patterns as personalities and memories. At least, the patterns may appear that way to the humans interacting with them. The point of such anthropomorphic language is not that the machines have become human, but that their complexity is such that it is helpful to think in these terms.

One big difference between people and machines is that, in theory at least, the products of machine learning, the code for these memories or personalities, can be uploaded directly from one very experienced robot to any number of others. If all robots are given identical training and the same coded memories, we might end up with a uniformity among a units members that, in the aggregate, is less than optimal for the unit as a whole.

Diversity of perspective is accepted as a valuable aid to human teamwork. Groupthink is widely understood to be a threat. It's reasonable to assume that diversity will also be beneficial to teams of robots. It may be desirable to create a library of many different personalities or memories that could be assigned to different robots for particular missions. Different personalities could be deliberately created by using somewhat different sets of training testbeds to develop software for the same mission.

If AI can create autonomous robots with human-like characteristics, what is the ideal personality mix for each mission? Again, we are using the anthropomorphic term personality for the details of the robots' behavior patterns. One could call it a robot's programming if that did not suggest the existence of an intentional programmer. The robots' personalities have evolved from their participation in a very large number of simulations. It is unlikely that any human will fully understand a given personality or be able to fully predict all aspects of a robot's behavior.

In a simple case, there may be one optimum personality for all the robots of one type. In more complicated situations, where robots will interact with each other, having robots that respond differently to the same stimuli could make a unit more robust. These are things that military planners can hope to learn through testing and training. Of course, attributes of personality that may have evolved for one set of situations may be less than optimal, or positively dangerous, in another. We talk a lot about artificial intelligence. We don't discuss artificial mental illness. But there is no reason to rule it out.

Of course, humans will need to be trained to interact with the machines. Machine learning systems already often exhibit sophisticated behaviors that are difficult to describe. It's unclear how future AI-enabled robots will behave in combat. Humans, and other robots, will need experience to know what to expect and to deal with any unexpected behaviors that may emerge. Planners need experience to know which plans might work.

But the human-robot relationship might turn out to be something completely different. For all of human history, generals have had to learn their soldiers' capabilities. They knew best exactly what their troops could do. They could judge the psychological state of their subordinates. They might even know when they were being lied to. But today's commanders do not know, yet, what their AI might prove capable of. In a sense, it is the AI troops that will have to train their commanders.

In traditional military services, the primary peacetime occupation of the combat unit is training. Every single servicemember has to be trained up to the standard necessary for wartime proficiency. This is a huge task. In a robot unit, planners, maintainers, and logisticians will have to be trained to train and maintain the machines but may spend little time working on their hardware except during deployment.

What would the units look like? What is the optimal unit rank structure? How does the human rank structure relate to the robot rank structure? There are a million questions as we enter uncharted territory. The way to find out is to put robot units out onto test ranges where they can operate continuously, test software, and improve machine learning. AI units working together can learn and teach each other and humans.

Conclusion

AI-enabled robots will need to be organized, trained, and maintained. While these systems will have human-like characteristics, they will likely develop distinct personalities. The military will need an extensive training program to inform new doctrines and concepts to manage this powerful, but unprecedented, capability.

It's unclear what structures will prove effective to manage AI robots. Only by continuous experimentation can people, including computer scientists and military operators, understand the developing world of multi-unit human and robot forces. We must hope that experiments lead to correct solutions. There is no guarantee that we will get it right. But there is every reason to believe that as technology enables the development of new and more complex patterns of robot behavior, new types of military organizations will emerge.

Thomas Hamilton is a Senior Physical Scientist at the nonprofit, nonpartisan RAND Corporation. He has a Ph.D. in physics from Columbia University and was a research astrophysicist at Harvard, Columbia, and Caltech before joining RAND. At RAND he has worked extensively on the employment of unmanned air vehicles and other technology issues for the Defense Department.

Image: Wikicommons (U.S. Air Force photo by Kevin L. Moses Sr.)

View original post here:
How to Train Your AI Soldier Robots (and the Humans Who Command Them) - War on the Rocks

From models of galaxies to atoms, simple AI shortcuts speed up simulations by billions of times – Science Magazine

Emulators speed up simulations, such as this NASA aerosol model that shows soot from fires in Australia.

By Matthew Hutson, Feb. 12, 2020, 2:35 PM

Modeling immensely complex natural phenomena, such as how subatomic particles interact or how atmospheric haze affects climate, can take many hours on even the fastest supercomputers. Emulators, algorithms that quickly approximate these detailed simulations, offer a shortcut. Now, work posted online shows how artificial intelligence (AI) can easily produce accurate emulators that can accelerate simulations across all of science by billions of times.

"This is a big deal," says Donald Lucas, who runs climate simulations at Lawrence Livermore National Laboratory and was not involved in the work. He says the new system automatically creates emulators that work better and faster than those his team designs and trains, usually by hand. The new emulators could be used to improve the models they mimic and help scientists make the best of their time at experimental facilities. If the work stands up to peer review, Lucas says, "it would change things in a big way."

A typical computer simulation might calculate, at each time step, how physical forces affect atoms, clouds, galaxies, whatever is being modeled. Emulators, based on a form of AI called machine learning, skip the laborious reproduction of nature. Fed with the inputs and outputs of the full simulation, emulators look for patterns and learn to guess what the simulation would do with new inputs. But creating training data for them requires running the full simulation many times, the very thing the emulator is meant to avoid.
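In code, an emulator of this kind is just a regression model fit to input/output pairs from the full simulation. The sketch below uses a cheap stand-in function in place of a real simulation, since the actual models are unavailable here; the sample size and network shape are illustrative assumptions.

```python
# Minimal emulator sketch: learn to mimic an "expensive" simulation from a
# modest number of its input/output pairs (the simulation here is a stand-in).
import numpy as np
from sklearn.neural_network import MLPRegressor

def expensive_simulation(x):
    # Stand-in for hours of supercomputer time.
    return np.sin(3 * x[:, 0]) * np.exp(-x[:, 1])

rng = np.random.default_rng(0)
X_train = rng.uniform(0, 1, size=(500, 2))       # a limited set of full runs
y_train = expensive_simulation(X_train)

emulator = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                        random_state=0).fit(X_train, y_train)

X_new = rng.uniform(0, 1, size=(5, 2))
print("emulator :", np.round(emulator.predict(X_new), 3))
print("simulator:", np.round(expensive_simulation(X_new), 3))
```

Once fit, the emulator answers in microseconds what the stand-in "simulation" would otherwise have to be rerun to compute; the accuracy numbers reported in the article come from far more careful setups than this sketch.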

The new emulators are based on neural networks, machine learning systems inspired by the brain's wiring, and need far less training. Neural networks consist of simple computing elements that link into circuitries particular for different tasks. Normally the connection strengths evolve through training. But with a technique called neural architecture search, the most data-efficient wiring pattern for a given task can be identified.

The technique, called Deep Emulator Network Search (DENSE), relies on a general neural architecture search co-developed by Melody Guan, a computer scientist at Stanford University. It randomly inserts layers of computation between the network's input and output, and tests and trains the resulting wiring with the limited data. If an added layer enhances performance, it's more likely to be included in future variations. Repeating the process improves the emulator. Guan says it's exciting to see her work used toward scientific discovery. Muhammad Kasim, a physicist at the University of Oxford who led the study, which was posted on the preprint server arXiv in January, says his team built on Guan's work because it balanced accuracy and efficiency.
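The search loop described above can be sketched at toy scale: propose an extra layer, keep it only if cross-validated accuracy improves. This is a generic hill-climbing stand-in written purely for illustration, not the actual DENSE algorithm, whose details are in the arXiv preprint.

```python
# Toy architecture search in the spirit described above: try adding a layer,
# keep the change only when it improves validation performance.
# A generic stand-in for illustration, not the DENSE algorithm itself.
import random
from sklearn.datasets import load_digits
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)

def score(layers):
    model = MLPClassifier(hidden_layer_sizes=tuple(layers), max_iter=300,
                          random_state=0)
    return cross_val_score(model, X, y, cv=3).mean()

random.seed(0)
best_layers = [32]
best_score = score(best_layers)

for _ in range(5):                                    # a handful of search steps
    candidate = best_layers + [random.choice([16, 32, 64])]  # insert a layer
    s = score(candidate)
    if s > best_score:                                # keep the layer only if it helps
        best_layers, best_score = candidate, s

print(best_layers, round(best_score, 3))
```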

The researchers used DENSE to develop emulators for 10 simulations in physics, astronomy, geology, and climate science. One simulation, for example, models the way soot and other atmospheric aerosols reflect and absorb sunlight, affecting the global climate. It can take a thousand computer-hours to run, so Duncan Watson-Parris, an atmospheric physicist at Oxford and study co-author, sometimes uses a machine learning emulator. But, he says, "it's tricky to set up, and it can't produce high-resolution outputs, no matter how many data you give it."

The emulators that DENSE created, in contrast, excelled despite the lack of data. When they were turbocharged with specialized graphical processing chips, they were between about 100,000 and 2 billion times faster than their simulations. That speedup isn't unusual for an emulator, but these were highly accurate: In one comparison, an astronomy emulator's results were more than 99.9% identical to the results of the full simulation, and across the 10 simulations the neural network emulators were far better than conventional ones. Kasim says he thought DENSE would need tens of thousands of training examples per simulation to achieve these levels of accuracy. In most cases, it used a few thousand, and in the aerosol case only a few dozen.

"It's a really cool result," said Laurence Perreault-Levasseur, an astrophysicist at the University of Montreal who simulates galaxies whose light has been lensed by the gravity of other galaxies. "It's very impressive that this same methodology can be applied for these different problems, and that they can manage to train it with so few examples."

Lucas says the DENSE emulators, on top of being fast and accurate, have another powerful application. They can solve inverse problems, using the emulator to identify the best model parameters for correctly predicting outputs. These parameters could then be used to improve full simulations.

Kasim says DENSE could even enable researchers to interpret data on the fly. His team studies the behavior of plasma pushed to extreme conditions by a giant x-ray laser at Stanford, where time is precious. Analyzing their data in real time, modeling, for instance, a plasma's temperature and density, is impossible, because the needed simulations can take days to run, longer than the time the researchers have on the laser. But a DENSE emulator could interpret the data fast enough to modify the experiment, he says. "Hopefully in the future we can do on-the-spot analysis."

Read more:
From models of galaxies to atoms, simple AI shortcuts speed up simulations by billions of times - Science Magazine

Reinforcement Learning (RL) Market Report & Framework, 2020: An Introduction to the Technology – Yahoo Finance

Dublin, Feb. 04, 2020 (GLOBE NEWSWIRE) -- The "Reinforcement Learning: An Introduction to the Technology" report has been added to ResearchAndMarkets.com's offering.

These days, machine learning (ML), which is a subset of computer science, is one of the most rapidly growing fields in the technology world. It is considered to be a core field for implementing artificial intelligence (AI) and data science.

The adoption of data-intensive machine learning methods like reinforcement learning is playing a major role in decision-making across various industries such as healthcare, education, manufacturing, policing, financial modeling and marketing. The growing demand for more complex machine working is driving the demand for learning-based methods in the ML field. Reinforcement learning also presents a unique opportunity to address the dynamic behavior of systems.

This study was conducted in order to understand the current state of reinforcement learning and track its adoption along various verticals, and it seeks to put forth ways to fully exploit the benefits of this technology. This study will serve as a guide and benchmark for technology vendors, manufacturers of the hardware that supports AI, and the end-users who will finally use this technology. Decision-makers will find the information useful in developing business strategies and in identifying areas for research and development.

The report includes:

Key Topics Covered

Chapter 1 Reinforcement Learning

Chapter 2 Bibliography

List of Tables
Table 1: Reinforcement Learning vs. Supervised Learning vs. Unsupervised Learning
Table 2: Global Machine Learning Market, by Region, Through 2024

List of Figures
Figure 1: Reinforcement Learning Process
Figure 2: Reinforcement Learning Workflow
Figure 3: Artificial Intelligence vs. Machine Learning vs. Reinforcement Learning
Figure 4: Machine Learning Applications
Figure 5: Types of Machine Learning
Figure 6: Reinforcement Learning Market Dynamics
Figure 7: Global Machine Learning Market, by Region, 2018-2024

For more information about this report visit https://www.researchandmarkets.com/r/g0ad2f

Research and Markets also offers Custom Research services providing focused, comprehensive and tailored research.

CONTACT: ResearchAndMarkets.com
Laura Wood, Senior Press Manager
press@researchandmarkets.com
For E.S.T Office Hours Call 1-917-300-0470
For U.S./CAN Toll Free Call 1-800-526-8630
For GMT Office Hours Call +353-1-416-8900

The rest is here:
Reinforcement Learning (RL) Market Report & Framework, 2020: An Introduction to the Technology - Yahoo Finance

REPLY: European Central Bank Explores the Possibilities of Machine Learning With a Coding Marathon Organised by Reply – Business Wire

TURIN, Italy--(BUSINESS WIRE)--The European Central Bank (ECB), in collaboration with Reply, leader in digital technology innovation, is organising the Supervisory Data Hackathon, a coding marathon focussing on the application of Machine Learning and Artificial Intelligence.

From 27 to 29 February 2020, at the ECB in Frankfurt, more than 80 participants from the ECB, Reply and other companies will explore possibilities to gain deeper and faster insights into the large amount of supervisory data gathered by the ECB from financial institutions through regular financial reporting for risk analysis. The coding marathon provides a protected space to co-creatively develop new ideas and prototype solutions based on Artificial Intelligence within a short timeframe.

Ahead of the event, participants submit projects in the areas of data quality, interlinkages in supervisory reporting and risk indicators. The most promising submissions will be worked on for 48 hours during the event by the multidisciplinary teams composed of members from the ECB, Reply and other companies.

Reply has proven its Artificial Intelligence and Machine Learning capabilities with numerous projects in various industries and combines this technological expertise with in-depth knowledge of the financial services industry and its regulatory environment.

Coding marathons using the latest technologies are a substantial element in Reply's toolset for sparking innovation through training and knowledge transfer, internally and with clients and partners.

Reply
Reply [MTA, STAR: REY] specialises in the design and implementation of solutions based on new communication channels and digital media. As a network of highly specialised companies, Reply defines and develops business models enabled by the new models of big data, cloud computing, digital media and the internet of things. Reply delivers consulting, system integration and digital services to organisations across the telecom and media; industry and services; banking and insurance; and public sectors. http://www.reply.com

Continued here:
REPLY: European Central Bank Explores the Possibilities of Machine Learning With a Coding Marathon Organised by Reply - Business Wire

Federated machine learning is coming – here’s the questions we should be asking – Diginomica

A few years ago, I wondered how edge data would ever be useful given the enormous cost of transmitting all the data to either the centralized data center or some variant of cloud infrastructure. (It is said that 5G will solve that problem).

Consider, for example, applications of vast sensor networks that stream a great deal of data at small intervals. Vehicles on the move are a good example.

There is telemetry from cameras, radar, sonar, GPS and LIDAR, the latter about 70MB/sec. This could quickly amount to four terabytes per day (per vehicle). How much of this data needs to be retained? Answers I heard a few years ago were along two lines:

My counterarguments at the time were:

Introducing TensorFlow Federated, via the TensorFlow Blog:

This centralized approach can be problematic if the data is sensitive or expensive to centralize. Wouldn't it be better if we could run the data analysis and machine learning right on the devices where that data is generated, and still be able to aggregate together what's been learned?

Since I looked at this a few years ago, the distinction between an edge device and a sensor has more or less disappeared. Sensors can transmit via wifi (though there is an issue of battery life, and if they're remote, that's a problem); the definition of the edge has widened quite a bit.

Decentralized data collection and processing have become more powerful and able to do an impressive amount of computing. A case in point is Intel's Neural Compute Stick 2, a computer vision and deep learning accelerator powered by the Intel Movidius Myriad X VPU, which can plug into a Pi for less than $70.00.

But for truly distributed processing, the Apple A13 chipset in the iPhone 11 has a few features that boggle the mind. From "Inside Apple's A13 Bionic system-on-chip": the Neural Engine is a custom block of silicon, separate from the CPU and GPU, focused on accelerating machine learning computations. The CPU has a set of "machine learning accelerators" that perform matrix multiplication operations up to six times faster than the CPU alone. It's not clear how exactly this hardware is accessed, but for tasks like machine learning (ML) that use lots of matrix operations, the CPU is a powerhouse. Note that this matrix multiplication hardware is part of the CPU cores and separate from the Neural Engine hardware.

This raises the question: "Why would a smartphone have neural net and machine learning capabilities, and does that have anything to do with the data transmission problem for the edge?" A few years ago, I thought the idea wasn't feasible, but the capability of distributed devices has accelerated. How far-fetched is this?

Let's roll the clock back thirty years. The finance department of a large diversified organization would prepare, in the fall, a package of spreadsheets for every part of the organization that had budget authority. The sheets would start with low-level detail, official assumptions, etc., until they all rolled up to a small number of summary sheets that were submitted to headquarters. This was a terrible, cumbersome way of doing things, but it does, in a way, presage the concept of federated learning.

Another idea that vanished is push technology, which imposed the same kind of network load as centralizing sensor data, just in the opposite direction. About twenty-five years ago, when everyone had a networked PC on their desk, the PointCast Network used push technology. Still, it did not perform as well as expected, widely believed to be because its traffic burdened corporate networks with excessive bandwidth use, and it was banned in many places. If federated learning works, those problems have to be addressed.

Though this estimate changes every day, there are 3 billion smartphones in the world and 7 billion connected devices. You can almost hear the buzz in the air of all of that data that is always flying around. The canonical image of ML is that all of that data needs to find a home somewhere so that algorithms can crunch through it to yield insights. There are a few problems with this, especially if the data is coming from personal devices, such as smartphones, Fitbits, even smart homes.

Moving highly personal data across the network raises privacy issues. It is also costly to centralize this data at scale. Storage in the cloud is asymptotically approaching zero in cost, but the transmission costs are not. That includes both local WiFi from the devices (or even cellular) and the long-distance transmission from the local collectors to the central repository. This is all very expensive at this scale.

Suppose large-scale AI training could be done on each device, bringing the algorithm to the data rather than vice versa? Each device could contribute to a broader application without having to send its data over the network. This idea has become respectable enough that it has a name: Federated Learning.

Jumping ahead, there is no controversy that compromising device performance and user experience while training a network, or compressing a model and settling for lower accuracy, are not acceptable alternatives. From Federated Learning: The Future of Distributed Machine Learning:

To train a machine learning model, traditional machine learning adopts a centralized approach that requires the training data to be aggregated on a single machine or in a datacenter. This is practically what giant AI companies such as Google, Facebook, and Amazon have been doing over the years. This centralized training approach, however, is privacy-intrusive, especially for mobile phone users. To train or obtain a better machine learning model under such a centralized training approach, mobile phone users have to trade their privacy by sending their personal data stored inside phones to the clouds owned by the AI companies.

The federated learning approach decentralizes training across mobile phones dispersed across geography. The presumption is that they collaboratively develop machine learning models while keeping their personal data on their phones. For example, consider building a general-purpose recommendation engine for music listeners. While the personal data and personal information are retained on the phone, I am not at all comfortable that data contained in the result sent to the collector cannot be reverse-engineered, and I haven't heard a convincing argument to the contrary.

Here is how it works. A computing group, for example, is a collection of mobile devices that have opted to be part of a large-scale AI program. Each device is "pushed" a model, executes it locally, and learns as the model processes the data. There are some alternatives to this. Homogeneous models imply that every device is working with the same schema of data. Alternatively, there are heterogeneous models where harmonization of the data happens in the cloud.

Here are some questions in my mind.

Here is the fuzzy part: federated learning sends the results of the learning, as well as some operational detail such as model parameters and corresponding weights, back to the cloud. How does it do that while preserving your privacy and not clogging up your network? The answer is that the results are a fraction of the data, and since the data itself is not more than a few GB, that seems plausible. The results sent to the cloud can be encrypted with, for example, homomorphic encryption (HE). An alternative is to send the data as a tensor, which is not encrypted because it is not understandable by anything but the algorithm. The update is then aggregated with other user updates to improve the shared model. Most importantly, all the training data remains on the user's devices.
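To make the mechanics described above concrete, here is a minimal sketch of the federated averaging idea in plain Python with NumPy. The function names, the linear model, and the simple weighted average are illustrative assumptions rather than any vendor's actual API; production frameworks such as TensorFlow Federated layer secure aggregation, device sampling, and compression on top of this basic loop.

```python
import numpy as np

def local_update(global_weights, local_data, lr=0.01, epochs=1):
    """Train on-device: start from the shared model and learn from local data only."""
    w = global_weights.copy()
    X, y = local_data
    for _ in range(epochs):
        # One gradient step on a simple linear model (illustrative).
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    # Only the updated weights leave the device, never the raw data.
    return w

def federated_average(updates, sizes):
    """Server side: aggregate device updates, weighted by how much data each device holds."""
    total = sum(sizes)
    return sum(w * (n / total) for w, n in zip(updates, sizes))

# Toy round: three devices, each holding private data that stays local.
rng = np.random.default_rng(0)
global_w = np.zeros(5)
devices = [(rng.normal(size=(20, 5)), rng.normal(size=20)) for _ in range(3)]

for _ in range(10):
    updates = [local_update(global_w, d) for d in devices]
    global_w = federated_average(updates, [len(d[1]) for d in devices])
```

The key property is visible in the loop: only the weight updates travel to the aggregator, which is what makes the privacy argument, however debatable, possible in the first place.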

In CDO Review, The Future of AI May Be in Federated Learning:

Federated Learning allows for faster deployment and testing of smarter models, lower latency, and less power consumption, all while ensuring privacy. Also, in addition to providing an update to the shared model, the improved (local) model on your phone can be used immediately, powering experiences personalized by the way you use your phone.

There is a lot more to say about this. The privacy claims are a little hard to believe. When an algorithm is pushed to your phone, it is easy to imagine how this can backfire. Even the tensor representation can create a problem. Indirect reference to real data may be secure, but patterns across an extensive collection can surely emerge.

Read more here:
Federated machine learning is coming - here's the questions we should be asking - Diginomica

Iguazio Deployed by Payoneer to Prevent Fraud with Real-time Machine Learning – Business Wire

NEW YORK--(BUSINESS WIRE)--Iguazio, the data science platform for real-time machine learning applications, today announced that Payoneer, the digital payment platform empowering businesses around the world to grow globally, has selected Iguazio's platform to provide its 4 million customers with a safer payment experience. By deploying Iguazio, Payoneer moved from a reactive fraud detection method to proactive prevention with real-time machine learning and predictive analytics.

Payoneer overcomes the challenge of detecting fraud within complex networks with sophisticated algorithms tracking multiple parameters, including account creation times and name changes. However, prior to using Iguazio, fraud was detected retroactively, meaning offending users could only be blocked after damage had already been done. Payoneer is now able to take the same sophisticated machine learning models built offline and serve them in real-time against fresh data. This ensures immediate prevention of fraud and money laundering, with predictive machine learning models identifying suspicious patterns continuously. The cooperation was facilitated by Belocal, a leading data and IT solution integrator for mid-size and enterprise companies.

"We've tackled one of our most elusive challenges with real-time predictive models, making fraud attacks almost impossible on Payoneer," noted Yaron Weiss, VP Corporate Security and Global IT Operations (CISO) at Payoneer. "With Iguazio's Data Science Platform, we built a scalable and reliable system which adapts to new threats and enables us to prevent fraud with minimum false positives."

"Payoneer is leading innovation in the industry of digital payments and we are proud to be a part of it," said Asaf Somekh, CEO, Iguazio. "We're glad to see Payoneer accelerating its ability to develop new machine learning based services, increasing the impact of data science on the business."

"Payoneer and Iguazio are a great example of technology innovation applied in real-world use-cases and addressing real market gaps," said Hugo Georlette, CEO, Belocal. "We are eager to continue selling and implementing Iguazio's Data Science Platform to make business impact across multiple industries."

Iguazio's Data Science Platform enables Payoneer to bring its most intelligent data science strategies to life. Designed to provide a simple cloud experience deployed anywhere, it includes a low-latency serverless framework, a real-time multi-model data engine and a modern Python ecosystem running over Kubernetes.

Earlier today, Iguazio also announced having raised $24M from existing and new investors, including Samsung SDS and Kensington Capital Partners. The new funding will be used to drive future product innovation and support global expansion into new and existing markets.

About Iguazio

The Iguazio Data Science Platform enables enterprises to develop, deploy and manage AI applications at scale. With Iguazio, companies can run AI models in real time, deploy them anywhere (multi-cloud, on-prem or edge), and bring to life their most ambitious data-driven strategies. Enterprises spanning a wide range of verticals, including financial services, manufacturing, telecoms and gaming, use Iguazio to create business impact through a multitude of real-time use cases. Iguazio is backed by top financial and strategic investors including Samsung, Verizon, Bosch, CME Group, and Dell. The company is led by serial entrepreneurs and a diverse team of innovators in the USA, UK, Singapore and Israel. Find out more on http://www.iguazio.com

About Belocal

Since its inception in 2006, Belocal has experienced consistent and sustainable growth by developing strong long-term relationships with its technology partners and by providing tremendous value to its clients. We pride ourselves on delivering the most innovative technology solutions, enabling our customers to lead their market segments and stay ahead of the competition. At Belocal, we take pride in our ability to listen, our attention to detail and our expertise in innovation. Such strengths have enabled us to develop new solutions and services to suit the changing needs of our clients, and to acquire new business by tailoring all our solutions and services to the specific needs of each client.

Continue reading here:
Iguazio Deployed by Payoneer to Prevent Fraud with Real-time Machine Learning - Business Wire

3 books to get started on data science and machine learning – TechTalks

Image credit: Depositphotos

This post is part of AI education, a series of posts that review and explore educational content on data science and machine learning.

With data science and machine learning skills being in high demand, there's increasing interest in careers in both fields. But with so many educational books, video tutorials and online courses on data science and machine learning, finding the right starting point can be quite confusing.

Readers often ask me for advice on the best roadmap for becoming a data scientist. To be frank, there's no one-size-fits-all approach, and it all depends on the skills you already have. In this post, I will review three very good introductory books on data science and machine learning.

Based on your background in math and programming, the two fundamental skills required for data science and machine learning, you'll surely find one of these books a good place to start.

Data scientists and machine learning engineers sit at the intersection of math and programming. To become a good data scientist, you don't need to be a crack coder who knows every single design pattern and code optimization technique. Neither do you need to have an MSc in math. But you must know just enough of both to get started. (You do need to up your skills in both fields as you climb the ladder of learning data science and machine learning.)

If you remember your high school mathematics, then you have a strong base to begin the data science journey. You don't necessarily need to recall every formula they taught you in school. But concepts of statistics and probability such as medians and means, standard deviations, and normal distributions are fundamental.

On the coding side, knowing the basics of popular programming languages (C/C++, Java, JavaScript, C#) should be enough. You should have a solid understanding of variables, functions, and program flow (if-else, loops) and a bit of object-oriented programming. Python knowledge is a strong plus for a few reasons: First, most data science books and courses use Python as their language of choice. Second, the most popular data science and machine learning libraries are available for Python. And finally, Python's syntax and coding conventions are different from other languages such as C and Java. Getting used to it takes a bit of practice, especially if you're used to coding with curly brackets and semicolons.
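For readers coming from curly-brace languages, a tiny generic snippet (not taken from any of the books reviewed here) shows the adjustment: indentation and colons take the place of curly brackets and semicolons.

```python
# In Python, indentation defines blocks and there are no semicolons.
def count_even(numbers):
    total = 0
    for n in numbers:
        if n % 2 == 0:  # a colon plus an indented block, where C or Java would use { ... }
            total += 1
    return total

print(count_even([1, 2, 3, 4]))  # prints 2
```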

Written by Sinan Ozdemir, Principles of Data Science is one of the best intros to data science that I've read. The book keeps the right balance between math and coding, theory and practice.

Using examples, Ozdemir takes you through the fundamental concepts of data science such as different types of data and the stages of data science. You will learn what it means to clean your data, normalize it and split it between training and test datasets.

The book also contains a refresher on basic mathematical concepts such as vector math, matrices, logarithms, Bayesian statistics, and more. Every mathematical concept is interspersed with coding examples and introductions to relevant Python data science libraries for analyzing and visualizing data. But you have to bring your own Python skills. The book doesn't have any Python crash course or introductory chapter on the programming language.

What makes the learning curve of this book especially smooth is that it doesn't go too deep into the theories. It gives you just enough knowledge so that you can make optimal use of Python libraries such as Pandas and NumPy, and classes such as DataFrame and LinearRegression.
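To give a flavour of the workflow the book builds toward, here is a minimal sketch using pandas and scikit-learn; the toy dataset and column names are invented for illustration and do not come from the book.

```python
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# A tiny made-up dataset: hours studied vs. exam score.
df = pd.DataFrame({"hours": [1, 2, 3, 4, 5, 6, 7, 8],
                   "score": [52, 55, 61, 64, 70, 74, 79, 83]})

# Split into training and test sets, as the book describes, then fit a linear model.
X_train, X_test, y_train, y_test = train_test_split(
    df[["hours"]], df["score"], test_size=0.25, random_state=0)

model = LinearRegression().fit(X_train, y_train)
print(model.score(X_test, y_test))  # R^2 on the held-out data
```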

Granted, this is not a deep dive. If you're the kind of person who wants to get to the bottom of every data science and machine learning concept and learn the logic behind every library and function, Principles of Data Science will leave you a bit disappointed.

But again, as I mentioned, this is an intro, not a book that will make you career-ready in data science. It's meant to familiarize you with what this growing field is. And it does a great job at that, bringing together all the important aspects of a complex field in less than 400 pages.

At the end of the book, Ozdemir introduces you to machine learning concepts. Compared to other data science textbooks, this section of Principles of Data Science falls a bit short, both in theory and practice. The basics are there, such as the difference between supervised and unsupervised learning, but I would have liked a bit more detail on how different models work.

The book does give you a taste of different ML algorithms such as regression models, decision trees, K-means, and more advanced topics such as ensemble techniques and neural networks. The coverage is enough to whet your appetite to learn more about machine learning.

As the name suggests, Data Science from Scratch takes you through data science from the ground up. The author, Joel Grus, does a great job of showing you all the nitty-gritty details of coding data science. And the book has plenty of examples and exercises to go with the theory.

The book provides a Python crash course, which is good for programmers who have good knowledge of another programming language but don't have any background in Python. What's really good about Grus's intro to Python is that, aside from the very basic stuff, he takes you through some of the advanced features for handling arrays and matrices that you won't find in general Python tutorial textbooks, such as list comprehensions, assertions, iterables and generators, and other very useful tools.
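For readers who have not met those features before, a few generic one-liners give the idea; these are ordinary Python examples, not excerpts from the book.

```python
# List comprehension: build a list in one expression.
squares = [n * n for n in range(10) if n % 2 == 0]

# Assertion: fail fast when an assumption is violated.
assert len(squares) == 5

# Generator: lazily yields values instead of building the whole list in memory.
def rolling_mean(values, window=3):
    for i in range(len(values) - window + 1):
        yield sum(values[i:i + window]) / window

print(list(rolling_mean([1, 2, 3, 4, 5])))  # [2.0, 3.0, 4.0]
```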

Moreover, the Second Edition of Data Science from Scratch, published in 2019, leverages some of the advanced features of Python 3.6, including type annotations (which you'll love if you come from a strongly typed language like C++).
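Those type annotations look roughly like this; the Vector alias is in the spirit of the book's style, but the exact code here is an illustrative assumption rather than a quotation from it.

```python
from typing import List

Vector = List[float]  # a type alias used purely as documentation

def dot(v: Vector, w: Vector) -> float:
    """Sum of element-wise products of two equal-length vectors."""
    return sum(v_i * w_i for v_i, w_i in zip(v, w))

print(dot([1.0, 2.0], [3.0, 4.0]))  # 11.0
```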

What makes Data Science from Scratch a bit different from other data science textbooks is its unique approach of doing everything from scratch. Instead of introducing you to the NumPy and Pandas functions that will calculate coefficients and, say, mean absolute error (MAE) and mean squared error (MSE), Grus shows you how to code them yourself.
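In that spirit, a from-scratch version of those two error metrics might look like the following; this is a sketch of the general approach, not the book's exact code.

```python
def mean_absolute_error(y_true, y_pred):
    """Average absolute difference between predictions and actual values."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def mean_squared_error(y_true, y_pred):
    """Average squared difference; penalises large errors more heavily."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

print(mean_absolute_error([3, 5, 7], [2, 5, 9]))  # 1.0
print(mean_squared_error([3, 5, 7], [2, 5, 9]))   # ~1.67
```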

He does, of course, remind you that the book's sample code is meant for practice and education and will not match the speed and efficiency of professional libraries. At the end of each chapter, he provides references to documentation and tutorials of the Python libraries that correspond to the topic you have just learned. But the from-scratch approach is fun nonetheless, especially if you're one of those I-have-to-know-what-goes-on-under-the-hood type of people.

One thing you'll have to consider before diving into this book: you'll need to bring your math skills with you. In the book, Grus codes fundamental math functions, starting from simple vector math and going up to more advanced statistical concepts such as calculating standard deviations, errors, and gradient descent. However, he assumes that you already know how the math works. I guess it's okay if you're fine with just copy-pasting the code and seeing it work. But if you've picked up this book because you want to make sense of everything, then have your calculus textbook handy.
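As an illustration of the kind of from-scratch math the book works through, here is a bare-bones gradient descent on a single-variable function; the quadratic objective is an assumption chosen purely for clarity.

```python
def gradient_descent(grad, start, lr=0.1, steps=100):
    """Repeatedly step against the gradient until (hopefully) reaching a minimum."""
    x = start
    for _ in range(steps):
        x -= lr * grad(x)
    return x

# Minimise f(x) = (x - 3)^2, whose gradient is 2 * (x - 3); the minimum is at x = 3.
print(gradient_descent(lambda x: 2 * (x - 3), start=0.0))  # approximately 3.0
```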

After the basics, Data Science from Scratch goes into machine learning, covering various algorithms, including the different flavors of regression models and decision trees. You also get to delve into the basics of neural networks followed by a chapter on deep learning and an introduction to natural language processing.

In short, I would describe Data Science with Python as a fully hands-on introduction to data science and machine learning. It's the most practice-driven book on data science and machine learning that I've read. The authors have done a great job of bringing together the right data samples and practice code to get you acquainted with the principles of data science and machine learning.

The book contains minimal theoretical content and mostly teaches you by taking you through coding labs. If you have a decent computer and an installation of Anaconda or another Python distribution that comes bundled with Jupyter Notebooks, then you can probably go through all the exercises with minimal effort. I highly recommend writing the code yourself and avoiding copy-pasting it from the book or sample files, since the entire goal of the book is to learn through practice.

You'll find no Python intro here. You'll dive straight into NumPy, Pandas, and scikit-learn. There's also no deep dive into mathematical concepts such as correlations, error calculations, z-scores, etc., so you'll need to get help from your math book whenever you need a refresher on any of the topics.

Alternatively, you can just type in the code and see Python's libraries work their magic. Data Science with Python does a decent job of showing you how to put together the right pieces for any data science and machine learning project.

Data Science with Python provides a solid intro to data preparation and visualization, and then takes you through a rich assortment of machine learning algorithms as well as deep learning. There are plenty of good examples and templates you can use for other projects. The book also gives an intro to XGBoost, a very useful gradient boosting library, and the Keras neural network library. You'll also get to fiddle around with convolutional neural networks (CNNs), the cornerstone of current advances in computer vision.
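As a taste of what those chapters involve, here is a minimal Keras CNN sketch in the spirit of the book's labs; the MNIST dataset and the specific layer sizes are illustrative choices, not the book's exact exercise, and the snippet assumes TensorFlow is installed.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Load a small, well-known image dataset and scale pixel values to [0, 1].
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None] / 255.0
x_test = x_test[..., None] / 255.0

# A tiny convolutional network: convolution -> pooling -> dense classifier.
model = models.Sequential([
    layers.Conv2D(16, 3, activation="relu", input_shape=(28, 28, 1)),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=1, validation_data=(x_test, y_test))
```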

Before starting this book, I strongly recommend that you go through a gentler introductory book that covers more theory, such as Ozdemir's Principles of Data Science. It will make the ride less confusing. The combination of the two will leave you with a very strong foundation to tackle more advanced topics.

These are just three of the many data science books that are out there. If you've read other awesome books on the topic, please share your experience in the comments section. There are also plenty of great interactive online courses, like Udemy's Machine Learning A-Z: Hands-On Python & R In Data Science (I will be reviewing this one in the coming weeks).

While an intro to data science will give you a good foothold into the world of machine learning and the broader field of artificial intelligence, there's a lot of room for expanding that knowledge.

To build on this foundation, you can take a deeper dive into machine learning. There are plenty of good books and courses out there. One of my favorites is Aurelien Geron's Hands-on Machine Learning with Scikit-Learn, Keras & TensorFlow (also scheduled for review in the coming months). You can also go deeper into one of the sub-disciplines of ML and deep learning, such as CNNs, NLP or reinforcement learning.

Artificial intelligence is complicated, confusing, and exciting at the same time. The best way to understand it is to never stop learning.

View original post here:
3 books to get started on data science and machine learning - TechTalks

2020: The year of seeing clearly on AI and machine learning – ZDNet

Tom Foremski

Late last year, I complained to Richard Socher, chief scientist at Salesforce and head of its AI projects, about the term "artificial intelligence," arguing that we should use more accurate terms such as machine learning or smart machine systems, because "AI" creates unreasonably high expectations when the vast majority of applications are essentially extremely specialized machine learning systems that do specific tasks -- such as image analysis -- very well but do nothing else.

Socher said that when he was a post-graduate it rankled him also, and he preferred other descriptions such as statistical machine learning. He agrees that the "AI" systems that we talk about today are very limited in scope and misidentified, but these days he thinks of AI as being "Aspirational Intelligence." He likes the potential for the technology even if it isn't true today.

I like Socher's designation of AI as Aspirational Intelligence, but I'd prefer not to further confuse the public, politicians and even philosophers about what AI is today: it is nothing more than software in a box -- a smart machine system that has no human qualities or understanding of what it does. It's a specialized machine that has nothing to do with systems that these days are called Artificial General Intelligence (AGI).

Before ML systems co-opted it, the term AI was used to describe what AGI is used to describe today: computer systems that try to mimic humans, their rational and logical thinking, and their understanding of language and cultural meanings to eventually become some sort of digital superhuman, which is incredibly wise and always able to make the right decisions.

There has been a lot of progress in developing ML systems but very little progress on AGI. Yet the advances in ML are being attributed to advances in AGI. And that leads to confusion and misunderstanding of these technologies.

Machine learning systems, unlike AGI, do not try to mimic human thinking -- they use very different methods to train themselves on large amounts of specialist data and then apply their training to the task at hand. In many cases, ML systems make decisions without any explanation, and it is difficult to determine the value of their black-box decisions. But if those results are presented as artificial intelligence, they get far higher respect from people than they likely deserve.

For example, when ML systems are used in applications such as recommending prison sentences but are described as artificial intelligence systems, they gain higher regard from the people using them. It implies that the system is smarter than any judge. But if the term machine learning were used, it would underline that these are fallible machines and allow people to treat the results with some skepticism in key applications.

Even if we do develop future advanced AGI systems we should continue to encourage skepticism and we should lower our expectations for their abilities to augment human decision making. It is difficult enough to find and apply human intelligence effectively -- how will artificial intelligence be any easier to identify and apply? Dumb and dumber do not add up to a genius. You cannot aggregate IQ.

As things stand today, the mislabeled AI systems are being discussed as if they were well on their way to jumping from highly specialized non-human tasks to becoming full AGI systems that can mimic human thinking and logic. This has resulted in warnings from billionaires and philosophers that those future AI systems will likely kill us all -- as if a sentient AI would conclude that genocide is rational and logical. It certainly might appear to be a winning strategy if the AI system were trained on human behavior across recorded history, but that would never happen.

There is no rational logic for genocide. Future AI systems would be designed to love humanity and be programmed to protect and avoid human injury. They would likely operate very much in the vein of Richard Brautigan's 1967 poem All Watched Over By Machines Of Loving Grace -- the last stanza:

I like to think
(it has to be!)
of a cybernetic ecology
where we are free of our labors
and joined back to nature,
returned to our mammal
brothers and sisters,
and all watched over
by machines of loving grace.

Let us not fear AI systems and in 2020, let's be clear and call them machine learning systems -- because words matter.

Original post:
2020: The year of seeing clearly on AI and machine learning - ZDNet

Machine learning and eco-consciousness key business trends in 2020 – Finfeed

In 2020, small to medium sized businesses (SMBs) are likely to focus more on supporting workers to travel and collaborate in ways that suit them, while still facing a clear economic imperative to keep costs under control.

This will likely involve increased use of technologies such as machine learning and automation to: help determine and enforce spending policies; ensure people travelling for work can optimise, track, and analyse their spend; and prioritise travel options that meet goals around environmental responsibility and sustainability.

Businesses that recognise and respond to these trends will be better-placed to save money while improving employee engagement and performance, according to SAP Concur.

Fabian Calle, General Manager, Small to Medium Business, ANZ, SAP Concur, said, "As the new decade begins, the business environment will be subject to the same economic ups and downs seen in the previous decade. However, with new technologies and approaches, most businesses will be able to leverage automation and even artificial intelligence to smooth out those peaks and troughs."

SAP Concur has identified the top five 2020 predictions for SMBs, covering economics, technology, business, travel, the environment, diversity, and corporate social responsibility:

Calle said, "2020 will continue to drive significant developments as organisations of all sizes look to optimise efficiency and productivity through employee operations and satisfaction. Australian businesses need to be aware of these trends and adopt cutting-edge technology to facilitate their workers' need to travel and collaborate more effectively and with less effort."

Original post:
Machine learning and eco-consciousness key business trends in 2020 - Finfeed

Can machine learning take over the role of investors? – TechHQ

As we dive deeper into the Fourth Industrial Revolution, there is no disputing how technology serves as a catalyst for growth and innovation for many businesses across a range of functions and industries.

But one technology that is steadily gaining prominence across organizations is machine learning (ML).

In the simplest terms, ML is the science of getting computers to learn and act like humans do without being explicitly programmed. It is a form of artificial intelligence (AI) and entails feeding machines data, enabling the computer program to learn autonomously and enhance its accuracy in analyzing data.

The proliferation of technology means AI is now commonplace in our daily lives, with its presence in a panoply of things, such as driverless vehicles, facial recognition devices, and in the customer service industry.

Currently, asset managers are exploring the potential that AI/ML systems can bring to the finance industry; close to 60 percent of managers predict that ML will have a medium-to-large impact across businesses.

ML's ability to analyze large data sets and continuously self-develop through trial and error translates to increased speed and better performance in data analysis for financial firms.

For instance, according to the Harvard Business Review, ML can spot potentially outperforming equities by identifying new patterns in existing data sets and examining the collected responses of CEOs in quarterly earnings calls of the S&P 500 companies over the past 20 years.

Following this, ML can then formulate a review of good and bad stocks, thus providing organizations with valuable insights to drive important business decisions. This data also paves the way for the system to assess the trustworthiness of forecasts from specific company leaders and compare the performance of competitors in the industry.
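A drastically simplified version of that kind of text-driven screen can be sketched with standard Python tooling; the transcript snippets and labels below are invented placeholders, and real systems rely on far larger datasets and richer features.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder earnings-call snippets labelled by subsequent stock performance (1 = outperformed).
transcripts = [
    "record revenue growth and strong guidance for next quarter",
    "headwinds, restructuring charges and weaker than expected demand",
    "margin expansion driven by new product adoption",
    "guidance withdrawn amid supply chain disruption",
]
outperformed = [1, 0, 1, 0]

# Turn text into features, then fit a simple classifier over them.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(transcripts, outperformed)

print(model.predict(["strong demand and raised full-year guidance"]))  # likely [1]
```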

Besides that, ML also has the capacity to analyze various forms of data, including sound and images. In the past, such formats of information were challenging for computers to analyze, but today's ML algorithms can process images faster and better than humans.

For example, analysts use GPS locations from mobile devices to map foot traffic at retail hubs, or refer to point-of-sale data to trace revenues during major holiday seasons. Hence, data analysts can leverage this technological advancement to identify trends and new areas for investment.

It is evident that ML is full of potential, but it still has some big shoes to fill if it is to replace the role of an investor.

Nishant Kumar aptly explained this in Bloomberg: "Financial data is very noisy, markets are not stationary and powerful tools require deep understanding and talent that's hard to get. One quantitative analyst, or quant, estimates the failure rate in live tests at about 90 percent. Man AHL, a quant unit of Man Group, needed three years of work to gain enough confidence in a machine-learning strategy to devote client money to it. It later extended its use to four of its main money pools."

In other words, human talent and supervision are still essential to developing the right algorithm and exercising sound investment judgment. After all, the purpose of a machine is to automate repetitive tasks. In this context, ML may seek out correlations in data without understanding their underlying rationale.

One ML expert said his team spends days evaluating whether patterns found by ML are sensible, predictive, consistent, and additive. Even if a pattern meets all four criteria, it may not bear much significance in supporting profitable investment decisions.

The bottom line is that ML can streamline data analysis steps, but it cannot replace human judgment. Thus, active equity managers should invest in ML systems to remain competitive in this "innovate or die" era. Financial firms that successfully recruit professionals with the right data skills and sharp investment judgment stand to be at the forefront of the digital economy.

See original here:

Can machine learning take over the role of investors? - TechHQ

How Will Your Hotel Property Use Machine Learning in 2020 and Beyond? | – Hotel Technology News

Every hotel should ask the same question. How will our property use machine learning? It's not just a matter of gaining a competitive advantage; it's imperative in order to stay in business. By Jason G. Bryant, Founder and CEO, Nor1 - 1.9.2020

Artificial intelligence (AI) implementation has grown 270% over the past four years and 37% in the past year alone, according to Gartner's 2019 CIO Survey of more than 3,000 executives. About the ubiquity of AI and machine learning (ML), Gartner VP Chris Howard notes, "If you are a CIO and your organization doesn't use AI, chances are high that your competitors do and this should be a concern" (VentureBeat). Hotels may not have CIOs, but any business not seriously considering the implications of ML throughout the organization will find itself in multiple binds, from the inability to offer next-level guest service to operational inefficiencies.

Amazon is the poster child for a sophisticated company that is committed to machine learning both in offers (personalized commerce) and behind the scenes in its facilities. Amazon Founder & CEO Jeff Bezos attributes much of Amazon's ongoing financial success and competitive dominance to machine learning. Further, he has suggested that the entire future of the company rests on how well it uses AI. However, as Forbes contributor Kathleen Walsh notes, "There is no single AI group at Amazon. Rather, every team is responsible for finding ways to utilize AI and ML in their work." It is common knowledge that all senior executives at Amazon plan, write, and adhere to a six-page business plan. A piece of every business plan for every business function is devoted to answering the question: How will you utilize machine learning this year?

Every hotel should ask the same question. How will our property use machine learning? It's not just a matter of gaining a competitive advantage; it's imperative in order to stay in business. In the 2017 Deloitte State of Cognitive Survey, which canvassed 1,500 mostly C-level executives, not a single survey respondent believed that cognitive technologies would not drive substantive change. Put more simply: every executive in every industry knows that AI is fundamentally changing the way we do business, both in services/products as well as operations. Further, 94% reported that artificial intelligence would substantially transform their companies within five years, most believing the transformation would occur by 2020.

Playing catch-up with this technology can be competitively dangerous as there is significant time between outward-facing results (when you realize your competition is outperforming you) and how long it will take you to achieve similar results and employ a productive, successful strategy. Certainly, revenue management and pricing will be optimized by ML, but operations, guest service, maintenance, loyalty, development, energy usage, and almost every single aspect of the hospitality enterprise will be impacted as well. Any facility where the speed and precision of tactical decision making can be improved will be positively impacted.

Hotels are quick to think that ML means robotic housekeepers and facial recognition kiosks. While these are possibilities, ML can do so much more. Here are just a few of the ways hotels are using AI to save money, improve service, and become more efficient.

Hilton's Energy Program

The LightStay program at Hilton predicts energy, water, and waste usage and costs. The company can track actual consumption against predictive models, which allows them to manage year-over-year performance as well as performance against competitors. Further, some hotel brands can link in-room energy to the PMS so that when a room is empty, the air conditioner automatically turns off. The future of sustainability in the hospitality industry relies on ML to shave every bit off of energy usage and budget. For brands with hundreds and thousands of properties, every dollar saved on energy can affect the bottom line in a big way.

IHG & Human Resources

IHG employs 400,000 people across 5,723 hotels. Holding fast to the idea that the ideal guest experience begins with staff, IHG implemented AI strategies to "find the right team member who would best align and fit with each of the distinct brand personalities," notes Hazel Hogben, Head of HR, Hotel Operations, IHG Europe. To create brand personas and algorithms, IHG assessed its top customer-facing senior managers across brands using cognitive, emotional, and personality assessments. They then correlated this with KPI and customer data. Finally, this was cross-referenced with values at the different brands. "The algorithms are used to create assessments to test candidates for hire against the personas using gamification-based tools," according to The People Space. Hogben notes that in addition to improving the candidate experience (they like the gamification of the experience), it has also helped in eliminating personal or preconceived bias among recruiters. Regarding ML uses for hiring, Harvard Business Review says that in addition to combatting human bias by automatically flagging biased language in job descriptions, ML also identifies highly qualified candidates who might have been overlooked because they didn't fit traditional expectations.

Accor Hotels Upgrades

A 2018 study showed that 70% of hotels say they never or only sometimes promote upgrades or upsells at check-in (PhocusWire). In an effort to maximize the value of premium inventory and increase guest satisfaction, Accor Hotels partnered with Nor1 to implement eStandby Upgrade. With the ML-powered technology, Accor Hotels offers guests personalized upgrades based on previous guest behavior, at a price the guest has shown a demonstrated willingness to pay, at booking and during the pre-arrival period, up to 24 hours before check-in. This allows the brand to monetize and leverage room features that can't otherwise be captured by standard room category definitions, and to optimize the allocation of inventory available on the day of arrival. ML technology can create offers at any point during the guest pathway, including the front desk. Rather than replacing agents, as some hotels fear, it helps them make better, quicker decisions about what to offer guests.

Understanding Travel Reviews

The luxury Dorchester Collection wanted to understand what makes their high-end guests tick. Instead of using the traditional secret shopper methods, which don't tell hotels everything they need to know about their experience, Dorchester Collection opted to analyze traveler feedback from across major review sites using ML. Much to their surprise, they discovered Dorchester's guests care a great deal more about breakfast than they thought. They also learned that guests want to customize breakfast, so they removed the breakfast menu and allowed guests to order whatever they like. As it turns out, guests love this.

In his May 2019 Google I/O address, Google CEO Sundar Pichai said, "Thanks to advances in AI, Google is moving beyond its core mission of organizing the world's information. We are moving from a company that helps you find answers to a company that helps you get things done" (ZDNet). Pichai has long held that we no longer live in a mobile-first world; we now inhabit an AI-first world. Businesses must necessarily pivot with this shift, evolving processes and products, and sometimes evolving the business model, as in Google's case.

Hotels that embrace ML across operations will find that the technologies improve processes in substantive ways. ML improves the guest experience and increases revenue with precision decisioning and analysis across finance, human resources, marketing, pricing and merchandising, and guest services. Though the Hiltons, Marriotts, and IHGs of the hotel world are at the forefront of adoption, ML technologies are accessible, both in price and implementation, for the full range of properties. The time has come to ask every hotel department: How will you use AI this year?

For more about machine learning and its impact on the hotel industry, download Nor1's ebook The Hospitality Executive's Guide to Machine Learning: Will You Be a Leader, Follower, or Dinosaur?

Jason G. Bryant, Nor1 Founder and CEO, oversees day-to-day operations and provides visionary leadership and strategic direction for the upsell technology company. With Jason at the helm, Nor1 has matured into the technology leader in upsell solutions. Headquartered in Silicon Valley, Nor1 provides innovative revenue enhancement solutions to the hospitality industry that focus on the intersection of machine learning, guest engagement and operational efficiency. A seasoned entrepreneur, Jason has over 25 years' experience building and leading international software development and operations organizations.


Read more:
How Will Your Hotel Property Use Machine Learning in 2020 and Beyond? | - Hotel Technology News

Pioneering deep learning in the cyber security space: the new standard? – Information Age

Applying deep learning in the cyber security space has many benefits, such as the prediction of unknown threats and zero time classification

Will cyber security solutions move from machine learning to deep learning?

The use of deep learning in the cyber security space is an emerging trend. But it has the potential to transform a security model that is currently broken, by predicting new attacks before they've breached an organisation's network or device.

"Cyber security has a coronavirus situation every day," said Jonathan Kaftzan, VP Marketing at Deep Instinct, during his presentation as part of the latest IT Press Tour.

Deep learning neural models can predict new variations of existing cyber attacks that occur daily, while the majority of current solutions on the market can only detect infected systems or anomalies, then contain and remediate them; this is costly and unsustainable.

Deep learning technology, a subset of machine learning algorithms (which is itself a subset of artificial intelligence algorithms), can predict and protect organisations from known and unknown cyber attacks in real-time, while mitigating the problem of false positives. It is changing how organisations build and manage their cyber security stack.

Many traditional solutions, as well, can only protect specific domains or operating systems (one vendor for Windows and Android, for example). A single platform that can handle any threat at any time is more viable.

Before delving into deep learning and cyber security, it's important to identify why the cyber security model is broken.

From 2008 to 2018, the number of data breaches doubled, from 636 to 1,244. Files and records exposed also jumped from 35.7 million to 446.5 million over that same time period.

The problem is getting worse, despite investment in cyber security increasing by 30%. Gartner has predicted that the market will be worth $248.6 billion by 2023, according to Statista.

"There has been a huge growth in cyber security investment, but nothing has improved. In fact, it's got worse. The cyber security model is not working," stated Kaftzan.


1. Volume: more than 350,000 new malicious programmes are created every day (mostly by machines). It is easy to modify existing malware and create a completely new cyber attack. It is overwhelming.

2. A question of when, not if: 67% of CIOs thought the possibility of their company experiencing a data breach or cyber attack in 2018 was a given, according to a survey from the Ponemon Institute.

3. Cost: a big breach can cost an enterprise between $40 million and $350 million.

Detection time is costly.

4. Skills shortage in cyber: 69% of organisations say their cyber security teams are understaffed, while there will be as many as 3.5 million unfilled positions in the industry by 2021.

5. Complexity of cyber attacks: the level of sophistication and complexity of cyber attacks is increasing. AI-based malware and adversarial learning (using a neural network (DL or ML) to attack another neural network) are also beginning to threaten networks.

By using a deep learning neural network algorithm, organisations can detect and prevent known and unknown cyber security threats in real-time.

Referring to Deep Instinct's platform, Kaftzan said: "The time it takes us to analyse a file, before you've even clicked it and even if we've never seen it before, and assess whether it is malicious or not is 20 milliseconds. In another 50 milliseconds we'll be able to tell you where the attack has come from and what it is, autonomously, without any human being involved in the process. The time it takes to remediate and contain the attack is under a minute."

SE Labs tested the solution and found it had a 100% prevention score with 0% false positives (a false positive occurs when the system flags a security vulnerability that you do not have).

HP is an investor and strategic partner of Deep Instinct. It has installed the technology in all the laptops it sells to the enterprise market: "Millions of new laptops [HP Sure Sense, powered by Deep Instinct] are protected using our technology," added Kaftzan.


Predictions of unknown threats.

Zero time prediction and detection.

Zero time classification.

Works across any device, operating system or file.

Doesn't rely on a connection (edge deployment).

Deep learning is a sub-category in a family of algorithms under machine learning, while machine learning is a broad set of algorithms under artificial intelligence.

Everyone is talking about AI, but it has been around for many years. You can define the technology as a system that mimics human intelligence by making decisions. There are many forms of human intelligence, such as "if you do a, then b will happen"; many systems are already using this type of rule-based decision-making.

"In this definition, using AI is the norm," continued Kaftzan.

Machine learning was developed in the 1980s. Here, the algorithms could learn from datasets and make decisions based on that.

"Machine learning couldn't improve on human challenges until 10 years ago, when deep learning neural networks were introduced. They became available because of better infrastructure (GPUs)," explained Kaftzan.

Machine learning is reliant on the human stack. It is limited, therefore, by the data (under 2% of data is analysed), the human (lack of knowledge and expertise), adversaries (mutating and growing cyber attacks) and the size of the datasets.

Attackers can also hide the malicious features through things like encryption, commonly known as feature obfuscation.

An end-to-end deep learning framework, however, is the only algorithm in the AI family that can analyse and make assumptions from all the raw data without human involvement.

Deep learning solutions produce over 99% accuracy with unknown malware and 0.0001% false positives, compared to traditional ML, which produces 50-70% accuracy with unknown malware and 1-2% false positives.

There is a growing prevalence of deep learning in real-world solutions, but the technology is lagging in cyber security:

Computer vision: 98% deep learning, 2% traditional machine learning.

Speech recognition: 80% deep learning, 20% traditional machine learning.

Text understanding: 65% deep learning, 35% traditional machine learning.

Cyber security: 2% deep learning, 98% traditional machine learning.

An end-to-end deep learning framework can predict the mutations of existing malware and prevent them in real-time, before the impact can be felt on a device or network. The algorithm is developed entirely in C/C++ and is optimised using NVIDIA GPUs for training (before it is deployed on endpoints using regular CPUs).

Deep Instinct is pioneering deep learning in the cyber security space. But there are a number of challenges. The main one is that there are not enough experts who can build deep learning algorithms; they are all snapped up by the big companies, such as Baidu (speech recognition) and Google (NLP), straight after university.

Read the original here:
Pioneering deep learning in the cyber security space: the new standard? - Information Age

Machine Learning as a Service Market 2020 Size, Share, Technological Innovations & Growth Forecast To 2026 – Daily Science

The Machine Learning as a Service Market report provides pin-point analysis of the Machine Learning as a Service industry: Capacity, Production, Value, Consumption and Status (2014-2019) and Six-Year Forecast (2020-2026). Besides, the Machine Learning as a Service market research report is enriched with analysis of worldwide competition by topmost prime manufacturers (Amazon, Oracle Corporation, IBM, Microsoft Corporation, Google Inc., Salesforce.Com, Tencent, Alibaba, UCloud, Baidu, Rackspace, SAP AG, Century Link Inc., CSC (Computer Science Corporation), Heroku, Clustrix, Xeround), providing information such as Company Profiles, Product Picture and Specification, Product Details, Capacity, Price, Cost, Gross Consumption, Revenue and contact information for better understanding. In addition, this report discusses the key drivers influencing Market Growth, Opportunities, the Challenges and the Risks faced by key manufacturers and the market as a whole.

Machine Learning as a Service Market Major Factors: Machine Learning as a Service Market Overview, Machine Learning as a Service Market Analysis by Application, Economic Impact on Market, Market Competition, Industrial Chain, Sourcing Strategy and Downstream Buyers, Machine Learning as a Service Market Effect, Factors, Analysis, Machine Learning as a Service Market Forecast, Marketing Strategy Analysis, Distributors/Traders.

Get Free Sample PDF (including full TOC, Tables and Figures) of Machine Learning as a Service @ https://www.researchmoz.us/enquiry.php?type=S&repid=2302143

Summation of Machine Learning as a Service Market: Machine learning is a field of artificial intelligence that uses statistical techniques to give computer systems the ability to learn (e.g., progressively improve performance on a specific task) from data, without being explicitly programmed.

Based on Product Type, the Machine Learning as a Service market report displays the manufacture, profits, value, market segment and growth rate of each type, covering:

Private clouds, public clouds, hybrid cloud

Based on end users/applications, the Machine Learning as a Service market report focuses on the status and outlook for major applications/end users, sales volume, market share and growth rate for each application; this can be divided into:

Personal, Business

Do You Have Any Query Or Specific Requirement? Ask Our Industry Expert @ https://www.researchmoz.us/enquiry.php?type=E&repid=2302143

The report offers in-depth assessment of the growth and other aspects of the Machine Learning as a Service market in important countries (regions), including:

The key insights of the Machine Learning as a Service Market report:

The report provides Key Statistics on the Market Status of the Machine Learning as a Service market manufacturers and is a valuable source of guidance and direction for companies and individuals interested in the industry.

The Machine Learning as a Service market report provides a basic overview of the industry including its definition, applications and manufacturing technology.

The report presents the Company Profile, Product Specifications, Capacity, Production Value, and 2013-2020 market shares for key vendors.

The total Machine Learning as a Service market is further divided By Company, By Country, And By Application/Type for the competitive landscape analysis.

The report estimates 2020-2026 market Development Trends of Machine Learning as a Service industry.

Analysis of Upstream Raw Materials, Downstream Demand, And Current Market Dynamics is also carried out

The report makes some important proposals for a new project of Machine Learning as a Service Industry before evaluating its feasibility.

Contact:

ResearchMoz, Mr. Nachiket Ghumare, Tel: +1-518-621-2074, USA-Canada Toll Free: 866-997-4948

Browse More Reports @ https://www.mytradeinsight.blogspot.com/

Read more:
Machine Learning as a Service Market 2020 Size, Share, Technological Innovations & Growth Forecast To 2026 - Daily Science

Machine Learning in Finance Market Provides in-depth analysis of the Industry, with Current Trends and Future Estimations to Elucidate the Investment…

The Global Machine Learning in Finance Market Research report provided by Market Expertz is a detailed study of the Global Machine Learning in Finance Market, which covers all the necessary information required by a new market entrant as well as the existing players to gain a deeper understanding of the market. The Global Machine Learning in Finance Market report is segmented in terms of regions, product type, applications, key players, and several other essential factors. The report also covers the global market scenario, providing deep insights into the cost structure of the product, production and manufacturing processes, and other essential factors.

The report also covers the global market scenario, highlighting the pricing of the product, production and consumption volume, cost analysis, industry value, barriers and growth drivers, dominant market players, demand and supply ratio of the market, the growth rate of the market and forecast till 2026.

Get PDF Sample copy of the Machine Learning in Finance Market Report 2020 @ https://www.marketexpertz.com/sample-enquiry-form/86930

The report includes accurately drawn facts and figures, along with graphical representations of vital market data. The research report sheds light on the emerging market segments and significant factors influencing the growth of the industry to help investors capitalize on the existing growth opportunities.

In market segmentation by manufacturers, the report covers the following companies-

Ignite Ltd, Yodlee, Trill A.I., MindTitan, Accenture, ZestFinance, Others

Get to know the business better: The global Machine Learning in Finance market research is carried out at the different stages of the business lifecycle, from the production of a product, cost, launch and application, to consumption volume and sale. The research offers valuable insights into the marketplace from the beginning, including some sound business plans chalked out by prominent market leaders to establish a strong foothold and expand their products into ones that are better than others.

In market segmentation by types of Machine Learning in Finance, the report covers-

Supervised Learning, Unsupervised Learning, Semi-Supervised Learning, Reinforcement Learning, Others

In market segmentation by applications of the Machine Learning in Finance, the report covers the following uses-

Banks, Securities Companies, Others

Order Your Copy Now (Customized report delivered as per your specific requirement) @ https://www.marketexpertz.com/checkout-form/86930

A conscious effort is made by the subject matter experts to analyze how some business owners succeed in maintaining a competitive edge while others fail to do so, which makes the research interesting. A quick review of the realistic competitors makes the overall study a lot more interesting. Opportunities that are helping product owners size up their business further add value to the overall study.

With this global Machine Learning in Finance market research report, all the manufacturers and vendors will be aware of the growth factors, shortcomings, opportunities, and threats that the market has to offer in the forecast period. The report also highlights the revenue, industry size, types, applications, players share, production volume, and consumption to gain a proper understanding of the demand and supply chain of the market.

Years that have been considered for the study of this report are as follows:

Major Geographies mentioned in this report are as follows:

Avail discounts while purchasing this report @ https://www.marketexpertz.com/discount-enquiry-form/86930

The complete downstream and upstream essentials and value chains are carefully studied in this report. Current trends that are impacting and controlling the global Machine Learning in Finance market growth like globalization, industrialization, regulations, and ecological concerns are mentioned extensively. The Global Machine Learning in Finance market research report also contains technical data, raw materials, volumes, and manufacturing analysis of Machine Learning in Finance. It explains which product has the highest penetration in which market, their profit margins, break-even analysis, and R&D status. The report makes future projections for the key opportunities based on the analysis of the segment of the market.


For more details on the Machine Learning in Finance Report, click here @ https://www.marketexpertz.com/industry-overview/machine-learning-in-finance-market


Visit link:
Machine Learning in Finance Market Provides in-depth analysis of the Industry, with Current Trends and Future Estimations to Elucidate the Investment...

Google open-sources framework that reduces AI training costs by up to 80% – VentureBeat

Google researchers recently published a paper describing SEED RL, a framework that scales AI model training to thousands of machines. They say that it could facilitate training at millions of frames per second on a single machine while reducing costs by up to 80%, potentially leveling the playing field for startups that couldn't previously compete with large AI labs.

Training sophisticated machine learning models in the cloud remains prohibitively expensive. According to a recent Synced report, the University of Washington's Grover, which is tailored for both the generation and detection of fake news, cost $25,000 to train over the course of two weeks. OpenAI racked up $256 per hour to train its GPT-2 language model, and Google spent an estimated $6,912 training BERT, a bidirectional transformer model that redefined the state of the art for 11 natural language processing tasks.

SEED RL, which is based on Google's TensorFlow 2.0 framework, features an architecture that takes advantage of graphics cards and tensor processing units (TPUs) by centralizing model inference. To avoid data transfer bottlenecks, it performs AI inference centrally with a learner component that trains the model using input from distributed inference. The target model's variables and state information are kept local, while observations are sent to the learner at every environment step, and latency is kept to a minimum thanks to a network library based on gRPC, the open source universal RPC framework.
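
To make that division of labor concrete, here is a minimal single-process sketch of the centralized-inference pattern the article describes. The class and method names are hypothetical and the policy is a random stand-in, so this illustrates the idea rather than reproducing the open-sourced SEED RL code, where the inference call would be a gRPC stream between machines.

```python
# Schematic sketch of centralized inference: actors step environments, the learner
# holds the model and chooses every action. Names here are illustrative, not SEED RL's API.
import numpy as np

class Learner:
    """Holds the model variables, serves inference for all actors, and trains."""
    def __init__(self, num_actions: int):
        self.num_actions = num_actions
        self.trajectories = []  # experience collected for training

    def infer(self, actor_id: int, observation: np.ndarray) -> int:
        # In SEED RL this call crosses the network; only the observation travels,
        # while the model variables never leave the learner.
        action = int(np.random.randint(self.num_actions))  # stand-in for a policy network
        self.trajectories.append((actor_id, observation, action))
        return action

class Actor:
    """Steps the environment only; all neural-network inference happens on the learner."""
    def __init__(self, actor_id: int, learner: Learner):
        self.actor_id, self.learner = actor_id, learner
        self.observation = np.zeros(4)  # stand-in for an environment reset

    def step(self) -> int:
        action = self.learner.infer(self.actor_id, self.observation)
        self.observation = np.random.randn(4)  # stand-in for the environment transition
        return action

learner = Learner(num_actions=6)
actors = [Actor(i, learner) for i in range(4)]  # thousands of these in practice
for _ in range(3):
    for actor in actors:
        actor.step()
print("transitions collected by the learner:", len(learner.trajectories))
```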

SEED RL's learner component can be scaled across thousands of cores (e.g., up to 2,048 on Cloud TPUs), and the number of actors, which iterate between taking steps in the environment and running inference on the model to predict the next action, can scale up to thousands of machines. One algorithm, V-trace, predicts an action distribution from which an action can be sampled, while another, R2D2, selects an action based on the predicted future value of that action.
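
The contrast between those two action-selection styles can be shown in a few lines. The sketch below uses made-up numbers and is not the V-trace or R2D2 implementation; it only illustrates sampling from a predicted action distribution versus picking the action with the highest predicted value.

```python
# Illustrative contrast between the two action-selection styles; numbers are made up.
import numpy as np

rng = np.random.default_rng(0)

# V-trace style: the policy outputs a probability distribution over actions,
# and the agent samples an action from that distribution.
logits = np.array([1.2, 0.3, -0.5, 0.9])       # hypothetical policy logits
probs = np.exp(logits) / np.exp(logits).sum()  # softmax -> action distribution
sampled_action = rng.choice(len(probs), p=probs)

# R2D2 style: the network predicts the future value (Q-value) of each action,
# and the agent picks the action with the highest predicted value.
q_values = np.array([0.7, 2.1, -0.2, 1.4])     # hypothetical per-action value estimates
greedy_action = int(np.argmax(q_values))

print("sampled from distribution:", sampled_action, "| highest predicted value:", greedy_action)
```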

To evaluate SEED RL, the research team benchmarked it on the commonly used Arcade Learning Environment, several DeepMind Lab environments, and the Google Research Football environment. They say that they managed to solve a previously unsolved Google Research Football task and that they achieved 2.4 million frames per second with 64 Cloud TPU cores, an 80-fold improvement over the previous state-of-the-art distributed agent.

"This results in a significant speed-up in wall-clock time and, because accelerators are orders of magnitude cheaper per operation than CPUs, the cost of experiments is reduced drastically," wrote the coauthors of the paper. "We believe SEED RL, and the results presented, demonstrate that reinforcement learning has once again caught up with the rest of the deep learning field in terms of taking advantage of accelerators."

Read more:
Google open-sources framework that reduces AI training costs by up to 80% - VentureBeat