10 Business Functions That Are Ready To Use Artificial Intelligence – Forbes

In the grand scheme of things, artificial intelligence (AI) is still in the very early stages of adoption by most organizations. However, many leaders are eager to implement AI across their companies' business functions and start realizing its extraordinary benefits. While we have no way of knowing all the ways artificial intelligence and machine learning will ultimately impact business functions, here are 10 business functions that are ready to use artificial intelligence.

10 Business Functions That Are Ready To Use Artificial Intelligence

Marketing

If your company isn't using artificial intelligence in marketing, it's already behind. Not only can AI help develop marketing strategies, it's also instrumental in executing them. AI already sorts customers by interest or demographic, targets ads based on browsing history, powers recommendation engines, and is a critical tool for giving customers what they want exactly when they want it. Another way AI is used in marketing is through chatbots, which can help solve problems, suggest products or services, and support sales. Artificial intelligence also supports marketers by analyzing data on consumer behavior faster and more accurately than humans can. These insights can help businesses adjust marketing campaigns to make them more effective, or plan better for the future.

Sales

There is definitely a side of selling products and services that is uniquely human, but artificial intelligence can arm sales professionals with insights that improve the sales function. AI helps improve sales forecasting, predict customer needs, and improve communication. Intelligent machines can also help sales professionals manage their time, identify who they need to follow up with and when, and spot which customers might be ready to convert.

Research and Development (R&D)

What about artificial intelligence as a tool of innovation? It can help us build a deeper understanding of nearly any industry, including healthcare and pharmaceuticals, finance, automotive, and more, while collecting and analyzing tremendous amounts of information efficiently and accurately. AI and machine learning can help us research problems and develop solutions we've never thought of before. AI can automate many tasks, but it will also open the door to novel discoveries and new ways of improving products, services, and processes. Artificial intelligence helps R&D activities become more strategic and effective.

IT Operations

Also called AIOps, AI for IT operations is often the first experience many organizations have with implementing artificial intelligence internally. Gartner defines AIOps as the application of machine learning and data science to IT operations problems. AI is commonly used to analyze errors in IT system log files, to support IT systems management functions, and to automate many routine processes. It can help identify issues so the IT team can proactively fix them before any IT systems go down. As the IT systems that support our businesses become more complex, AIOps helps IT teams improve system performance and services.

Human Resources

In a business function with "human" in the name, is there a place for machines? Yes! Artificial intelligence has the potential to transform many human resources activities, from recruitment to talent management. AI can certainly help improve efficiency and save money by automating repetitive tasks, but it can do much more. PepsiCo used a robot, Robot Vera, to phone and interview candidates for open sales positions. Talent will come to expect a personalized experience from their employer, just as they have become accustomed to in shopping and entertainment, and machine learning and AI solutions can help provide it. In addition, AI can help human resources departments make data-based decisions and streamline candidate screening and the recruitment process. Chatbots can also be used to answer many common questions about company policies and benefits.

Contact Centers

The contact center of an organization is another business area where artificial intelligence is already in use. Organizations that use AI technology to enhance rather than replace humans in these tasks are the ones incorporating artificial intelligence the right way. These centers collect a tremendous amount of data that can be used to learn more about customers, predict customer intent, and improve the "next best action" for better customer engagement. The unstructured data collected from contact centers can also be analyzed with machine learning to uncover customer trends and then improve products and services.

Building Maintenance

Another way AI is already at work in businesses today is in helping facilities managers optimize energy use and occupant comfort. Building automation, the use of artificial intelligence to help manage buildings and control lighting and heating/cooling systems, uses internet-of-things devices, sensors, and computer vision to monitor buildings. Based on the data collected, the AI system can adjust a building's systems to accommodate the number of occupants, the time of day, and more, helping facilities managers improve the building's energy efficiency. An additional component of many of these systems is building security as well.

Manufacturing

Heineken, along with many other companies, uses data analytics at every stage of the manufacturing process, from the supply chain to tracking inventory on store shelves. Predictive intelligence can anticipate demand and ramp production up or down, while sensors on equipment can predict maintenance needs. AI helps flag areas of concern in the manufacturing process before costly issues erupt. Machine vision can also support quality control at manufacturing facilities.

Accounting and Finance

Many organizations find the promise of cost reductions and more efficient operations the major appeal of artificial intelligence in the workplace, and according to Accenture Consulting, robotic process automation can produce impressive results in these areas for accounting and finance departments. Human finance professionals will be freed up from repetitive tasks to focus on higher-level activities, while the use of AI in accounting will reduce errors. AI can also give organizations a real-time view of their financial status, because it can monitor communications through natural language processing.

Customer Experience

Another way artificial intelligence technology and big data are used in business today is to improve the customer experience. Luxury fashion brand Burberry uses big data and AI to enhance sales and customer relationships. The company gathers shoppers' data through loyalty and reward programs, which it then uses to offer tailored recommendations whether customers are shopping online or in brick-and-mortar stores. Innovative uses of chatbots during industry events are another way to provide a stellar customer experience.

For more on AI and technology trends, see Bernard Marr's book Artificial Intelligence in Practice: How 50 Companies Used AI and Machine Learning To Solve Problems and his forthcoming book Tech Trends in Practice: The 25 Technologies That Are Driving the 4th Industrial Revolution, which is available to pre-order now.

Excerpt from:
10 Business Functions That Are Ready To Use Artificial Intelligence - Forbes

What are the top AI platforms? – Gigabit Magazine – Technology News, Magazine and Website

Business Overview

Microsoft AI is a platform used to develop AI solutions in conversational AI, machine learning, data sciences, robotics, IoT, and more.

Microsoft AI prides itself on driving innovation through projects such as protecting wildlife, better brewing, feeding the world, and preserving history.

Its Cognitive Services offering is described as "a comprehensive family of AI services and cognitive APIs to help you build intelligent apps."

Executives

Tom Bernard Krake is the Azure Cloud Executive at Microsoft, responsible for leveraging and evaluating the Azure platform. Tom is joined by a team of experienced executives to optimise the Azure platform and oversee the many cognitive services that it provides.

Notable customers

Uber uses Cognitive Services to boost its security through facial recognition to ensure that the driver using the app matches the user that is on file.

KPMG helps financial institutions save millions in compliance costs through the use of Microsoft's Cognitive Services. It does this by transcribing and logging thousands of hours of calls, reducing compliance costs by as much as 80 per cent.

Jet.com uses Cognitive Services to provide answers to its customers by infusing its customer chatbot with the intelligence to communicate using natural language.

The services:

Decision - Make smarter decisions faster through anomaly detectors, content moderators and personalizers.

Language - Extract meaning from unstructured text through the immersive reader, language understanding, QnA Maker, text analytics and translator text.

Speech - Integrate speech processing into apps and services through speech-to-text, text-to-speech, speech translation and speaker recognition.

Vision - Identify and analyse content within images, videos and digital ink through computer vision, custom vision, face, form recogniser, ink recogniser and video indexer.

Web Search - Find what you are looking for across the web through autosuggest, custom search, entity search, image search, news search, spell check, video search, visual search and web search.

Read more from the original source:
What are the top AI platforms? - Gigabit Magazine - Technology News, Magazine and Website

AI Is Changing Work and Leaders Need to Adapt – Harvard Business Review

Executive Summary

Recent empirical research by the MIT-IBM Watson AI Lab provides new insight into how work is changing in the face of AI. Based on this research, the author provides a roadmap for leaders intent on adapting their workforces and reallocating capital while also delivering profitability. The author argues that the key to unlocking the productivity potential while delivering on business objectives lies in three key strategies: rebalancing resources, investing in workforce reskilling and, on a larger scale, advancing new models of education and lifelong learning.

As AI is increasingly incorporated into our workplaces and daily lives, it is poised to fundamentally upend the way we live and work. Concern over this looming shift is widespread. A recent survey of 5,700 Harvard Business School alumni found that 52% of even this elite group believe the typical company will employ fewer workers three years from now.

The advent of AI poses new and unique challenges for business leaders. They must continue to deliver financial performance, while simultaneously making significant investments in hiring, workforce training, and new technologies that support productivity and growth. These seemingly competing business objectives can make for difficult, often agonizing, leadership decisions.

Against this backdrop, recent empirical research by our team at the MIT-IBM Watson AI Lab provides new insight into how work is changing in the face of AI. By examining these findings, we can create a roadmap for leaders intent on adapting their workforces and reallocating capital, while also delivering profitability.

The stakes are high. AI is an entirely new kind of technology, one that has the ability to anticipate future needs and provide recommendations to its users. For business leaders, that unique capability has the potential to increase employee productivity by taking on administrative tasks, providing better pricing recommendations to sellers, and streamlining recruitment, to name a few examples.

For business leaders navigating the AI workforce transition, the key to unlocking the productivity potential while delivering on business objectives lies in three key strategies: rebalancing resources, investing in workforce reskilling and, on a larger scale, advancing new models of education and lifelong learning.

Our research report offers a window into how AI will change workplaces through the rebalancing and restructuring of occupations. Using AI and machine learning techniques, our MIT-IBM Watson AI Lab team analyzed 170 million online job posts between 2010 and 2017. The study's first implication: while occupations change slowly, over years and even decades, tasks become reorganized at a much faster pace.

Jobs are a collection of tasks. As workers take on jobs in various professions and industries, it is the tasks they perform that create value. With the advancement of technology, some existing tasks will be replaced by AI and machine learning. But our research shows that only 2.5% of jobs include a high proportion of tasks suitable for machine learning. These include positions like usher, lobby attendant, and ticket taker, where the main tasks involve verifying credentials and allowing only authorized people to enter a restricted space.

Most tasks will still be best performed by humans whether craft workers like plumbers, electricians and carpenters, or those who do design or analysis requiring industry knowledge. And new tasks will emerge that require workers to exercise new skills.

As this shift occurs, business leaders will need to reallocate capital accordingly. Broad adoption of AI may require additional research and development spending. Training and reskilling employees will very likely require temporarily removing workers from revenue-generating activities.

More broadly, salaries and other forms of employee compensation will need to reflect the shifting value of tasks all along the organization chart. Our research shows that as technology reduces the cost of some tasks because they can be done in part by AI, the value workers bring to the remaining tasks increases. Those tasks tend to require grounding in intellectual skill and insight, something AI isn't as good at as people.

In high-wage business and finance occupations, for example, compensation for tasks requiring industry knowledge increased by more than $6,000, on average, between 2010 and 2017. By contrast, average compensation for manufacturing and production tasks fell by more than $5,000 during that period. As AI continues to reshape the workplace, business leaders who are mindful of this shifting calculus will come out ahead.

Companies today are held accountable not only for delivering shareholder value, but for positively impacting stakeholders such as customers, suppliers, communities and employees. Moreover, investment in talent and other stakeholders is increasingly considered essential to delivering long-term financial results. These new expectations are reflected in the Business Roundtable's recently revised statement on corporate governance, which underscores corporations' obligation to support employees through training and education that help develop new skills for a rapidly changing world.

Millions of workers will need to be retrained or reskilled as a result of AI over the next three years, according to a recent IBM Institute for Business Value study. Technical training will certainly be a necessary component. As tasks requiring intellectual skill, insight and other uniquely human attributes rise in value, executives and managers will also need to focus on preparing workers for the future by fostering and growing people skills such as judgement, creativity and the ability to communicate effectively. Through such efforts, leaders can help their employees make the shift to partnering with intelligent machines as tasks transform and change in value.

As AI continues to scale within businesses and across industries, it is incumbent upon innovators and business leaders to understand not only the business process implications, but also the societal impact. Beyond the need for investment in reskilling within organizations today, executives should work alongside policymakers and other public and private stakeholders to provide support for education and job training, encouraging investment in training and reskilling programs for all workers.

Our research shows that technology can disproportionately impact the demand and earning potential for mid-wage workers, causing a squeeze on the middle class. For every five tasks that shifted out of mid-wage jobs, we found, four tasks moved to low-wage jobs and one moved to a high-wage job. As a result, wages are rising faster in the low- and high-wage tiers than in the mid-wage tier.

New models of education and pathways to continuous learning can help address the growing skills gap, providing members of the middle class, as well as students and a broad array of mid-career professionals, with opportunities to build in-demand skills. Investment in all forms of education is key: community college, online learning, apprenticeships, or programs like P-TECH, a public-private partnership designed to prepare high school students for "new collar" technical jobs like cloud computing and cybersecurity.

Whether it is workers who are asked to transform their skills and ways of working, or leaders who must rethink everything from resource allocation to workforce training, fundamental economic shifts are never easy. But if AI is to fulfill its promise of improving our work lives and raising living standards, senior leaders must be ready to embrace the challenges ahead.

See more here:
AI Is Changing Work and Leaders Need to Adapt - Harvard Business Review

Neural networks facilitate optimization in the search for new materials – MIT News

When searching through theoretical lists of possible new materials for particular applications, such as batteries or other energy-related devices, there are often millions of potential materials that could be considered, and multiple criteria that need to be met and optimized at once. Now, researchers at MIT have found a way to dramatically streamline the discovery process, using a machine learning system.

As a demonstration, the team arrived at a set of the eight most promising materials, out of nearly 3 million candidates, for an energy storage system called a flow battery. This culling process would have taken 50 years by conventional analytical methods, they say, but they accomplished it in five weeks.

The findings are reported in the journal ACS Central Science, in a paper by MIT professor of chemical engineering Heather Kulik, Jon Paul Janet PhD '19, Sahasrajit Ramesh, and graduate student Chenru Duan.

The study looked at a set of materials called transition metal complexes. These can exist in a vast number of different forms, and Kulik says they "are really fascinating, functional materials that are unlike a lot of other material phases. The only way to understand why they work the way they do is to study them using quantum mechanics."

To predict the properties of any one of millions of these materials would require either time-consuming and resource-intensive spectroscopy and other lab work, or time-consuming, highly complex physics-based computer modeling for each possible candidate material or combination of materials. Each such study could consume hours to days of work.

Instead, Kulik and her team took a small number of different possible materials and used them to teach an advanced machine-learning neural network the relationship between the materials' chemical compositions and their physical properties. That knowledge was then applied to generate suggestions for the next generation of possible materials to be used for the next round of training of the neural network. Through four successive iterations of this process, the neural network improved significantly each time, until reaching a point where it was clear that further iterations would not yield any further improvements.
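The train-suggest-evaluate loop described above can be sketched in a few lines of Python. Everything here is a toy stand-in, not the study's actual setup: a one-dimensional design space instead of millions of complexes, a nearest-neighbor surrogate instead of a neural network, and a cheap function in place of hours of quantum-mechanical simulation. The shape of the iteration, however, is the same:

```python
def expensive_property(x):
    """Stand-in for a costly evaluation (e.g. hours of physics-based modeling)."""
    return -(x - 6.5) ** 2  # toy objective whose optimum lies near x = 6.5

def surrogate(x, labeled):
    """Cheap model: predict the value of the nearest already-evaluated point."""
    nearest = min(labeled, key=lambda k: (abs(k - x), k))
    return labeled[nearest]

candidates = list(range(13))          # the full (tiny) design space
labeled = {0: expensive_property(0),  # a small initial training set
           12: expensive_property(12)}

for _ in range(4):                    # four successive iterations, as in the study
    pool = [x for x in candidates if x not in labeled]
    # Suggest the most promising unevaluated candidate (ties go to smaller x)...
    pick = max(pool, key=lambda x: (surrogate(x, labeled), -x))
    labeled[pick] = expensive_property(pick)  # ...then evaluate it for real

best = max(labeled, key=labeled.get)
print(best, labeled[best])  # the best candidate found, near the true optimum
```

After only a handful of expensive evaluations, the loop homes in on the neighborhood of the optimum, which is the economy the researchers exploited at vastly larger scale.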

This iterative optimization system greatly streamlined the process of arriving at potential solutions that satisfied the two conflicting criteria being sought. The set of best available solutions in such situations, where improving one factor tends to worsen the other, is known as a Pareto front: a graph of points such that any further improvement of one factor would make the other worse. In other words, the graph represents the best possible compromise points, depending on the relative importance assigned to each factor.
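Once candidates have been scored on both criteria, the Pareto front described above can be computed directly. Here is a minimal sketch in plain Python; the candidate scores are invented for illustration, not the study's data, with each candidate treated as a (solubility, energy_density) pair where higher is better on both axes:

```python
def pareto_front(candidates):
    """Return the candidates not dominated on either objective.

    A candidate is dominated if some other (distinct) candidate is at
    least as good on both objectives and strictly better on at least one.
    Assumes candidates are distinct pairs.
    """
    front = []
    for c in candidates:
        dominated = any(
            o != c and o[0] >= c[0] and o[1] >= c[1]
            for o in candidates
        )
        if not dominated:
            front.append(c)
    return front

# Hypothetical (solubility, energy_density) scores for six candidates.
scores = [(0.9, 0.2), (0.7, 0.5), (0.5, 0.6), (0.4, 0.9), (0.3, 0.3), (0.6, 0.4)]
print(pareto_front(scores))  # the non-dominated trade-off points
```

Here (0.3, 0.3) and (0.6, 0.4) drop out because (0.7, 0.5) beats them on both axes; the four survivors are exactly the compromise points where gaining solubility costs energy density.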

Training typical neural networks requires very large data sets, ranging from thousands to millions of examples, but Kulik and her team were able to use this iterative process, based on the Pareto front model, to streamline the process and provide reliable results using only a few hundred samples.

In the case of screening for the flow battery materials, the desired characteristics were in conflict, as is often the case: The optimum material would have high solubility and a high energy density (the ability to store energy for a given weight). But increasing solubility tends to decrease the energy density, and vice versa.

Not only was the neural network able to rapidly come up with promising candidates, it was also able to assign levels of confidence to its different predictions through each iteration, which helped refine the sample selection at each step. "We developed a better than best-in-class uncertainty quantification technique for really knowing when these models were going to fail," Kulik says.

The challenge they chose for the proof-of-concept trial was materials for use in redox flow batteries, a type of battery that holds promise for large, grid-scale batteries that could play a significant role in enabling clean, renewable energy. Transition metal complexes are the preferred category of materials for such batteries, Kulik says, but there are too many possibilities to evaluate by conventional means. They started out with a list of 3 million such complexes before ultimately whittling that down to the eight good candidates, along with a set of design rules that should enable experimentalists to explore the potential of these candidates and their variations.

Through that process, the neural net "both gets increasingly smarter about the [design] space, but also increasingly pessimistic that anything beyond what we've already characterized can further improve on what we already know," she says.

Apart from the specific transition metal complexes suggested for further investigation using this system, she says, the method itself could have much broader applications. "We do view it as the framework that can be applied to any materials design challenge where you're really trying to address multiple objectives at once. You know, all of the most interesting materials design challenges are ones where you have one thing you're trying to improve, but improving that worsens another. And for us, the redox flow battery redox couple was just a good demonstration of where we think we can go with this machine learning and accelerated materials discovery."

For example, optimizing catalysts for various chemical and industrial processes is another kind of such complex materials search, Kulik says. Presently used catalysts often involve rare and expensive elements, so finding similarly effective compounds based on abundant and inexpensive materials could be a significant advantage.

"This paper represents, I believe, the first application of multidimensional directed improvement in the chemical sciences," she says. "But the long-term significance of the work is in the methodology itself, because of things that might not be possible at all otherwise. You start to realize that even with parallel computations, these are cases where we wouldn't have come up with a design principle in any other way. And these leads that are coming out of our work, these are not necessarily at all ideas that were already known from the literature or that an expert would have been able to point you to."

"This is a beautiful combination of concepts in statistics, applied math, and physical science that is going to be extremely useful in engineering applications," says George Schatz, a professor of chemistry and of chemical and biological engineering at Northwestern University, who was not associated with this work. He says this research addresses how to do machine learning when there are multiple objectives. "Kulik's approach uses leading edge methods to train an artificial neural network that is used to predict which combination of transition metal ions and organic ligands will be best for redox flow battery electrolytes."

Schatz says this method can be used in many different contexts, so it has the potential to transform machine learning, which is a major activity around the world.

The work was supported by the Office of Naval Research, the Defense Advanced Research Projects Agency (DARPA), the U.S. Department of Energy, the Burroughs Wellcome Fund, and the AAAS Marion Milligan Mason Award.

Follow this link:
Neural networks facilitate optimization in the search for new materials - MIT News

Deep Learning: What You Need To Know – Forbes


During the past decade, deep learning has seen groundbreaking developments in the field of AI (Artificial Intelligence). But what is this technology? And why is it so important?

Well, let's first get a definition of deep learning. Here's how Kalyan Kumar, Corporate Vice President and Chief Technology Officer of IT Services at HCL Technologies, describes it: "Have you ever wondered how our brain can recognize the face of a friend whom you met years ago, or recognize the voice of your mother among so many other voices in a crowded marketplace, or how our brain can learn, plan and execute complex day-to-day activities? The human brain has around 100 billion cells called neurons. These build massively parallel and distributed networks, through which we learn and carry out complex activities. Inspired by these biological neural networks, scientists started building artificial neural networks so that computers could eventually learn and exhibit intelligence like humans."

Think of it this way: you first start with a huge amount of unstructured data, say videos. Then you use a sophisticated model that processes this information and tries to determine underlying patterns, which are often not detectable by people.

"During training, you define the number of neurons and layers your neural network will be comprised of and expose it to labeled training data," said Brian Cha, a Product Manager and deep learning evangelist at FLIR Systems. "With this data, the neural network learns on its own what is good or bad. For example, if you want the neural network to grade fruits, you would show it images of fruits labeled Grade A, Grade B, Grade C, and so on. The neural network uses this training data to extract and assign weights to features that are unique to fruits labelled good, such as ideal size, shape, color, consistency of color and so on. You don't need to manually define these characteristics or even program what is too big or too small; the neural network trains itself using the training data. The process of evaluating new images using a neural network to make decisions is called inference. When you present the trained neural network with a new image, it will provide an inference, such as Grade A with 95% confidence."
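The "Grade A with 95% confidence" style of inference Cha describes typically comes from a softmax layer, which converts the network's raw output scores (logits) into a probability distribution over the possible grades. A minimal sketch in plain Python, where the logit values and grade labels are invented for illustration rather than taken from any real fruit-grading system:

```python
import math

def softmax(scores):
    """Convert raw network outputs (logits) into probabilities summing to 1."""
    exps = [math.exp(s - max(scores)) for s in scores]  # shift by max for numerical stability
    total = sum(exps)
    return [e / total for e in exps]

def infer(logits, labels):
    """Return the top label and its confidence for one image's logits."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    return labels[best], probs[best]

grades = ["Grade A", "Grade B", "Grade C"]
label, confidence = infer([4.5, 1.2, 0.0], grades)
print(f"{label} with {confidence:.0%} confidence")  # prints: Grade A with 95% confidence
```

The confidence is simply the largest softmax probability, which is why a well-separated logit (4.5 versus 1.2 and 0.0 here) yields a high-confidence prediction.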

What about the algorithms? According to Bob Friday, CTO of Mist Systems, a Juniper Networks company: "There are two kinds of popular neural network models for different use cases: the Convolutional Neural Network (CNN) model is used in image-related applications, such as autonomous driving, robots and image search. Meanwhile, the Recurrent Neural Network (RNN) model is used in most Natural Language Processing-based (NLP) text or voice applications, such as chatbots, virtual home and office assistants and simultaneous interpreters, and in networking for anomaly detection."

Of course, deep learning requires lots of sophisticated tools. But the good news is that many are available, and some, like TensorFlow, PyTorch and Keras, are even free.

"There are also cloud-based server computer services," said Ali Osman Örs, Director of AI Strategy and Strategic Partnerships for ADAS at NXP Semiconductors. "These are referred to as Machine Learning as a Service (MLaaS) solutions. The main providers include Amazon AWS, Microsoft Azure, and Google Cloud."

Because of the enormous data loads and complex algorithms, there is usually a need for sophisticated hardware infrastructure. Keep in mind that it can sometimes take days to train a model.

"The unpredictable process of training neural networks requires rapid on-demand scaling of virtual machine pools," said Brent Schroeder, Chief Technology Officer at SUSE. "Container-based deep learning workloads managed by Kubernetes can easily be deployed to different infrastructure depending upon the specific needs. An initial model can be developed on a small local cluster, or even an individual workstation with a Jupyter Notebook. But then as training needs to scale, the workload can be deployed to large, scalable cloud resources for the duration of the training. This makes Kubernetes clusters a flexible, cost-effective option for training different types of deep learning workloads."

Deep learning models have also proven to be quite efficient and accurate. "Probably the biggest advantage of deep learning over most other machine learning approaches is that the user does not need to worry about trimming down the number of features used," said Noah Giansiracusa, an Assistant Professor of Mathematical Sciences at Bentley University. "With deep learning, since the neurons are being trained to perform conceptual tasks, such as finding edges in a photo, or facial features within a face, the neural network is in essence figuring out on its own which features in the data itself should be used."

Yet there are some notable drawbacks to deep learning. One is cost. "Deep learning networks may require hundreds of thousands or millions of hand-labeled examples," said Evan Tann, CTO and co-founder of Thankful. "It is extremely expensive to train in fast timeframes, as serious players will need commercial-grade GPUs from Nvidia that easily exceed $10k each."

Deep learning is also essentially a black box. This means it can be nearly impossible to understand how the model really works.

"This can be particularly problematic in applications that require such documentation, like FDA approval of drugs and medical devices," said Dr. Ingo Mierswa, the Founder of RapidMiner.

And yes, there are some ongoing complexities with deep learning models, which can create bad outcomes. "Say a neural network is used to identify cats from images," said Yuheng Chen, COO of rct studio. "It works perfectly, but when we want it to identify cats and dogs at the same time, its performance collapses."

But then again, rapid progress continues, as companies invest substantial amounts into deep learning. For the most part, things are still very much in the nascent stages.

"The power of deep learning is what allows seamless speech recognition, image recognition, and automation and personalization across every possible industry today, so it's safe to say that you are already experiencing the benefits of deep learning," said Sajid Sadi, VP of Research at Samsung and Head of the Think Tank Team.

Tom (@ttaulli) is the author of Artificial Intelligence Basics: A Non-Technical Introduction and The Robotic Process Automation Handbook: A Guide to Implementing RPA Systems.

Read the original:
Deep Learning: What You Need To Know - Forbes

What Researchers Say on Machine Learning With COVID-19 – TechiExpert.com

COVID-19 will change how most of us live and work, at least temporarily. It's also creating a test for tech companies such as Facebook, Twitter, and Google, which usually depend on lots and lots of human work to moderate content. Are AI and machine learning advanced enough to help these companies handle the disruption?

It's notable that, even though Facebook has instituted a general work-from-home policy to protect its workers (alongside Google and a rising number of other firms), it initially required the contractors who moderate content to keep coming into the office. That situation changed only after protests, according to The Intercept.

Facebook is now paying those contractors while they sit at home, since the nature of their work (scanning people's posts for content that violates Facebook's terms of service) is extremely security-sensitive. Here's Facebook's statement:

"For both our full-time employees and contract workforce, there is some work that cannot be done from home due to safety, privacy, and legal reasons. We have taken precautions to protect our workers by cutting down the number of people in any given office, implementing recommended work from home globally, physically spreading people out at any given office, and doing additional cleaning. Given the rapidly evolving public health concerns, we are taking additional steps to protect our teams. We will be working with our partners over the course of this week to send all contractors who perform content review home, until further notice. We'll ensure that all workers are paid during this time."

Facebook, Twitter, Reddit, and other companies are in the same proverbial boat: there's an increasing need to police their platforms, if only to weed out fake news about COVID-19. Yet the moderators who handle such tasks can't do so from home, particularly on their own workstations. The potential solution? Artificial intelligence (AI) and machine learning algorithms designed to examine questionable content and decide whether to eliminate it.

Here's Google's statement on the issue, via its YouTube Creator Blog:

"Our Community Guidelines enforcement today is based on a combination of people and technology: machine learning identifies potentially harmful content and then sends it to human reviewers for assessment. As a result of the new measures we're taking, we will temporarily start relying more on technology to help with some of the work normally done by reviewers. This means automated systems will start removing some content without human review, so we can continue to act quickly to remove violative content and protect our ecosystem, while we have workplace protections in place."

The tech industry has been traveling in this direction for some time. Depending on multitudes of people to review every piece of content on the web is expensive, time-consuming, and prone to error. But AI and machine learning are still early-stage technologies, despite the hype. Google itself, in the aforementioned blog post, pointed out that its automated systems may flag the wrong videos. Facebook is also drawing criticism that its automated anti-spam system is whacking the wrong posts, including posts that share essential information on the spread of COVID-19.

If the COVID-19 crisis drags on, more organizations will surely turn to machine learning as a potential answer to interruptions in their workflows and other processes. That will drive a steep learning curve: time and again, the rollout of AI platforms has shown that, while the potential of the technology is there, implementation is often a rough and expensive process. Just look at Google Duplex.

In any case, an aggressive embrace of AI will also create more opportunities for those technologists who have mastered AI and machine learning skills of any kind; these people may find themselves tasked with figuring out how to automate core processes to keep organizations running.

Before the virus emerged, Burning Glass (which analyzes millions of job postings from across the US) estimated that jobs involving AI would grow 40.1 percent over the next decade. That rate could climb even higher if the crisis fundamentally changes how people around the world live and work. (The median salary for these positions is $105,007; for those with a PhD, it drifts up to $112,300.)

When it comes to infectious diseases, prevention, surveillance, and rapid-response efforts can go a long way toward slowing or stalling outbreaks. When a pandemic such as the current coronavirus outbreak occurs, it creates enormous challenges for government and public health officials who must gather information quickly and coordinate a response.

In such a situation, machine learning can play an immense role in predicting an outbreak and minimizing or slowing its spread.

AI algorithms can help mine news reports and online content from around the world, helping experts recognize anomalies before they reach epidemic proportions. The coronavirus outbreak itself is a striking example: researchers applied machine learning to flight-passenger data to predict where the novel coronavirus could spring up next. A National Geographic report shows how scanning the web and social media can help detect the early stages of an outbreak.

Practical use of predictive modeling could represent a major leap forward in the fight to rid the world of some of the most infectious diseases. Big data analytics can help break down silos and enable the timely analysis of far-reaching data sets generated through the Internet of Things (IoT) and mobile devices in real time.

Artificial intelligence and big data analytics also have a significant role to play in modern genome sequencing techniques.

Recently, we have all seen powerful images of healthcare professionals across the globe working tirelessly to treat COVID-19 patients, often putting their own lives at risk. AI could play a critical role in easing their workload while ensuring that the quality of care doesn't suffer. For example, Tampa General Hospital in Florida is using AI to detect fever in visitors with a simple facial scan. AI is also assisting doctors at the Sheba Medical Center.

The role of AI and big data in tackling global pandemics and other healthcare challenges is only set to grow. It therefore comes as no surprise that demand for professionals with AI skills has more than doubled in recent years. For professionals working in healthcare technology, learning the applications of AI in healthcare and building the right skill sets will prove critical.

As AI rapidly becomes mainstream, healthcare is undoubtedly an area where it will play a significant role in keeping us safer and healthier.

The question of how machine learning can help control the COVID-19 pandemic is being put to experts in artificial intelligence (AI) all over the world.

AI tools can help in multiple ways. They are being used to predict the spread of the coronavirus, map its genetic evolution as it transmits from human to human, speed up diagnosis, and aid the development of potential treatments, while also helping policymakers cope with related issues such as the impact on transport, food supplies, and travel.

In every one of these cases, however, AI is only potent if it has sufficient data. Because COVID-19 has taken the world into uncharted territory, the deep learning systems that computers use to acquire new capabilities don't have the data they need to produce useful outputs.

"Machine learning is good at predicting generic behaviour, but isn't very good at extrapolating to a crisis situation when almost everything that happens is new," warns Leo Kärkkäinen, a professor at the Department of Electrical Engineering and Automation at Aalto University, Helsinki, and a fellow at Nokia's Bell Labs. "If people react in new ways, then AI can't predict it. Until you have seen it, you can't learn from it."

Despite this caveat, Kärkkäinen says effective AI-based numerical models are playing a significant role in helping policymakers understand how COVID-19 is spreading and when the rate of infections will peak. By drawing on data from the field, such as the number of deaths, AI models can help identify how many infections have gone undetected, he adds, referring to unrecorded cases that are still infectious. That information can then be used to inform the establishment of quarantine zones and other social distancing measures.

AI-based diagnostics developed for related conditions can also be quickly repurposed for diagnosing COVID-19 infections. Behold.ai, which has an algorithm for automatically identifying both cancerous lung lesions and collapsed lungs on X-rays, reported on Monday that the algorithm can quickly flag chest X-rays from COVID-19 patients as abnormal. Triage of this kind could accelerate diagnosis and ensure resources are allocated appropriately.

The urgent need to understand which policy interventions are effective against COVID-19 has driven several governments to award grants to harness AI quickly. One recipient is David Buckeridge, a professor in the Department of Epidemiology, Biostatistics and Occupational Health at McGill University in Montreal. Armed with a grant of C$500,000 (323,000), his team is combining natural language processing technology with AI tools such as neural networks (sets of algorithms designed to recognize patterns) to analyze more than two million traditional and social media reports on the spread of the coronavirus from all over the world. "This is unstructured free text; traditional techniques can't manage it," Buckeridge said. "We need to extract a timeline from online media that shows what's working where, accurately."

The team at McGill is using a mix of supervised and unsupervised machine learning techniques to distill the key pieces of information from the online media reports. Supervised learning involves feeding a neural network data that has been annotated, whereas unsupervised learning uses only raw data. "We need a framework for bias: different media sources have different perspectives, and there are different government controls," says Buckeridge. "Humans are good at recognizing that, but it needs to be built into the AI models."

The information extracted from the news reports will be combined with other data, such as COVID-19 case reports, to give policymakers and health experts a much more complete picture of how and why the virus is spreading differently in different countries. "This is applied research in which we will be looking for significant answers fast," Buckeridge noted. "We should have some results of significance to public health in April."

AI can also be used to help identify people who may be unknowingly infected with COVID-19. Chinese tech company Baidu says its new AI-enabled infrared sensor system can screen the temperature of people nearby and quickly determine whether they may have a fever, one of the symptoms of the coronavirus. In an 11 March article in the MIT Technology Review, Baidu said the technology is being used in Beijing's Qinghe Railway Station to identify passengers who are potentially infected, and that it can examine up to 200 people in a single minute without disrupting passenger flow. A report from the World Health Organization on how China has responded to the coronavirus says the country has also used big data and AI to strengthen contact tracing and the management of priority populations.

AI tools are also being deployed to better understand the biology and chemistry of the coronavirus and pave the way for the development of effective treatments and a vaccine. For instance, start-up BenevolentAI says its AI-derived knowledge graph of structured medical information has enabled the identification of a potential therapeutic. In a letter to The Lancet, the company described how its algorithms queried this graph to identify a group of approved drugs that could inhibit the viral infection of cells. BenevolentAI concluded that the drug baricitinib, which is approved for the treatment of rheumatoid arthritis, could be useful in countering COVID-19 infections, subject to appropriate clinical testing.

Similarly, US biotech Insilico Medicine is using AI algorithms to design new molecules that could limit COVID-19's ability to replicate in cells. In a paper published in February, the company says it has exploited recent advances in deep learning to remove the need to manually design features and to learn nonlinear mappings between molecular structures and their biological and pharmacological properties. A total of 28 AI models generated molecular structures and optimized them with reinforcement learning, using a scoring system that reflected the desired characteristics, the researchers said.

Some of the world's best-resourced software organizations are also taking on this challenge. DeepMind, the London-based AI specialist owned by Google's parent company Alphabet, believes its neural networks can speed up the typically painstaking process of solving the structures of viral proteins. It has developed two methods for training neural networks to predict the properties of a protein from its genetic sequence. "We hope to contribute to the scientific effort by releasing structure predictions of several under-studied proteins associated with SARS-CoV-2, the virus that causes COVID-19," the company said. "These can help researchers build an understanding of how the virus functions and be used in drug discovery."

The pandemic has driven enterprise software company Salesforce to diversify into life sciences, with a study showing that AI models can learn the language of biology, just as they can learn speech and image recognition. The idea is that the AI system will then be able to design proteins, or recognize complex proteins, that have particular properties, which could be used to treat COVID-19.

Salesforce fed the amino acid sequences of proteins and their associated metadata into its ProGen AI system. The system takes each training sample and plays a game in which it tries to predict the next amino acid in the sequence.

"By the end of training, ProGen has become an expert at predicting the next amino acid, having played this game approximately one trillion times," said Ali Madani, a researcher at Salesforce. "ProGen can then be used in practice for protein generation by iteratively predicting the next most likely amino acid and generating new proteins it has never seen before." Salesforce is now looking to partner with biologists to apply the technology.
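A toy illustration of the "next amino acid" game Madani describes. ProGen itself is a large neural language model; this sketch substitutes a simple bigram counter over made-up sequences purely to show the prediction-then-generation loop.

```python
from collections import Counter, defaultdict

# Made-up toy "protein" sequences standing in for real training data.
training_seqs = ["MKTAYIAK", "MKTAYLAK", "MKSAYIAK"]

# Count which residue tends to follow which in the training sequences.
follows = defaultdict(Counter)
for seq in training_seqs:
    for a, b in zip(seq, seq[1:]):
        follows[a][b] += 1

def generate(start="M", length=8):
    """Grow a sequence by repeatedly predicting the most likely next residue."""
    seq = start
    while len(seq) < length and follows[seq[-1]]:
        seq += follows[seq[-1]].most_common(1)[0][0]
    return seq

print(generate())
```

The generated string resembles, but need not copy, any training sequence, which is the essence of what Madani means by producing "new proteins it has never seen before".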

As governments and health organizations scramble to contain the spread of coronavirus, they need all the help they can get, including from machine learning. Even though present AI technologies are far from replicating human intelligence, they are proving useful in tracking the outbreak, diagnosing patients, disinfecting areas, and speeding up the search for a cure for COVID-19.

Data science and AI may be two of the best weapons we have in the fight against the coronavirus outbreak.

Just before the turn of the year, BlueDot, an artificial intelligence platform that tracks infectious diseases around the globe, flagged a cluster of unusual pneumonia cases occurring around a market in Wuhan, China. Nine days later, the World Health Organization (WHO) released a statement declaring the discovery of a novel coronavirus in a hospitalized person with pneumonia in Wuhan.

BlueDot uses natural language processing and machine learning algorithms to scrutinize information from hundreds of sources for early signs of infectious epidemics. The AI looks at statements from health organizations, commercial flights, livestock health reports, climate data from satellites, and news reports. With so much data being generated on coronavirus every day, the AI algorithms can help home in on the bits that provide pertinent information on the spread of the virus. It can also find important correlations between data points, such as the movement patterns of the people living in the areas most affected by the virus.

The company also employs dozens of experts specializing in a range of disciplines, including geographic information systems, spatial analytics, data visualization, and computer science, as well as medical experts in infectious diseases, travel and tropical medicine, and public health. The experts review the information flagged by the AI and publish reports on their findings.

Combined with the help of human experts, BlueDot's AI can not only predict the start of an epidemic, but also forecast how it will spread. In the case of COVID-19, the AI successfully identified the cities where the virus would be carried after it surfaced in Wuhan. Machine learning algorithms studying travel patterns were able to predict where the people who had contracted the coronavirus were likely to travel.

AI algorithms can now perform similar monitoring at scale. An AI system developed by Chinese tech giant Baidu uses cameras equipped with computer vision and infrared sensors to estimate people's temperatures in public areas. The system can screen up to 200 people per minute and detect their temperature to within 0.5 degrees Celsius. The AI flags anyone with a temperature above 37.3 degrees. The technology is now in use in Beijing's Qinghe Railway Station.
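The decision rule at the end of that pipeline is simple to state. This hypothetical sketch (the hard part in the real system is the computer vision, not this step) flags anyone whose estimated temperature exceeds the 37.3 °C threshold the article cites:

```python
# Threshold reported for Baidu's screening system (degrees Celsius).
FEVER_THRESHOLD_C = 37.3

def flag_fevers(readings):
    """readings: list of (person_id, temperature_celsius) tuples.
    Returns the ids whose temperature strictly exceeds the threshold."""
    return [pid for pid, temp in readings if temp > FEVER_THRESHOLD_C]

print(flag_fevers([("a", 36.6), ("b", 37.8), ("c", 37.3)]))  # ['b']
```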

Alibaba, another Chinese tech giant, has built an AI system that can detect coronavirus in chest CT scans. According to the researchers who developed it, the AI has 96-percent accuracy. It was trained on data from 5,000 coronavirus cases and can perform the test in 20 seconds, as opposed to the 15 minutes it takes a human expert to diagnose a patient. It can also tell the difference between coronavirus and ordinary viral pneumonia. The algorithm can give a boost to medical centers that are already under a lot of strain to screen patients for COVID-19 infection. The system is reportedly being adopted by 100 hospitals in China.

A separate AI developed by researchers from Renmin Hospital of Wuhan University, Wuhan EndoAngel Medical Technology Company, and the China University of Geosciences reportedly shows 95-percent accuracy in detecting COVID-19 in chest CT scans. The system is a deep learning algorithm trained on 45,000 anonymized CT scans. According to a preprint paper published on medRxiv, the AI's performance is comparable to that of expert radiologists.

One of the main ways to prevent the spread of the novel coronavirus is to reduce contact between infected patients and people who have not caught the virus. To this end, several companies and organizations are working to automate some of the procedures that previously required health workers and medical staff to interact with patients.

Chinese firms are using drones and robots to perform contactless delivery and to spray disinfectants in public areas to minimize the risk of cross-contamination. Other robots are checking people for fever and other COVID-19 symptoms and dispensing free hand-sanitizer foam and gel.

Inside hospitals, robots are delivering food and medicine to patients and disinfecting their rooms to reduce the need for nurses to be present. Other robots are busy cooking rice without human supervision, cutting the number of staff required to run the facility.

In Seattle, doctors used a robot to communicate with and treat patients remotely, limiting the exposure of medical staff to infected people.

At the end of the day, the war on the novel coronavirus won't be over until we develop a vaccine that can immunize everyone against the virus. But developing new drugs and medicines is a very lengthy and costly process: it can cost more than a billion dollars and take as long as 12 years. That is the kind of timeframe we don't have as the virus continues to spread at an accelerating pace.

Luckily, AI can help speed up the process. DeepMind, the AI research lab acquired by Google in 2014, recently announced that it has used deep learning to find new information about the structure of proteins associated with COVID-19, a process that might otherwise have taken many more months.

Understanding protein structures can provide important clues toward a coronavirus vaccine. DeepMind is one of several organizations in the race to unlock a vaccine, drawing on the results of decades of machine learning progress as well as research on protein folding.

"It's important to note that our structure prediction system is still in development, and we can't be certain of the accuracy of the structures we are providing, although we are confident that the system is more accurate than our earlier CASP13 system," DeepMind's researchers wrote on the AI lab's website. "We confirmed that our system provided an accurate prediction for the experimentally determined SARS-CoV-2 spike protein structure shared in the Protein Data Bank, and this gave us confidence that our model predictions on other proteins may be useful."

Although it may be too early to tell whether we're headed in the right direction, the efforts are commendable. Every day saved in finding a coronavirus vaccine can save hundreds, or thousands, of lives.

Here is the original post:
What Researches says on Machine learning with COVID-19 - Techiexpert.com - TechiExpert.com

PSD2: How machine learning reduces friction and satisfies SCA – The Paypers

Andy Renshaw, Feedzai: It crosses borders but doesn't have a passport. It's meant to protect people but can make them angry. It's competitive by nature but doesn't want you to fail. What is it?

If the PSD2 regulations and Strong Customer Authentication (SCA) feel like a riddle to you, you're not alone. SCA places strict two-factor authentication requirements upon financial institutions (FIs) at a time when FIs are facing stiff competition for customers. On top of that, the variety of payment types, along with the sheer number of transactions, continues to increase.

According to UK Finance, debit card transactions have outnumbered cash transactions since 2017, while mobile banking has surged over the past year, particularly for contactless payments. The number of contactless payment transactions per customer is growing, and this increase in transactions also raises the potential for customer friction.

The number of transactions isn't the only thing that has shown an exponential increase; the speed at which FIs must process them has too. Customers expect to send, receive, and access money with the swipe of a screen. Driven by customer expectations, instant payments are gaining traction across the globe with no sign of slowing down.

Considering the sheer number of transactions combined with the need to authenticate payments in real-time, the demands placed on FIs can create a real dilemma. In this competitive environment, how can organisations reduce fraud and satisfy regulations without increasing customer friction?

For countries that fall under PSD2's regulation, the answer lies in the one known way to avoid customer friction while meeting the regulatory requirement: keep fraud rates at or below SCA exemption thresholds.

How machine learning keeps fraud rates below the exemption threshold to bypass SCA requirements

Demonstrating significantly low fraud rates allows financial institutions to bypass the SCA requirement. The logic behind this is simple: if the FI's systems can prevent fraud at such high rates, they have demonstrated that those systems are secure without additional authentication.

SCA exemption thresholds are:

Exemption threshold value | Remote electronic card-based payment | Remote electronic credit transfers
EUR 500                   | below 0.01% fraud rate               | below 0.01% fraud rate
EUR 250                   | below 0.06% fraud rate               | below 0.01% fraud rate
EUR 100                   | below 0.13% fraud rate               | below 0.015% fraud rate
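The card-payment column of those thresholds can be read as a simple lookup: a transaction can skip SCA when the FI's reference fraud rate is below the rate attached to an exemption value that covers the transaction amount. A hedged sketch of that interpretation (threshold values taken from the table above; the function name and structure are illustrative, not from any standard API):

```python
# (max transaction value in EUR, max fraud rate in %) for remote
# electronic card-based payments, per the SCA exemption table.
CARD_THRESHOLDS = [
    (100, 0.13),
    (250, 0.06),
    (500, 0.01),
]

def sca_exempt(amount_eur, fraud_rate_pct):
    """True if the transaction qualifies for an SCA exemption under
    transaction risk analysis, given the FI's reference fraud rate."""
    for max_value, max_rate in CARD_THRESHOLDS:
        if amount_eur <= max_value and fraud_rate_pct < max_rate:
            return True
    return False
```

So an FI with a 0.10% fraud rate could exempt a EUR 90 payment (under the EUR 100 tier) but not a EUR 400 one, which would need the stricter 0.01% rate of the EUR 500 tier.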

Looking at these numbers, you might think that achieving SCA exemption thresholds is impossible. After all, bank transfer scams rose 40% in the first six months of 2019. But state-of-the-art technology rises to the challenge of increased fraud. Artificial intelligence, and more specifically machine learning, makes achieving SCA exemption thresholds possible.

How machine learning achieves SCA exemption threshold values

Every transaction has hundreds of data points, called entities. Entities include time, date, location, device, card, cardless, sender, receiver, merchant, customer age... the possibilities are almost endless. When data is cleaned and connected, meaning it doesn't live in siloed systems, machine learning can deliver actionable insights on that data at a historically unprecedented level.

Robust machine learning technology uses both rules and models and learns from both historical and real-time profiles of virtually every data point or entity in a transaction. The more data we feed the machine, the better it gets at learning fraud patterns. Over time, the machine learns to accurately score transactions in less than a second without the need for customer authentication.
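An illustrative sketch of the rules-plus-models combination described above (this is not Feedzai's actual system; the fields, rules, and weights are invented for the example). Hard rules catch known fraud patterns, while a learned model score covers what the rules miss:

```python
def rule_score(txn):
    """Hand-written rules for known risk patterns (invented examples)."""
    score = 0.0
    if txn["amount"] > 10_000:                  # unusually large transfer
        score += 0.4
    if txn["country"] != txn["home_country"]:   # out-of-pattern location
        score += 0.3
    return score

def model_score(txn):
    """Stand-in for a trained model; a real system would call one here,
    fed with historical and real-time profiles of each entity."""
    return 0.1

def risk_score(txn):
    """Blend rules and model into a single score in [0, 1]."""
    return min(1.0, rule_score(txn) + model_score(txn))

txn = {"amount": 12_000, "country": "IN", "home_country": "GB"}
print(risk_score(txn))
```

In production the score is computed in well under a second, which is what lets most transactions pass without asking the customer to authenticate.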

Machine learning creates streamlined and flexible workflows

Of course, sometimes authentication is inevitable. For example, if a customer who generally initiates transactions in Brighton suddenly initiates one from Mumbai without a travel note on the account, authentication should be required. But if a machine learning platform has a flexible data science environment that embeds authentication steps seamlessly into the transaction workflow, the experience can be as customer-centric as possible.
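The Brighton/Mumbai step-up rule can be stated in a few lines. A hypothetical sketch (field names and the profile structure are assumptions for illustration):

```python
def needs_authentication(txn, profile):
    """Step-up authentication only when the location breaks the
    customer's usual pattern and no travel note is on file."""
    unusual_location = txn["city"] not in profile["usual_cities"]
    return unusual_location and not profile.get("travel_note", False)

profile = {"usual_cities": {"Brighton"}, "travel_note": False}
print(needs_authentication({"city": "Mumbai"}, profile))    # True
print(needs_authentication({"city": "Brighton"}, profile))  # False
```

Embedding a check like this inside the scoring workflow, rather than challenging every transaction, is what keeps the friction confined to genuinely unusual cases.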

Streamlined workflows must extend to the fraud analysts job

Flexible workflows aren't just important to instant payments; they're important to all payments. And they can't just be a back-end experience in the data science environment. Fraud analysts need flexibility in their workflows too. They're under pressure to make decisions quickly and accurately, which means they need a full view of the customer, not just the transaction.

Information provided at a transactional level doesn't allow analysts to connect all the dots. In this scenario, analysts are left opening several case managers in an attempt to piece together a complete and accurate fraud picture. It's time-consuming and ultimately costly, not to mention the wear and tear on employee satisfaction. But some machine learning risk platforms can show both authentication and fraud decisions at the customer level, ensuring analysts have a 360-degree view of the customer.

Machine learning prevents instant payments from becoming instant losses

Instant payments can provide immediate customer satisfaction, but also instant fraud losses. Scoring transactions in real time means institutions can increase the security around the payments going through their systems before it's too late.

Real-time transaction scoring requires a colossal amount of processing power because it can't use batch processing, an efficient method when dealing with high volumes of data: the lag between when a customer transacts and when a batch is processed makes that method incompatible with instant payments. Scoring transactions in real time therefore demands serious computing power, and the costs involved make hosting systems in the cloud more practical than hosting on the FI's premises (often referred to as "on prem"). Of course, FIs need to consider other factors, including cybersecurity concerns, before deciding where to host their machine learning platform.

Providing exceptional customer experiences while keeping fraud at or below PSD2's SCA thresholds can seem like a magic trick, but it's not. It's the combined intelligence of humans and machines, the most effective method we have today to curb and prevent fraud losses. It's how we solve the friction-security puzzle and deliver customer satisfaction while satisfying SCA.

About Andy Renshaw

Andy Renshaw, Vice President of Banking Solutions at Feedzai, has over 20 years of experience in banking and the financial services industry, leading large programs and teams in fraud management and AML. Prior to joining Feedzai, Andy held roles in global financial institutions such as Lloyds Banking Group, Citibank, and Capital One, where he helped fight against the ever-evolving financial crime landscape as a technical expert, fraud prevention expert, and a lead product owner for fraud transformation.

About Feedzai

Feedzai is the market leader in fighting fraud with AI. We're coding the future of commerce with today's most advanced risk management platform, powered by big data and machine learning. Founded and developed by data scientists and aerospace engineers, Feedzai has one mission: to make banking and commerce safe. The world's largest banks, processors, and retailers use Feedzai's fraud prevention and anti-money laundering products to manage risk while improving customer experience.

Read this article:
PSD2: How machine learning reduces friction and satisfies SCA - The Paypers

Udacity offers free tech training to laid-off workers due to the coronavirus pandemic – CNBC

A nanodegree in autonomous vehicles is just one of 40 programs that Udacity is offering for free to workers laid off in the wake of the COVID-19 pandemic.

Udacity

Online learning platform Udacity is responding to the COVID-19 pandemic by offering free tech training to workers laid off as a result of the crisis.

On Thursday the Mountain View, California-based company revealed that in the wake of layoffs and furloughs by major U.S. corporations, including Marriott International, Hilton Hotels and GE Aviation, it will offer its courses known as nanodegrees for free to individuals in the U.S. who have been let go because of the coronavirus. The average price for an individual signing up for a nanodegree is about $400 a month, and the degrees take anywhere from four to six months to complete, according to the company.

The hope is that while individuals wait to go back to work, or in the event that the layoff is permanent, they can get training in fields that are driving so much of today's digital transformation. Udacity's courses include artificial intelligence, machine learning, digital marketing, product management, data analysis, cloud computing, and autonomous vehicles, among others.

Gabe Dalporto, CEO of Udacity, said that over the past few weeks, as he and his senior leadership team heard projections of skyrocketing unemployment numbers as a result of COVID-19, he felt the need to act. "I think those reports were a giant wake-up call for everybody," he says. "This [virus] will create disruption across the board and in many industries, and we wanted to do our part to help."

Dalporto says Udacity is funding the scholarships completely and that displaced workers can apply for them at udacity.com/pledge-to-americas-workers beginning March 26. Udacity will take the first 50 eligible applicants from each company that applies, and within 48 hours individuals should be able to begin the coursework. Dalporto says the offer will be good for the first 20 companies that apply and that "after that we'll evaluate and figure out how many more scholarships we are going to fund."

The company also announced this week that any individual, regardless of whether they've been laid off, can enroll for free in any one of Udacity's 40 different nanodegree programs. Users will get the first month free when they enroll in a monthly subscription, but Dalporto pointed out that many students can complete a course in a month if they dedicate enough time to it.

Udacity's offerings at this time underscore the growing disconnect between the skills workers have and the talent that organizations need today and in the years ahead. The company recently signed a deal with Royal Dutch Shell, for instance, to provide training in artificial intelligence. Shell says about 2,000 of its 82,000 employees have either expressed interest in the AI offerings or have been approached by their managers about taking the courses on everything from Python programming to training neural networks. Shell says the training is completely voluntary.

And as more workers lose their jobs in the wake of the COVID-19 pandemic, it will be even more crucial that they're able to reenter the job market armed with the skills companies are looking for. According to the World Economic Forum's Future of Jobs report, at least 54% of all employees will need reskilling and upskilling by 2022. Yet only 30% of employees at risk of job displacement because of technological change received any training over the past year.

"America is facing a massive shortage of workers with the right technical skills, and as employers, retraining your existing workforce to address that shortage is the most efficient, cost-effective way to fill those gaps in an organization," Dalporto says. "The great irony in the world right now is that at the same time that a lot of people are going to lose their jobs, there are areas in corporations where managers just can't hire enough people for jobs in data analytics, cloud computing and AI."

Dalporto, who grew up in West Virginia, says he sees this point vividly every time he revisits his hometown. "When I go back, I see so many businesses and companies boarded up and people laid off because they didn't keep pace with automation and people didn't upskill," he says. As a result, many of these workers wind up in minimum wage jobs and that "just creates a lot of pain for them and their families," he adds. What's happening now is only fueling that cycle, one that Dalporto says can be minimized with the right action.

"Laying people off is never an easy decision, but companies have to move the conversation beyond how many weeks of severance they're going to offer," he says. "We have to be asking how are we going to help them get the skills they need to be successful in their careers moving forward when this is all behind us."


Noble.AI Contributes to TensorFlow, Google’s Open-Source AI Library and the Most Popular – AiThority

Noble.AI, whose artificial intelligence (AI) software is purpose-built for engineers, scientists, and researchers and enables them to innovate and make discoveries faster, announced that it had completed contributions to TensorFlow, the world's most popular open-source framework for deep learning, created by Google.

"Part of Noble's mission is building AI that's accessible to engineers, scientists and researchers, anytime and anywhere, without needing to learn or re-skill into computer science or AI theory," said Dr. Matthew C. Levy, Founder and CEO of Noble.AI. He continued, "The reason why we're making this symbolic contribution open-source is so people have greater access to tools amenable to R&D problems."

TensorFlow is an end-to-end open source platform for machine learning originally developed by the Google Brain team. Today it is used by more than 60,000 GitHub developers and has achieved more than 140,000 stars and 80,000 forks of the codebase.

Noble.AI's specific contribution augments the sparse matrix capabilities of TensorFlow. Matrices often represent mathematical operations to be performed on input data, such as calculating the temporal derivative of time-series data. In many common physics and R&D scenarios these matrices are sparsely populated: a tiny fraction, often less than one percent, of all elements in the matrix are non-zero. In this setting, storing the entire matrix in a computer's memory is cumbersome, and at industrial R&D scale often impossible altogether. In these cases it becomes advantageous to use sparse matrix operations.
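The temporal-derivative example can be made concrete. The sketch below illustrates the general idea only, not Noble.AI's actual TensorFlow contribution, and uses SciPy's sparse matrices for brevity: a forward-difference operator stores just two diagonals instead of a full dense matrix, yet still computes an approximate derivative of a time series.

```python
import numpy as np
from scipy import sparse

def first_difference_operator(n, dt=1.0):
    """Sparse (n-1) x n forward-difference matrix D such that
    (D @ x)[i] = (x[i+1] - x[i]) / dt approximates dx/dt."""
    main = -np.ones(n - 1) / dt   # coefficient on x[i]
    upper = np.ones(n - 1) / dt   # coefficient on x[i+1]
    return sparse.diags([main, upper], offsets=[0, 1], shape=(n - 1, n))

# Time series sampled at dt = 0.5: x(t) = 3t, so dx/dt = 3 everywhere.
dt = 0.5
x = 3.0 * np.arange(5) * dt                  # [0.0, 1.5, 3.0, 4.5, 6.0]
D = first_difference_operator(len(x), dt)
deriv = D @ x                                # constant derivative of 3.0

# Only 2(n-1) = 8 entries are stored; the remaining elements are implicit zeros.
print(D.nnz, deriv)
```

The same pattern scales to the industrial sizes the article describes: the number of stored entries grows linearly with the series length rather than quadratically, which is what makes the dense representation infeasible and the sparse one practical.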


IIIT-Hyderabad professor uses machine learning to predict the spread of Coronavirus – Free Press Journal

It all started with an aim to create a game around Coronavirus, but Professor Vikram Pudi of the Data Sciences and Analytics Centre at IIIT-Hyderabad (IIIT-H) later adapted his idea to create an experimental simulation.

"Seeing is believing: that is the whole point of the experimental simulation," said Pudi. The simulation, developed with the help of machine learning, displays how Coronavirus can be transmitted among people across the world. Through it, Pudi is trying to explain the importance of social distancing and why it is needed in such times.

He added that close-distance movement by an individual infects far more people than long-distance travel does. This could also be because, in real life, the number of people who travel is much smaller than the number who do not. So, hover-distance is more critical than travel probability.
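The trade-off between a short-range infection radius (hover-distance) and rare long-range jumps (travel probability) can be sketched with a toy agent-based model. This is a hypothetical illustration, not Pudi's actual simulator; the parameter names `hover` and `travel_prob` are assumptions made for the sketch.

```python
import random

def simulate(n=200, steps=50, hover=0.05, travel_prob=0.01, seed=0):
    """Toy epidemic sketch: n agents on the unit square. Each step, an agent
    makes a rare long-range jump with probability travel_prob; any infected
    agent then infects all agents within `hover` distance of it."""
    rng = random.Random(seed)
    pos = [(rng.random(), rng.random()) for _ in range(n)]
    infected = [False] * n
    infected[0] = True  # patient zero
    for _ in range(steps):
        # Movement: a small fraction of agents travel to a random location.
        for i in range(n):
            if rng.random() < travel_prob:
                pos[i] = (rng.random(), rng.random())
        # Transmission: close-range contact within the hover-distance.
        newly = []
        for i in range(n):
            if not infected[i]:
                continue
            xi, yi = pos[i]
            for j in range(n):
                if infected[j]:
                    continue
                xj, yj = pos[j]
                if (xi - xj) ** 2 + (yi - yj) ** 2 <= hover ** 2:
                    newly.append(j)
        for j in newly:
            infected[j] = True
    return sum(infected)
```

Running the sketch with a larger hover-distance typically infects far more agents than raising the travel probability does, which matches the intuition the simulation is meant to convey.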

However, this is based on a simulation experiment. If real data were accessible, the experiment could be validated. Pudi added that with real data, the understanding of the spread would be far more accurate.

"Real data can help us understand the speed of the spread and the parameters at which transmission of the disease stops," said Pudi, who developed the system on his own. He added that he hopes to get access to real data to validate the experiment and use the simulator in the real world.

There is no backend server for the webpage, so scaling it up will not be an issue even if a large number of people visit the site, the professor revealed.

When asked what prompted him to try this, he said, "I was mulling over creating a game. But I realised that Google had stopped accepting any Android app around Coronavirus in order to prevent any form of misinformation that could arise."
