Advanced Analytics and Machine Learning Boost Bee Populations – Transmission & Distribution World

As part of its commitment to using data and analytics to solve the world's most pressing problems, SAS' recent work includes helping to save the world's No. 1 food crop pollinator: the honey bee. With the number of bee colonies drastically declining around the world, SAS is using technology such as the Internet of Things (IoT), machine learning and visual analytics to help maintain and support healthy bee populations.

In honor of World Bee Day, SAS is highlighting three separate projects where technology is monitoring, tracking and improving pollinator populations around the globe. First, researchers at SAS have developed a noninvasive way to monitor real-time conditions of beehives through auditory data and machine learning algorithms. SAS is also working with Appalachian State University on the World Bee Count to visualize world bee population data and understand the best ways to save them. Lastly, recent SAS Viya Hackathon winners decoded bee communication through machine learning in order to maximize bees' access to food and boost human food supplies.

"SAS has always looked for ways to use technology for a better world," said Oliver Schabenberger, COO and CTO of SAS. "By applying advanced analytics and artificial intelligence to beehive health, we have a better shot as a society to secure this critically important part of our ecosystem and, ultimately, our food supply."

Noninvasively Monitoring Beehive Health

Researchers from the SAS IoT Division are developing a bioacoustic monitoring system to noninvasively track real-time conditions of beehives using digital signal processing tools and machine learning algorithms available in SAS Event Stream Processing and SAS Viya software. This system helps beekeepers better understand and predict hive problems that could lead to colony failure, including the emergence of new queens, something they would not ordinarily be able to detect.

Annual loss rates of U.S. beehives exceed 40%, and between 25% and 40% of these losses are due to queen failure. Acoustic analysis can alert beekeepers to queen disappearances immediately, which is vitally important to significantly reducing colony loss rates. With this system, beekeepers will have a deeper understanding of their hives without having to conduct time-consuming and disruptive manual inspections.

"As a beekeeper myself, I know the magnitude of bees' impact on our ecosystem, and I'm inspired to find innovative ways to raise healthier bees to benefit us all," saidAnya McGuirk, Distinguished Research Statistician Developer in the IoT division at SAS. "And as a SAS employee, I'm proud to have conducted this experiment with SAS software at our very own campus beehives, demonstrating both the power of our analytical capabilities and our commitment to innovation and sustainability."

By connecting sensors to SAS' four Bee Downtown hives at its headquarters in Cary, NC, the team started streaming hive data directly to the cloud to continuously measure data points in and around the hive, including weight, temperature, humidity, flight activity and acoustics. In-stream machine learning models were used to "listen" to the hive sounds, which can indicate health, stress levels, swarming activities and the status of the queen bee. To ensure only the hum of the hive was being used to determine bees' health and happiness, researchers used robust principal component analysis (RPCA), a machine learning technique, to separate extraneous or irrelevant noises from the inventory of sounds collected by hive microphones.
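For readers curious what robust principal component analysis looks like in code, below is a minimal, illustrative sketch that splits a spectrogram-like matrix into a low-rank "steady hum" component and a sparse "transient noise" component via principal component pursuit. The synthetic data, parameters and implementation are assumptions for illustration only, not SAS's actual pipeline.

```python
import numpy as np

def robust_pca(M, lam=None, mu=None, tol=1e-7, max_iter=500):
    """Principal component pursuit: decompose M into low-rank L plus sparse S."""
    m, n = M.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))        # sparsity weight
    mu = mu if mu is not None else (m * n) / (4.0 * np.abs(M).sum())  # step-size heuristic
    S = np.zeros_like(M)
    Y = np.zeros_like(M)  # scaled Lagrange multipliers

    def shrink(X, tau):   # elementwise soft-thresholding
        return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

    for _ in range(max_iter):
        # Low-rank update: singular-value thresholding of (M - S + Y/mu).
        U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = (U * shrink(sig, 1.0 / mu)) @ Vt
        # Sparse update: shrinkage soaks up transient, non-repeating sounds.
        S = shrink(M - L + Y / mu, lam / mu)
        residual = M - L - S
        Y += mu * residual
        if np.linalg.norm(residual) <= tol * np.linalg.norm(M):
            break
    return L, S

# Illustrative use: rows = time frames, columns = frequency bins of hive audio.
spectrogram = np.abs(np.random.randn(200, 64))   # stand-in for real hive data
steady_hum, transient_noise = robust_pca(spectrogram)
```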

The researchers found that with RPCA capabilities, they could detect worker bees "piping" at the same frequency range at which a virgin queen pipes after a swarm, likely to assess whether a queen was present. The researchers then designed an automated pipeline to detect either queen piping following a swarm or worker piping that occurs when the colony is queenless. This is greatly beneficial to beekeepers, warning them that a new queen may be emerging and giving them the opportunity to intervene before significant loss occurs.
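A hedged sketch of what such an automated detector might look like: compute a spectrogram of the hive audio and flag time frames where energy in an assumed piping band rises well above the rest of the spectrum. The 300-500 Hz band, the threshold and the synthetic test signal are illustrative assumptions, not the researchers' published design.

```python
import numpy as np
from scipy import signal

def detect_piping(audio, fs, band=(300.0, 500.0), threshold_db=12.0):
    """Return timestamps where the assumed piping band stands out from the
    frame's overall spectral floor. Band and threshold are illustrative."""
    freqs, times, Sxx = signal.spectrogram(audio, fs=fs, nperseg=2048)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    band_power = Sxx[in_band].mean(axis=0)
    floor = np.median(Sxx, axis=0) + 1e-12
    lift_db = 10.0 * np.log10(band_power / floor + 1e-12)
    return times[lift_db > threshold_db]

# Synthetic check: five seconds of hive-like noise, with a 400 Hz tone added
# after the three-second mark to stand in for piping.
fs = 16000
t = np.linspace(0, 5, 5 * fs, endpoint=False)
audio = 0.1 * np.random.randn(t.size)
audio[3 * fs:] += 0.5 * np.sin(2 * np.pi * 400 * t[3 * fs:])
print(detect_piping(audio, fs))   # prints frame times (in seconds) after ~3.0
```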

The researchers plan to implement the acoustic streaming system very soon and are continuing to look for ways to broaden the usage of technology to help honey bees and ultimately humankind.

Visualizing the World's Pollinator Populations

On World Bee Day, SAS is launching a data visualization that maps out bees "counted" around the globe for the World Bee Count, an initiative co-founded by the Center for Analytics Research and Education (CARE) at Appalachian State University. The goal of the World Bee Count is to engage citizens across the world to take pictures of bees as a first step toward understanding the reasons for their alarming decline.

"The World Bee Count allows us to crowdsource bee data to both visualize our planet's bee population and create one of the largest, most informative data sets about bees to date," saidJoseph Cazier, Professor and Executive Director atAppalachian State University'sCARE. "SAS' data visualization will show the crowdsourced location of bees and other pollinators. In a later phase of the project, researchers can overlay key data points like crop yield, precipitation and other contributing factors of bee health, gathering a more comprehensive understanding of our world's pollinators." Bayer has agreed to help sponsor CARE to allow its students and faculty to perform research on the World Bee Count data and other digital pollinator data sources.

In early May, the World Bee Count app was launched so that users, both beekeepers and the general public, aka "citizen data scientists," can add data points to the Global Pollinator Map. Within the app, beekeepers can enter the number of hives they have, and any user can submit pictures of pollinators from their camera roll or through the in-app camera. Through SAS Visual Analytics, SAS has created a visualization map to display the images users submit via the app. In addition to showing the results of the project, the visualizations can potentially provide insights about the conditions that lead to the healthiest bee populations.

In future stages of this project, the robust data set created from the app could help groups like universities and research institutes better strategize ways to save these vital creatures.

Using Machine Learning to Maximize Bees' Access to Food

Representing the Nordic region, a team from Amesto NextBridge won the 2020 SAS EMEA Hackathon, which challenged participants to improve sustainability using SAS Viya. Their winning project used machine learning to maximize bees' access to food, which would in turn benefit mankind's food supply. In partnership with Beefutures, the team accomplished this by developing a system capable of automatically detecting, decoding and mapping bee "waggle" dances using Beefutures' observation hives and SAS Viya.

Bees are responsible for pollinating nearly 75% of all plant species directly used for human food, but the number of bee colonies is declining, which could lead to a devastating loss for the human food supply. A main reason for the decline of bee populations is a lack of access to food due to an increase in monoculture farming. When bees do find a good food source, they come back to the hive to communicate its exact location through a "waggle dance." By observing these dances, beekeepers can better understand where their bees are getting food and then consider establishing new hives in those locations to help maintain strong colonies.

"Observing all of these dances manually is virtually impossible, but by using video footage from inside the hives and training machine learning algorithms to decode the dance, we will be able to better understand where bees are finding food," said Kjetil Kalager, lead of the Amesto NextBridge and Beefutures team. "We implemented this information, along with hive coordinates, sun angle, time of day and agriculture around the hives into an interactive map in SAS Viya and then beekeepers can easily decode this hive information and relocate to better suited environments if necessary."

This systematic real-time monitoring of waggle dances allows bees to act as sensors for their ecosystems. Further research using this technology may uncover other information bees communicate through dance that could help us save and protect their population, which ultimately benefits us all.

See this waggle dance project in action and learn about how SAS is committed to corporate social responsibility.

Read more from the original source:
Advanced Analytics and Machine Learning Boost Bee Populations - Transmission & Distribution World

Evolve your career with upGrad's Machine Learning and Cloud program in association with IIT Madras – Economic Times

Amongst technologies that have revolutionised industries in the last two decades, Machine Learning holds a significant place. Machine Learning has not only made its way into versatile industry applications but has also allowed businesses to transform their operations by reducing costs, boosting efficiency, and transforming customer experience. Currently, Machine Learning is at a crucial crossroad where research is under way to take automation to a stage where it requires no human intervention at all. This will pave the path towards a fully automated workflow, which is achievable by integrating it with Cloud Computing. For predictive analysis to take over industries, the vast amount of data that has to be processed in Machine Learning models needs a scalable distributed system for storage. This is where the relevance of Cloud comes in. ML, when paired with Cloud, forms an Intelligent Cloud that becomes a suitable destination for all Machine Learning projects and becomes handy for data collection, data optimization, data distribution, and managing a data transport network and deployment of Machine Learning models.

With almost every business looking to deploy AI in their operations in the near future, the demand for skilled ML and Cloud professionals is higher than ever before. A report by the World Economic Forum also suggests that this industry will create about 58 million new jobs by 2022. This clearly indicates the importance of upskilling oneself with a strongly connected ML and Cloud program. To cater to this growing demand and to help young professionals understand and develop packaged ML solutions, upGrad has collaborated with IIT Madras to develop an Advanced Certification in Machine Learning and Cloud program. The 9-month-long program recognises the importance of taking ML to Cloud to realise full-scale AI implementations across verticals. upGrad understands the relevance of data and insights in business operations. The program covers the deployment of advanced Machine Learning models on Cloud, giving individuals an opportunity to cater to data demands across multiple industry domains like e-commerce, retail, healthcare, banking, manufacturing, transport, NBFC, and finance among others.

A Highly Selective & Exclusive Program

To ensure that the program is exciting as well as challenging, upGrad's Advanced Certification in Machine Learning and Cloud is highly selective and exclusive, admitting only 70 individuals in one cohort to ensure focused learning and individual growth. For this, applicants have to go through the All India Aptitude Test from IIT Madras, a comprehensive entrance test, an interview round, and a final panel selection before they are allowed admittance to the program. This ensures that each academic batch consists of highly skilled individuals who are capable of carrying the IIT batch forward and can later help their employers take high-stake data risks with confidence. The time investment for this program on a weekly basis is about 12-14 hours, which further makes it an ideal upskilling programme for working individuals.

Learn from the best in the business

With data being the operative word for every sector, every organization is currently scaling up its AI and ML workforce. upGrad's Advanced Certification in Machine Learning and Cloud is helping learners become vital to their company's success by training them efficiently. upGrad learners deploy machine learning models using PySpark on Cloud, and they get an opportunity to learn from a set of experienced Machine Learning faculty and industry leaders. The prestigious program also has 300+ hiring partners, ensuring that learners can land in the industry of their choice by the end of the program. The program has been largely successful in building the employability of learners and boosting their annual packages. The current demand for ML engineers is at an all-time high, with even freshers getting hired at astounding pay packages. Considering this shift, upGrad's Advanced Program in Machine Learning and Cloud is the best way to flag off one's ML journey.

Specifically designed for data analysts, business analysts, cloud engineers, software engineers, application developers, and product managers among others, the program will be highly beneficial in learning about the following aspects:

Programming: Learn core and necessary languages like Python, which is required for ML operations, and SQL, which is a vital language of the Cloud, along with deployment of Machine Learning models using Cloud.

Machine learning concepts: Learn both basic and advanced subjects within ML. This will help learners to understand the application of appropriate ML algorithms to categorize unknown data or make predictions about it. The program also helps learners modify and craft algorithms of their own.

Foundations of Cloud and Hadoop: Learn about Hadoop, Hive, and HDFS along with the implementation of ML algorithms in the cloud on Spark/PySpark (AWS/Azure/GCP).

Why choose upGrad?

upGrad's Advanced Certification Program in Machine Learning and Cloud will provide learners with a PG Certification from IIT Madras, one of India's top IITs. The teaching panel includes faculty from IIT Madras and leading industry experts who seamlessly integrate online lectures, offline engagement, case studies, and interactive networking sessions. It provides 360-degree support to young professionals by taking care of career counselling, dedicated student success mentors, resume feedback, interview preparation, and job assistance. Over the years, the program has seen 500+ career transitions, with an average salary hike of 58%. Many of these learners have been placed in companies like KPMG, Uber, Big Basket, Bain & Co, PwC, Zivame, Fractal Analytics, and Microsoft, with impressive salary shifts.

upGrad's Advanced Certification in Machine Learning and Cloud is also one of the most cost-effective options for professionals looking to hop onto the Machine Learning bandwagon. The program fee is Rs 2,00,000, and it is also available at a no-cost EMI of Rs 29,166 per month. By uniting upGrad's data expertise with IIT Madras' academic excellence, it provides a unique opportunity for learners to scale up.

If you want to fast-track your career and make yourself readily employable, it's time to take the All India Test for the Advanced Certification in Machine Learning and Cloud. The program commences on June 30, 2020, with admissions closing on June 7, 2020, owing to a mandatory pre-prep course spanning the 3 weeks before the start of the program. It's time to take the big leap with upGrad. Apply for the All India Aptitude Test today.

Click here for more information.

More here:
Evolve your career with upGrad's Machine Learning and Cloud program in association with IIT Madras - Economic Times

Reality Check: The Benefits of Artificial Intelligence – AiThority

Gartner believes Artificial Intelligence (AI) security will be a top strategic technology trend in 2020, and that enterprises must gain awareness of AI's impact on the security space. However, many enterprise IT leaders still lack a comprehensive understanding of the technology and what it can realistically achieve today. It is important for leaders to question exaggerated marketing claims and over-hyped promises associated with AI so that there is no confusion as to the technology's defining capabilities.

IT leaders should take a step back and consider whether their company and team are at a high enough level of security maturity to adopt advanced technology such as AI successfully. The organization's business goals and current focus should align with the capabilities that AI can provide.

A study conducted by Widmeyer revealed that IT executives in the U.S. believe that AI will significantly change security over the next several years, enabling IT teams to evolve their capabilities as quickly as their adversaries.

Of course, AI can enhance cybersecurity and increase effectiveness, but it cannot solve every threat and cannot replace live security analysts yet. Today, security teams use modern Machine Learning (ML) in conjunction with automation, to minimize false positives and increase productivity.

As adoption of AI in security continues to increase, it is critical that enterprise IT leaders face the current realities and misconceptions of AI, such as:

AI is not a solution; it is an enhancement. Many IT decision leaders mistakenly consider AI a silver bullet that can solve all their current IT security challenges without fully understanding how to use the technology and what its limitations are. We have seen AI reduce the complexity of the security analyst's job by enabling automation, triggering the delivery of cyber incident context, and prioritizing fixes. Yet, security vendors continue to tout further, exaggerated AI-enabled capabilities of their solutions without being able to point to AI's specific outcomes.

If Artificial Intelligence is identified as the key, standalone method for protecting an organization from cyberthreats, the overpromise of AI, coupled with the inability to clearly identify its accomplishments, can have a very negative impact on the strength of an organization's security program and on the reputation of the security leader. In this situation, Chief Information Security Officers (CISOs) will, unfortunately, realize that AI has limitations and the technology alone is unable to deliver the aspired results.

This is especially concerning given that 48% of enterprises say their budgets for AI in cybersecurity will increase by 29 percent this year, according to Capgemini.

Read more: Improve Your Bottom Line With Contract Automation and AI

We have seen progress surrounding AI in the security industry, such as the enhanced use of ML technology to recognize behaviors and find security anomalies. In most cases, security technology can now correlate irregular behavior with threat intelligence and contextual data from other systems. It can also use automated investigative actions to give an analyst a strong picture of whether something is malicious, with minimal human intervention.

A security leader should consider the types of ML models in use, the biases of those models, the capabilities possible through automation, and if their solution is intelligent enough to build integrations or collect necessary data from non-AI assets.

AI can handle a bulk of the work of a Security Analyst, but not all of it. As a society, we still do not have enough trust in AI to take it to the next level, which would be fully trusting AI to take corrective action on the anomalies it identifies. Those actions still require human intervention and judgment.

Read more: The Nucleus of Statistical AI: Feature Engineering Practicalities for Machine Learning

It is important to consider that AI can make bad or wrong decisions. Given that humans themselves create and train the models that achieve AI, it can make biased decisions based on the information it receives.

Models can produce a desired outcome for an attacker, and security teams should prepare for malicious insiders to try to exploit AI biases. Such destructive intent to influence AIs bias can prove to be extremely damaging, especially in the legal sector.

By feeding AI false information, bad actors can trick AI into implicating someone in a crime. As an example, just last year, a judge ordered Amazon to turn over Echo recordings in a double murder case. In instances such as these, a hacker has the potential to wrongfully influence ML models and manipulate AI to put an innocent person in prison. In making AI more human, the likelihood of mistakes will increase.

What's more, IT decision-makers must take into consideration that attackers are utilizing AI and ML as an offensive capability. AI has become an important tool for attackers, and according to Forrester's "Using AI for Evil" report, mainstream AI-powered hacking is just a matter of time.

AI can be leveraged for good and for evil, and it is important to understand the technologys shortcomings and adversarial potential.

Though it is critical to acknowledge AI's realistic capabilities and its current limitations, it is also important to consider how far AI can take us. Applying AI throughout the threat lifecycle will eventually automate and enhance entire categories of Security Operations Center (SOC) activity. AI has the potential to provide clear visibility into user-based threats and enable increasingly effective detection of real threats.

There are many challenges IT decision-makers face when over-estimating what Artificial Intelligence alone can realistically achieve and how it impacts their security strategies right now. Security leaders must acknowledge these challenges and truths if organizations wish to reap the benefits of AI today and for years to come.

Read more: AI in Cybersecurity: Applications in Various Fields

See original here:
Reality Check: The Benefits of Artificial Intelligence - AiThority

IBM’s The Weather Channel app using machine learning to forecast allergy hotspots – TechRepublic

The Weather Channel is now using artificial intelligence and weather data to help people make better decisions about going outdoors based on the likelihood of suffering from allergy symptoms.

Amid the COVID-19 pandemic, most people are taking precautionary measures in an effort to ward off coronavirus, which is highly communicable and dangerous. It's no surprise that we gasp at every sneeze, cough, or even sniffle, from others and ourselves. Allergy sufferers may find themselves apologizing awkwardly, quickly indicating they don't have COVID-19, but have allergies, which are often treated with sleep-inducing antihistamines that cloud critical thinking.

The most common culprits and indicators used to predict symptoms (ragweed, grass, and tree pollen readings) are often inconsistently tracked across the country. But artificial intelligence (AI) innovation from IBM's The Weather Channel is coming to the rescue of the roughly 50 million Americans who suffer from allergies.

The Weather Channel's new tool shows a 15-day allergy forecast based on ML.

Image: Teena Maddox/TechRepublic

IBM's The Weather Channel is now using machine learning (ML) to forecast allergy symptoms. IBM data scientists developed a new tool on The Weather Channel app and weather.com, "Allergy Insights with Watson," to predict your risk of allergy symptoms.

Weather can also drive allergy behaviors. "As we began building this allergy model, machine learning helped us teach our models to use weather data to predict symptoms," said Misha Sulpovar, product leader, consumer AI and ML, IBM Watson media and weather. Sulpovar's role is focused on using machine learning and blockchain to develop innovative and intuitive new experiences for the users of The Weather Channel's digital properties, specifically weather.com and The Weather Channel smartphone apps.

SEE: IBM's The Weather Channel launches coronavirus map and app to track COVID-19 infections (TechRepublic)

Any allergy sufferer will tell you it can be absolutely miserable. "If you're an allergy sufferer, you understand that knowing in advance when your symptom risk might change can help anyone plan ahead and take action before symptoms may flare up," Sulpovar said. "This allergy risk prediction model is much more predictive around users' symptoms than other allergy trackers you are used to, which mostly depend on pollen, an imperfect factor."

Sulpovar said the project has been in development for about a year, adding, "We included the tool within The Weather Channel app and weather.com because digital users come to us for local weather-related information," and not only to check weather forecasts, "but also for details on lifestyle impacts of weather on things like running, flu, and allergy."

He added, "Knowing how patients feel helps improve the model. IBM MarketScan (research database) is anonymized data from doctor visits of 100 million patients."

Daily pollen counts are also available on The Weather Channel app.

Image: Teena Maddox/TechRepublic

"A lot of what drives allergies are environmental factors like humidity, wind, and thunderstorms, as well as when specific plants in specific areas create pollen," Sulpovar said. "Plants have predictable behaviorfor example, the birch tree requires high humidity for birch pollen to burst and create allergens. To know when that will happen in different locations for all different species of trees, grasses, and weeds is huge, and machine learning is a huge help to pull it together and predict the underlying conditions that cause allergens and symptoms. The model will select the best indicators for your ZIP code and be a better determinant of atmospheric behavior."

"Allergy Insights with Watson" anticipates allergy symptoms up to 15 days in advance. AI, Watson, and its open multi-cloud platform help predict and shape future outcomes, automate complex processes, and optimize workers' time. IBM's The Weather Channel and weather.com are using this machine learning Watson to alleviate some of the problems wrought by allergens.

Sulpovar said, "Watson is IBM's suite of enterprise-ready AI services, applications, and tooling. Watson helps unlock value from data in new ways, at scale."

Data scientists have discovered a more accurate representation of allergy conditions. "IBM Watson machine learning trained the model to combine multiple weather attributes with environmental data and anonymized health data to assess when the allergy symptom risk is high," Sulpovar explained. "The model more accurately reflects the impact of allergens on people across the country in their day-to-day lives."
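A minimal sketch of the general approach the quote describes: join weather and environmental features with symptom labels and train a supervised classifier on the result. The column names, the synthetic data, and the gradient-boosting model below are placeholders for illustration, not IBM's actual Watson model.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Placeholder frame standing in for joined weather, environmental, and
# anonymized symptom-report data; every column here is illustrative.
rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "humidity": rng.uniform(20, 100, n),
    "wind_kph": rng.gamma(2.0, 6.0, n),
    "temp_c": rng.normal(18, 8, n),
    "recent_rain_mm": rng.exponential(3.0, n),
    "tree_pollen_idx": rng.uniform(0, 10, n),
})
# Synthetic label: symptoms more likely when it is warm, humid, and pollen-rich.
logit = 0.04 * df.humidity + 0.08 * df.temp_c + 0.3 * df.tree_pollen_idx - 6
df["symptoms"] = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    df.drop(columns="symptoms"), df["symptoms"], test_size=0.2, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)
print("held-out AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```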

The model is challenged by changing conditions and the impact of climate change, but there has been a 25% to 50% increase in better decision making based on allergy symptoms.

It may surprise long-time allergy sufferers who often cite pollen as the cause of allergies that "We found pollen is not a good predictor of allergy risk alone and that pollen sources are unreliable and spotty and cover only a small subset of species," Sulpovar explained. "Pollen levels are measured by humans in specific locations, but sometimes those measurements are few and far between, or not updated often. Our team found that using AI and weather data instead of just pollen data resulted in a 25-50% increase in making better decisions based on allergy symptoms."

Available on The Weather Channel app for iOS and Android, the tool can also be found online at www.weather.com. Users of the tool will be given an accurate forecast, be alerted to flare-ups, and be provided with practical tips to reduce seasonal allergies.

This story was updated on April 23, 2020 to correct the spelling of Misha Sulpovar's name.

Read the original here:
IBM's The Weather Channel app using machine learning to forecast allergy hotspots - TechRepublic

Apple is on a hiring freeze … except for its Hardware, Machine Learning and AI teams – Thinknum Media

Word in the tech community is that Apple ($NASDAQ:AAPL) employees are beginning to report hiring freezes for certain groups within the company. But other reports are that hiring is continuing at the Cupertino tech giant. In fact, we've reported on the former.

It turns out that both reports are correct. For some divisions, like Marketing and Corporate Functions, openings have been reduced. But for others, like Hardware and Machine Learning, openings and subsequent hiring appear to be as brisk as ever.

To be clear, overall, job listings at Apple have been cut back.

As recently as mid-March, Apple job listings were nearing the 6,000 mark, which would have been the company's most prolific hiring spree in history. But in late March, it became clear that no one would be going into the office any time soon, and openings quickly began disappearing from Apple's recruitment site. As of this week, openings at Apple are down to 5,240, signaling a decrease in hiring of about 13%.

But not all divisions are stalling their job listings. Neither Apple's "Hardware" nor "Machine Learning and AI" group shows a notable decline in job listings.

Hardware openings are flat at worst. Today's 1,570 openings aren't significantly different from a high of 1,600 in March.

Apple's "Machine Learning and AI" group remains as healthy as ever when it comes to new listings being posted to the company's careers sites. As of this week, the team has 334 openings. Last month, that number was 300, an 11% increase in hiring activity.

However, other groups at Apple have seen significant decreases in job listings, including "Software and Services", "Marketing", and "Corporate Functions".

Apple's "Software and Services" team saw a siginificant drop in openings, particularly on April 10, when around 110 openings were cut from the company's recruiting website overnight. Since mid-March, openings on the team have fallen by about 12%.

Between April 14 and April 23, the number of listings for Apple's "Marketing" team dropped by 84. In late March, Apple was seeking 311 people for its Marketing team. Since then, openings have fallen by 36% for the team.

"Corporate Functions" jobs at Apple, which include everything from HR to Finance and Legal, have also seen a steep decline in recent weeks. In late March, Apple listed more than 300 openings for the team. As of this week, it has just around 200 openings, a roughly 1/3 hiring freeze.

So is Apple in the middle of a hiring freeze? Some parts of the company appear frozen. Others appear as hot as ever. Given the in-person nature of Marketing and Corporate Functions jobs, it's not surprising that the company would tap the brakes on interviewing for such positions. On the other hand, engineers working on hardware and machine learning can be remotely interviewed and onboarded with equipment delivery.

So, yes, and yes. Apple is, and is not, in the middle of a hiring freeze.

Thinknum tracks companies using the information they post online - jobs, social and web traffic, product sales and app ratings - and creates data sets that measure factors like hiring, revenue and foot traffic. Data sets may not be fully comprehensive (they only account for what is available on the web), but they can be used to gauge performance factors like staffing and sales.

See original here:
Apple is on a hiring freeze ... except for its Hardware, Machine Learning and AI teams - Thinknum Media

Understanding The Recognition Pattern Of AI – Forbes

Image and object recognition

Of the seven patterns of AI that represent the ways in which AI is being implemented, one of the most common is the recognition pattern. The main idea of the recognition pattern of AI is that we're using machine learning and cognitive technology to help identify and categorize unstructured data into specific classifications. This unstructured data could be images, video, text, or even quantitative data. The power of this pattern is that we're enabling machines to do the thing that our brains seem to do so easily: identify what we're perceiving in the real world around us.

The recognition pattern is notable in that it was primarily the attempts to solve image recognition challenges that brought about heightened interest in deep learning approaches to AI, and helped to kick off this latest wave of AI investment and interest. The recognition pattern, however, is broader than just image recognition. In fact, we can use machine learning to recognize and understand images, sound, handwriting, items, faces, and gestures. The objective of this pattern is to have machines recognize and understand unstructured data. This pattern of AI is such a huge component of AI solutions because of its wide variety of applications.

The difference between structured and unstructured data is that structured data is already labelled and easy to interpret, whereas unstructured data is where most organizations struggle. Up to 90% of an organization's data is unstructured data. It becomes necessary for businesses to be able to understand and interpret this data, and that's where AI steps in. Whereas we can use existing query technology and informatics systems to gather analytic value from structured data, it is almost impossible to use those approaches with unstructured data. This is what makes machine learning such a potent tool when applied to these classes of problems.

Machine learning has a potent ability to recognize or match patterns that are seen in data. Specifically, we use supervised machine learning approaches for this pattern. With supervised learning, we use clean, well-labeled training data to teach a computer to categorize inputs into a set number of identified classes. The algorithm is shown many data points and uses that labeled data to train a neural network to classify data into those categories. The system makes connections between these images as it is repeatedly shown examples, and the goal is to eventually get the computer to recognize what is in an image based on that training. Of course, these recognition systems are highly dependent on having good quality, well-labeled data that is representative of the sort of data that the resultant model will be exposed to in the real world. Garbage in is garbage out with these sorts of systems.
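A minimal, self-contained example of that supervised recipe, using scikit-learn's small labeled digits dataset rather than any production recognition system; it simply trains on labeled images and checks how well the model classifies images it has never seen.

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Small labeled image set: 8x8 grayscale digits with known class labels (0-9).
digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

# Supervised learning: fit a classifier on the clean, labeled examples...
clf = LogisticRegression(max_iter=2000).fit(X_train, y_train)

# ...then measure how often it recognizes held-out images correctly.
print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```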

The many applications of the recognition pattern

The recognition pattern allows a machine learning system to be able to essentially look at unstructured data, categorize it, classify it, and make sense of what otherwise would just be a blob of untapped value. Applications of this pattern can be seen across a broad array of applications from medical imaging to autonomous vehicles, handwriting recognition to facial recognition, voice and speech recognition, or identifying even the most detailed things in videos and data of all types. Machine-learning enabled recognition has added significant power to security and surveillance systems, with the power to observe multiple simultaneous video streams in real time and recognize things such as delivery trucks or even people who are in a place they ought not be at a certain time of day.

The business applications of the recognition pattern are also plentiful. For example, in online retail and ecommerce industries, there is a need to identify and tag pictures for products that will be sold online. Previously humans would have to laboriously catalog each individual image according to all its attributes, tags, and categories. Nowadays, machine learning-based recognition systems are able to quickly identify products that are not already in the catalog and apply the full range of data and metadata necessary to sell those products online without any human interaction. This is a great place for AI to step in and be able to do the task much faster and much more efficiently than a human worker who is going to get tired out or bored. Not to mention these systems can avoid human error and allow for workers to be doing things of more value.

Not only is this recognition pattern being used with images, it's also used to identify sound in speech. There are lots of apps that exist that can tell you what song is playing or even recognize the voice of somebody speaking. Another application of this recognition pattern is recognizing animal sounds. The use of automatic sound recognition is proving to be valuable in the world of conservation and wildlife study. Using machines that can recognize different animal sounds and calls can be a great way to track populations and habits and get a better all-around understanding of different species. There could even be the potential to use this in areas such as vehicle repair where the machine can listen to different sounds being made by an engine and tell the operator of the vehicle what is wrong and what needs to be fixed and how soon.

One of the most widely adopted applications of the recognition pattern of artificial intelligence is the recognition of handwriting and text. While we've had optical character recognition (OCR) technology that can map printed characters to text for decades, traditional OCR has been limited in its ability to handle arbitrary fonts and handwriting. Machine learning-enabled handwriting and text recognition is significantly better at this job, in that it can not only recognize text in a wide range of printed or handwritten modes, but it can also recognize the type of data that is being recorded. For example, if there is text formatted into columns or a tabular format, the system can identify the columns or tables and appropriately translate to the right data format for machine consumption. Likewise, the systems can identify patterns in the data, such as Social Security numbers or credit card numbers. One of the applications of this type of technology is automatic check deposits at ATMs. Customers insert their handwritten checks into the machine, and it can then be used to create a deposit without having to go to a real person to deposit the checks.
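As a simplified illustration of that last step, post-OCR pattern matching might look like the sketch below. The regular expressions are deliberately naive (no checksum or context validation) and are assumptions for illustration only.

```python
import re

# Simplified patterns; real systems add checksums (e.g., Luhn) and context rules.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){15}\d\b"),
}

def tag_recognized_text(ocr_text):
    """Label spans of OCR output that look like known structured data types."""
    return {name: pattern.findall(ocr_text) for name, pattern in PATTERNS.items()}

sample = "Pay to: J. Doe  SSN 123-45-6789  card 4111 1111 1111 1111"
print(tag_recognized_text(sample))
```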

The recognition pattern of AI is also applied to human gestures. This is something already heavily in use by the video game industry. Players can make certain gestures or moves that then become in-game commands to move characters or perform a task. Another major application is allowing customers to virtually try on various articles of clothing and accessories. It's even being applied in the medical field by surgeons to help them perform tasks and even to train people on how to perform certain tasks before they have to perform them on a real person. Through the use of the recognition pattern, machines can even understand sign language and translate and interpret gestures as needed without human intervention.

In the medical industry, AI is being used to recognize patterns in various radiology imaging. For example, these systems are being used to recognize fractures, blockages, aneurysms, potentially cancerous formations, and even being used to help diagnose potential cases of tuberculosis or coronavirus infections. Analyst firm Cognilytica is predicting that within just a few years, machines will perform the first analysis of most radiology images with instant identification of anomalies or patterns before they go to a human radiologist for further evaluation.

The recognition pattern is also being applied to identify counterfeit products. Machine-learning based recognition systems are looking at everything from counterfeit products such as purses or sunglasses to counterfeit drugs.

The use of this pattern of AI is impacting every industry, from using images to get insurance quotes to analyzing satellite images after natural disasters to assess damage. Given the strength of machine learning in identifying patterns and applying that to recognition, it should come as little surprise that this pattern of AI will continue to see widespread adoption. In fact, in just a few years we might come to take the recognition pattern of AI for granted and not even consider it to be AI. That just speaks to the potency of this pattern of AI.

See the original post:
Understanding The Recognition Pattern Of AI - Forbes

The industries that can’t rely on machine learning – The Urban Twist

Ever since we started relying on machines and automation, people have been worried about the future of work and, specifically, whether robots will take over their jobs. And it seems this worry is becoming increasingly justified, as an estimated 40% of jobs could be replaced by robots for automated tasks by 2035. There is even a website dedicated to workers worried about whether they could eventually be replaced by robots.

While machines and artificial intelligence are becoming more complex and, therefore, more able to replace humans for menial tasks, that doesn't necessarily apply to a wide number of industries. Here, we'll go through the sectors that continue to require the human touch.

Despite scientists' best efforts, the language and translation industry cannot be replaced by machines. Currently, automatic translation programmes are being developed with deep learning, a form of artificial intelligence which allows the computer to identify and correct its own mistakes through prolonged use and understanding. However, this still isn't enough to guarantee a correct translation, as deep learning requires external factors, like language itself, to remain the same over time. As we know, language is constantly developing, often with changes so subtle you can't tell it's happening. For a machine to be able to accurately translate texts or speech, it would need to be constantly updated with every new modification, across all languages.

Machines are also less able to pick up on the nuances found in speech or text. Things like sarcasm, jokes, or pop culture references are not easily translated, as the new audience may not understand them. Translating idioms is a particularly common example of this, as these phrases are generally unique to their dialect. In the UK, for example, the phrase "it's raining cats and dogs" means it's raining heavily. You would not want this translated on a literal level. As London Translations state in an article on the importance of using professionals for financial text translation, literal translations are technically correct, but read awkwardly and can be difficult to comprehend due to poor knowledge of the source language. Needless to say, these issues would be totally unacceptable in a document as important as a financial report.

Translating with accuracy not only requires fluency in both languages, but also a complete understanding of cultural differences and how they can be compared. Machines are simply not able to naturally make these connections without having the information already inputted by a person.

Finding the perfect candidate for a role can get stressful, especially if you have a pool of excellent potential employees to choose from. However, there are now algorithms that recruiters can use to help speed the process up and, theoretically, pick the most suitable person for the job. The technology is being praised for its ability to remove discrimination, as it simply examines raw data, and thus omits any sense of natural prejudice. It can also work to speed up the hiring process, as a computer can quickly sift through applicants and present the most relevant ones, saving someone the job of having to manually read through every application before making a decision.

However, in practice, it's not that simple. Recruiting the right candidate should be based on more than qualifications and experience. Personality, attitude, and cultural fit should also be considered when recruiters are finding a candidate, none of which can be picked up on by machines.

One way of minimising this risk could be to introduce the algorithm at an earlier stage, through targeted ads or to help sift through initial applications. This allows recruiters to look at relevant candidates, rather than those that wouldn't have passed the initial screening anyway. However, this could conversely work to introduce bias to the recruitment process. The Harvard Business Review found that the algorithm effectively shapes the pool of candidates, giving a selection of applications that are all similar, fitting the mould that the computer is looking for. The study found that targeted ads on social media for a cashier role were shown to an audience that was 85% women, while cab driver ads were shown to an audience that was around 75% black. This happened as the algorithm reproduced bias from the real world, without human intervention. Having people physically checking the applications can serve to prevent this bias, introducing a more conscious effort to carefully screen each candidate on their own merits.

More people than ever before are meeting their partners online, according to a study published by Stanford University. And while a matchmaking algorithm sounds like a dream for singletons, it doesn't mean it can effectively set you up with your life partner. As these algorithms are actually the intellectual property of each app, Dr Samantha Joel, assistant professor at Western University in London, Canada, created her own app with colleagues. Volunteers were asked to complete a questionnaire about themselves and their ideal partners, much like typical dating websites would ask. After answering over 100 questions, the data was analysed and volunteers were set up on four-minute-long speed dates with potential candidates. Joel then asked the volunteers about their feelings towards any of their dates.

These results then identified the three things needed to predict romantic interest: actor desire (how much people liked their dates), partner desire (how much people were liked by their dates), and attractiveness. The researchers were able to subtract attractiveness from the scores of romantic interest, giving a measure of compatibility. However, while the algorithm could accurately predict actor and partner desire, it failed on compatibility. Instead, it may be worth sticking to the second most common way of meeting a partner: through a mutual friend. Your friends will be able to make educated decisions about relationships, as they have a deeper understanding of preferences and compatibility in a way that a machine simply can't replicate.
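A loose sketch of that scoring idea, not the study's actual method: model romantic interest from actor desire, partner desire, and attractiveness, then treat the unexplained residual as the pair-specific "compatibility" component the researchers could not predict. All data below is synthetic and illustrative.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Illustrative speed-date records, rated on arbitrary 1-10 scales.
rng = np.random.default_rng(1)
n = 400
actor_desire = rng.uniform(1, 10, n)     # how much the rater liked their dates
partner_desire = rng.uniform(1, 10, n)   # how much the rater tended to be liked
attractiveness = rng.uniform(1, 10, n)
romantic_interest = (0.5 * actor_desire + 0.3 * partner_desire
                     + 0.4 * attractiveness + rng.normal(0, 1.5, n))

# Remove the parts explained by desire and attractiveness; what is left over is
# the pair-specific "compatibility" signal the algorithm failed to predict.
X = np.column_stack([actor_desire, partner_desire, attractiveness])
reg = LinearRegression().fit(X, romantic_interest)
compatibility = romantic_interest - reg.predict(X)
print("unexplained (compatibility) variance:", round(float(np.var(compatibility)), 2))
```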

Author Bio: Syna Smith is chief editor of Business USA Today. She also has experience in digital marketing.

More:
The industries that can't rely on machine learning - The Urban Twist

Artificial Intelligence & Advanced Machine learning Market is expected to grow at a CAGR of 37.95% from 2020-2026 – Latest Herald

According to BlueWeave Consulting, the global Artificial Intelligence & Advanced Machine Learning market reached USD 29.8 Billion in 2019, is projected to reach USD 281.24 Billion by 2026, and is anticipated to grow at a CAGR of 37.95% during the forecast period from 2020-2026, owing to increasing overall global investment in Artificial Intelligence technology.
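Those figures are internally consistent: compounding the 2019 base at the stated CAGR over the seven years to 2026 (the compounding window is an assumption) lands close to the projected value.

```python
base_2019 = 29.8           # USD Billion
cagr = 0.3795
years = 2026 - 2019        # assuming seven full years of compounding
projected = base_2019 * (1 + cagr) ** years
print(f"Implied 2026 market size: USD {projected:.1f} Billion")  # ~283, vs. the reported 281.24
```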

Request the report sample pages at: https://www.blueweaveconsulting.com/artificial-intelligence-and-advanced-machine-learning-market-bwc19415/report-sample

Artificial Intelligence (AI) is a computer science, algorithm and analytics-driven approach to replicating human intelligence in a machine, and Machine Learning (ML) is an enhanced application of artificial intelligence that allows software applications to predict results accurately. The development of powerful and affordable cloud computing infrastructure is having a substantial impact on the growth potential of the artificial intelligence and advanced machine learning market. In addition, diversifying application areas of the technology, as well as a growing level of customer satisfaction among users of AI & ML services and products, is another factor currently driving the Artificial Intelligence & Advanced Machine Learning market. Moreover, in the coming years, applications of machine learning in various industry verticals are expected to rise exponentially. Proliferation in data generation is another major driving factor for the AI & Advanced ML market. As natural learning develops, artificial intelligence and advanced machine learning technology are paving the way for effective marketing, content creation, and consumer interactions.

In the organization size segment, the large enterprises segment is estimated to have the largest market share, while the SMEs segment is estimated to grow at the highest CAGR over the forecast period to 2026. Rapidly developing and highly active SMEs have increased the adoption of artificial intelligence and machine learning solutions globally, as a result of increasing digitization and rising cyber risks to critical business information and data. Large enterprises have been heavily adopting artificial intelligence and machine learning to extract the required information from large amounts of data and forecast the outcome of various problems.

Predictive analytics and machine learning are rapidly being used in retail, finance, and healthcare. The trend is estimated to continue as major technology companies invest resources in the development of AI and ML. Due to the large cost savings, effort savings, and reliability benefits of AI automation, machine learning is anticipated to drive the global artificial intelligence and advanced machine learning market during the forecast period to 2026.

Digitalization has become a vital driver of the artificial intelligence and advanced machine learning market across regions. Digitalization is increasingly propelling everything from hotel bookings and transport to healthcare in many economies around the globe, and it has led to a rise in the volume of data generated by business processes. Moreover, business developers and key executives are opting for solutions that let them act as data modelers and provide them with an adaptive semantic model. With the help of artificial intelligence and advanced machine learning, business users are able to modify dashboards and reports, as well as filter or develop reports based on their key indicators.

Geographically, the Global Artificial Intelligence & Advanced Machine Learning market is bifurcated into North America, Asia Pacific, Europe, Middle East, Africa & Latin America. North America dominates the market; owing to the developed economies of the US and Canada, there is a high focus on innovations obtained from R&D, and North America has rapidly become one of the most competitive markets in the world. The Asia-Pacific region is estimated to be the fastest-growing region in the global AI & Advanced ML market. Rising awareness of business productivity, supplemented with competently designed machine learning solutions offered by vendors present in the Asia-Pacific region, has led Asia-Pacific to become a market with high potential.

Request the report description pages at: https://www.blueweaveconsulting.com/artificial-intelligence-and-advanced-machine-learning-market-bwc19415/

Artificial Intelligence & Advanced Machine Learning Market: Competitive Landscape

The major market players in the Artificial Intelligence & Advanced Machine Learning market are ICarbonX, TIBCO Software Inc., SAP SE, Fractal Analytics Inc., Next IT, Iflexion, Icreon, Prisma Labs, AIBrain, Oracle Corporation, Quadratyx, NVIDIA, Inbenta, Numenta, Intel, Domino Data Lab, Inc., Neoteric, UruIT, Waverley Software, and other prominent players, who are expanding their presence in the market by implementing various innovations and technologies.

Read more here:
Artificial Intelligence & Advanced Machine learning Market is expected to grow at a CAGR of 37.95% from 2020-2026 - Latest Herald

Automation, AI, and ML: The Heroes in the World of Payment Fraud Detection – EnterpriseTalk

How are organizations leveraging AI to track fraudulent activity (for example, in the financial industry), and what tools are available to enterprises right now?

Machine Learning is not new in the world of payment fraud. In fact, one of the pioneers of Machine Learning is Professor Leon Cooper, Director of Brown University's Centre for Neural Science. The Centre was founded in 1973 to study animal nervous systems and the human brain. However, if you follow his career, Dr. Cooper's machine learning technology was adapted for spotting fraud on credit cards and is still used today for identifying payment fraud within many financial institutions around the world.

Machine learning technologies improve when they are presented with more and more data. Since there is a lot of payment data around today, payment fraud prevention has become an excellent use case for AI. To date, machine learning technologies have been used mainly by banks. Still, today more and more merchants are taking advantage of this technology to help automate fraud detection, including many retailers and telecommunications companies.

What are the interesting developments in this space for enterprises?

There is a lot of information on how machine learning is helping to understand human behavior and, more specifically, false-positive detection. However, it is our view that there is not enough focus on how automation could benefit the whole end-to-end process, particularly within day-to-day fraud management business processes.

Until now, the fraud detection industry has focused on detecting fraud reactively, but it has not focussed on proactively evaluating the impact of automation on the whole end-to-end fraud management process. Clearly, the interdependencies between these two activity streams are significant, so the question remains why fraud prevention suppliers aren't considering both.

Fraud is increasing, so at what point do we recognize that the approach of throwing budget at the problem and increasing the number of analysts in our teams is not working, and that we need to consider automating more of the process? Machines don't steal data, so why are the manual processes and interventions not attracting more attention?

It isn't a stretch to imagine most of the fraud risk strategy process becoming automated. Instead of the expanding teams of today performing the same manual task continually, those same staff members could be used to spot enhancements in customer insight. This would enable analysts to thoroughly investigate complex fraud patterns which a machine has not identified, or to assist in other tasks outside of risk management which provide added business value.

Process automation is continuing to innovate and provide increased efficiency and profit gains in the places it's implemented. The automation revolution isn't coming; it's here.

What are the major concerns?

One major concern is the lack of urgency in adopting new ways of working; there is a need to be more agile and innovative to stop the fraudsters from continuing to win. We need to act fast and innovate, but many organizations are struggling to keep up, and the fraudsters are winning.

The use cases are well defined for the use of machine learning and AI, with big data sets, etc., but machine learning alone will not fix poor data management processes. Machines don't steal data. People do.

With the number of digital payments being made across the globe increasing dramatically, how can organizations ensure maximum sales conversion and payment acceptance, whilst mitigating any risk exposure?

Strategy alignment for taking digital payments is critical. The more organizations can operate holistically and not get caught out by silos and operational gaps, the better. Put simply, if key stakeholders in both the sales and marketing and risk teams are working to the same set of key performance indicators (KPIs), then mistakes will be mitigated. Many issues arise due to operational gaps, and those gaps will be exploited by the highly sophisticated and technically advanced modern-day fraudster.

The reality is that technology is accelerating the convergence of business activities. Managing that convergence and adapting your organization to ensure it remains competitive becomes more and more important. Successful organizations with a competitive future will continue to ensure maximum sales conversions and payment acceptance, whilst mitigating any risk exposure, by exploiting best-of-breed technology as much as possible.

Excerpt from:
Automation, AI, and ML: The Heroes in the World of Payment Fraud Detection - EnterpriseTalk

Learning to Trust AI in Troubled Times – AiThority

As budgets tighten amidst a global crisis, marketers are scrambling to find better sources of truth. Whether it's prospecting performance, campaign management, or audience optimisation, there are many areas where success matters in the programmatic landscape. To meet this need, programmatic advertising is increasingly being driven by machine learning. So why would anyone doubt machine learning?

Machine learning models are, in many ways, expert liars. Machine learning optimises by any means necessary, and if blurring the truth or taking into account irrelevant information helps to optimise, then this is what occurs. It's scary to think how much an unchecked model could get away with in the fast-paced world of programmatic, where seconds count.

In fact, Artificial Intelligence (AI) researcher Sandra Wachter actually calls machine learning algorithms "black boxes", saying: "There is a lack of transparency around machine learning decisions and they're not necessarily justified or legitimate."

So, how can anyone ensure a machine learning model is telling the truth? The best way is to treat the model like a job interview candidate; that is, any statements made should be treated with the due amount of scepticism, and facts must always be checked.

When it comes to performance, everyone wants it better. However, while a model might offer better performance at face value, it's important to ask exactly how that is measured.

Machine learning technology can be time-consuming and expensive, and it's remarkably easy to waste money on a bad algorithm. Having good, solid proof that a model works is a great way to avoid wasting budget. Fact-checking and asking for more evidence is vital if unsure of results, and if the model vendor can't offer access to an analyst who can back up the numbers with the work, move on.

Just because all data is accessible doesn't mean it should be used, or that each point of data is as important as another.

Is knowing whether someone has bought a product before as important as the colour of their socks? If all data in the machine learning model is being used, marketers must ask how and why. Why is all of the data used? Why is all this important? What tests were run to prove it? Is the model even allowed to use all the data?

Everyone's familiar with the concept and purpose of GDPR and similar global legislation. So you must make sure you ask how data is being used, or run the risk of severe fines.

Brands have clear metrics to hit, and it's the job of client services, together with data engineering, to ensure the machine learning optimises towards the KPIs. However, the beauty of machine learning is that it frees up the client services team to do more than just achieve the brand's KPIs; it can help brands achieve business goals, too.

With thousands of successful campaigns under their belts, client services know what works and what doesn't. Users should expect to be able to contact a specialist at any time to make sure the model is doing what the client wants.

When talking about purchasing machine learning with a vendor who can't (or won't) answer your questions, it's time to bail. Marketers must feel empowered to ask any and all questions of vendors, and just like a job interview, if the answer isn't a good fit then neither is the candidate.

Not knowing about or not understanding machine learning is acceptable. However, what's not acceptable is not being allowed to question it because "machine learning just does it." In order to innovate, especially in volatile environments, everyone needs to better understand machine learning, and to achieve this, a two-way conversation is vital.

Silverbullet is the new breed of data-smart marketing services, designed to empower businesses to achieve through a unique hybrid of data services, insight-informed content and programmatic. Our blend of artificial intelligence and human experience

More about Silver Bullet: http://www.wearesivlerbullet.com


The rest is here:
Learning to Trust AI in Troubled Times - AiThority

Google's AutoML Zero lets the machines create algorithms to avoid human bias – The Next Web

It looks like Google's working on some major upgrades to AutoML, its automated machine learning development tool. According to a pre-print research paper authored by several of the big G's AI researchers, AutoML Zero is coming, and it's bringing evolutionary algorithms with it.

AutoML is a tool from Google that automates the process of developing machine learning algorithms for various tasks. It's user-friendly, fairly simple to use, and completely open source. Best of all, Google's always updating it.

In its current iteration, AutoML has a few drawbacks. You still have to manually create and tune several algorithms to act as building blocks for the machine to get started. This allows it to take your work and experiment with new parameters in an effort to optimize what you've done. Novices can get around this problem by using pre-made algorithm packages, but Google's working to automate this part too.

Per the Google team's pre-print paper:

It is possible today to automatically discover complete machine learning algorithms just using basic mathematical operations as building blocks. We demonstrate this by introducing a novel framework that significantly reduces human bias through a generic search space.

Despite the vastness of this space, evolutionary search can still discover two-layer neural networks trained by backpropagation. These simple neural networks can then be surpassed by evolving directly on tasks of interest, e.g. CIFAR-10 variants, where modern techniques emerge in the top algorithms, such as bilinear interactions, normalized gradients, and weight averaging.

Moreover, evolution adapts algorithms to different task types: e.g., dropout-like techniques appear when little data is available.

In other words: Google's figured out how to tap evolutionary algorithms for AutoML using nothing but basic math concepts. The developers created a learning paradigm in which the machine will spit out 100 randomly generated algorithms and then work to see which ones perform the best.

After several generations, the algorithms become better and better until the machine finds one that performs well enough to evolve. In order to generate novel algorithms that can solve new problems, the ones that survive the evolutionary process are tested against various standard AI problems, such as computer vision.
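To make the loop described above concrete, here is a minimal, illustrative sketch of evolutionary search: generate random candidates, score them, keep the best performers and mutate them. It is not Google's AutoML Zero implementation; the candidate representation, the fitness function and the class name are stand-ins.

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.Random;

public class EvolutionarySearchSketch {
    static final Random RNG = new Random(42);

    // A candidate "algorithm" is just a vector of coefficients here, a stand-in
    // for a program assembled from basic mathematical operations.
    static double[] randomCandidate(int size) {
        double[] c = new double[size];
        for (int i = 0; i < size; i++) c[i] = RNG.nextGaussian();
        return c;
    }

    // Toy fitness: how closely the candidate matches a fixed target (higher is better).
    static double fitness(double[] c, double[] target) {
        double err = 0;
        for (int i = 0; i < c.length; i++) err += (c[i] - target[i]) * (c[i] - target[i]);
        return -err;
    }

    // Mutation: perturb one randomly chosen element of a surviving candidate.
    static double[] mutate(double[] parent) {
        double[] child = parent.clone();
        child[RNG.nextInt(child.length)] += 0.1 * RNG.nextGaussian();
        return child;
    }

    public static void main(String[] args) {
        double[] target = {1.0, -2.0, 0.5, 3.0};

        // Start with 100 randomly generated candidates, as described above.
        List<double[]> population = new ArrayList<>();
        for (int i = 0; i < 100; i++) population.add(randomCandidate(target.length));

        Comparator<double[]> bestFirst = Comparator.comparingDouble((double[] c) -> -fitness(c, target));
        for (int generation = 0; generation < 200; generation++) {
            // Keep the best half, replace the rest with mutated copies of survivors.
            population.sort(bestFirst);
            List<double[]> next = new ArrayList<>(population.subList(0, 50));
            for (int i = 0; i < 50; i++) next.add(mutate(next.get(RNG.nextInt(50))));
            population = next;
        }
        population.sort(bestFirst);
        System.out.println("Best fitness after evolution: " + fitness(population.get(0), target));
    }
}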


Perhaps the most interesting byproduct of Google's quest to completely automate the act of generating algorithms and neural networks is the removal of human bias from our AI systems. Without us there to determine the best starting point for development, the machines are free to find things we'd never think of.

According to the researchers, AutoML Zero already outperforms its predecessor and similar state-of-the-art machine learning-generation tools. Future research will involve setting a narrower scope for the AI and seeing how well it performs in more specific situations, using a hybrid approach that creates algorithms with a combination of Zero's self-discovery techniques and human-curated starter libraries.

Published April 14, 2020 20:00 UTC

See more here:
Google's AutoML Zero lets the machines create algorithms to avoid human bias - The Next Web

Automated Machine Learning is the Future of Data Science – Analytics Insight

As the fuel that powers their ongoing digital transformation efforts, organizations everywhere are looking for ways to derive as much insight as possible from their data. The resulting increased demand for advanced predictive and prescriptive analytics has, in turn, prompted a call for more data scientists proficient with the latest artificial intelligence (AI) and machine learning (ML) tools.

However, such highly skilled data scientists are costly and hard to find. In fact, they're such a valuable asset that the phenomenon of the citizen data scientist has lately emerged to help close the skills gap. A complementary role rather than a direct substitution, citizen data scientists lack deep, formal data science expertise, yet they are capable of producing models using state-of-the-art diagnostic and predictive analytics. This capability is due in part to the arrival of accessible new technologies, such as automated machine learning (AutoML), that now automate many of the tasks once performed by data scientists.

The objective of AutoML is to shorten the cycle of trial and error and experimentation. It churns through a large number of models, and the hyperparameters used to configure those models, to determine the best model for the data presented. This is a dull and tedious activity for any human data scientist, however talented. AutoML platforms can perform this repetitive task more quickly and thoroughly, arriving at a solution faster and more effectively.
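As a rough illustration of the trial-and-error cycle AutoML automates, here is a minimal random-search sketch over model families and hyperparameters. The model names, the evaluate function and its scoring logic are hypothetical stand-ins for training and cross-validating real models.

import java.util.Random;

public class RandomSearchSketch {
    static final Random RNG = new Random(7);
    static final String[] MODEL_FAMILIES = {"logistic_regression", "random_forest", "gradient_boosting"};

    // Hypothetical scoring function; in practice this would train the chosen model
    // on the data and return a cross-validated metric such as AUC.
    static double evaluate(String model, double learningRate, int depth) {
        double base = model.equals("gradient_boosting") ? 0.80 : 0.75;
        return base - Math.abs(learningRate - 0.1) - 0.01 * Math.abs(depth - 6) + 0.02 * RNG.nextGaussian();
    }

    public static void main(String[] args) {
        double bestScore = Double.NEGATIVE_INFINITY;
        String bestConfig = "";
        for (int trial = 0; trial < 50; trial++) {
            // Sample a model family and a hyperparameter configuration at random.
            String model = MODEL_FAMILIES[RNG.nextInt(MODEL_FAMILIES.length)];
            double lr = Math.pow(10, -1 - 2 * RNG.nextDouble()); // learning rate in 0.001 .. 0.1
            int depth = 2 + RNG.nextInt(10);
            double score = evaluate(model, lr, depth);
            if (score > bestScore) {
                bestScore = score;
                bestConfig = String.format("%s (lr=%.4f, depth=%d)", model, lr, depth);
            }
        }
        System.out.println("Best configuration: " + bestConfig + ", score=" + String.format("%.3f", bestScore));
    }
}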

The ultimate value of AutoML tools isn't to replace data scientists but to offload their routine work and streamline their process, freeing them and their teams to concentrate their energy and attention on the parts of the process that require a higher level of reasoning and creativity. As their priorities change, it is important for data scientists to understand the full life cycle so they can shift their energy to higher-value tasks and sharpen the skills that further raise their value to their companies.

Airbnb, for example, continually looks for ways to improve its data science workflow. A good share of its data science projects involve machine learning, and many parts of this workflow are tedious. Airbnb uses machine learning to build customer lifetime value (LTV) models for guests and hosts. These models allow the company to improve its decision making and its interactions with the community.

Likewise, the company has found AutoML tools most valuable for regression and classification problems involving tabular datasets, although the state of this area is progressing rapidly. In summary, it believes that in certain cases AutoML can vastly increase a data scientist's productivity, often by an order of magnitude. It has used AutoML in several ways:

Unbiased presentation of challenger models: AutoML can rapidly present a plethora of challenger models built on the same training set as the incumbent model. This helps the data scientist choose the best model family.
Identifying target leakage: Because AutoML builds candidate models extremely fast in an automated way, data leakage can be spotted earlier in the modeling lifecycle.
Diagnostics: As mentioned earlier, canonical diagnostics such as learning curves, partial dependence plots and feature importances can be generated automatically.
In addition, tasks like exploratory data analysis, data pre-processing, hyperparameter tuning, model selection and putting models into production can be automated to some degree with an automated machine learning system.

Companies have moved towards enhancing predictive power by coupling big data with complex automated machine learning. AutoML, which uses machine learning to create better AI, is promoted as an opportunity to democratise machine learning by allowing firms with limited data science expertise to create analytical pipelines capable of handling sophisticated business problems.

Comprising a set of algorithms that automate the writing of other ML algorithms, AutoML automates the end-to-end process of applying ML to real-world problems. By way of illustration, a standard ML pipeline consists of the following: data pre-processing, feature extraction, feature selection, feature engineering, algorithm selection, and hyperparameter tuning. The significant skill and time it takes to execute these steps mean there's a high barrier to entry.

In an article published on Forbes, Ryohei Fujimaki, the founder and CEO of dotData, argues that the discussion misses the point if the emphasis on AutoML systems is on replacing or diminishing the role of the data scientist. After all, the longest and most challenging part of a typical data science workflow revolves around feature engineering: connecting data sources against a list of desired features that are then evaluated against various machine learning algorithms.

Success with feature engineering requires a high level of domain expertise to identify the right features through a tedious iterative process. Automation on this front allows even citizen data scientists to build streamlined use cases by drawing on their domain expertise. In short, this democratization of the data science process opens the door to new classes of developers, offering organizations a competitive advantage with minimal investment.

Here is the original post:
Automated Machine Learning is the Future of Data Science - Analytics Insight

Covid-19 Detection With Images Analysis And Machine Learning – Elemental

// We have just two outputs, positive and negative, according to our directories.
int outputNum = 2;
int numEpochs = 1;

/* The downloadData() method downloads the data and stores it in Java's tmpdir.
   It is a 15MB download (compressed) and will take 158MB of space when uncompressed.
   The data can be downloaded manually here. */

// Define the file paths
File trainData = new File(DATA_PATH + "/covid-19/training");
File testData = new File(DATA_PATH + "/covid-19/testing");

// Define the FileSplit(PATH, ALLOWED FORMATS, random)
FileSplit train = new FileSplit(trainData, NativeImageLoader.ALLOWED_FORMATS, randNumGen);
FileSplit test = new FileSplit(testData, NativeImageLoader.ALLOWED_FORMATS, randNumGen);

// Extract the parent path as the image label
ParentPathLabelGenerator labelMaker = new ParentPathLabelGenerator();

ImageRecordReader recordReader = new ImageRecordReader(height, width, channels, labelMaker);

// Initialize the record reader; add a listener to extract the name
recordReader.initialize(train);
// recordReader.setListeners(new LogRecordListener());

// DataSet iterator
DataSetIterator dataIter = new RecordReaderDataSetIterator(recordReader, batchSize, 1, outputNum);

// Scale pixel values to 0-1
DataNormalization scaler = new ImagePreProcessingScaler(0, 1);
scaler.fit(dataIter);
dataIter.setPreProcessor(scaler);

// Build our neural network
log.info("BUILD MODEL");
MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
        .seed(rngseed)
        .optimizationAlgo(OptimizationAlgorithm.STOCHASTIC_GRADIENT_DESCENT)
        .updater(new Nesterovs(0.006, 0.9))
        .l2(1e-4)
        .list()
        .layer(0, new DenseLayer.Builder().nIn(height * width).nOut(100)
                .activation(Activation.RELU).weightInit(WeightInit.XAVIER).build())
        .layer(1, new OutputLayer.Builder(LossFunctions.LossFunction.NEGATIVELOGLIKELIHOOD)
                .nIn(100).nOut(outputNum)
                .activation(Activation.SOFTMAX).weightInit(WeightInit.XAVIER).build())
        .setInputType(InputType.convolutional(height, width, channels))
        .build();

MultiLayerNetwork model = new MultiLayerNetwork(conf);

// The ScoreIterationListener will log output to show how well the network is training
model.setListeners(new ScoreIterationListener(10));

log.info("TRAIN MODEL");
for (int i = 0; i < numEpochs; i++) {
    model.fit(dataIter);
}

log.info("EVALUATE MODEL");
recordReader.reset();

// The model trained on the training dataset split.
// Now that it has trained, we evaluate against the test set of images the network has not seen.
recordReader.initialize(test);
DataSetIterator testIter = new RecordReaderDataSetIterator(recordReader, batchSize, 1, outputNum);
scaler.fit(testIter);
testIter.setPreProcessor(scaler);

/* Log the order of the labels for later use.
   In previous versions the label order was consistent but random;
   in current versions label order is lexicographic, so preserving the
   RecordReader labels order is no longer needed (left in for demonstration purposes). */
log.info(recordReader.getLabels().toString());

// Create an Evaluation object with outputNum possible classes
Evaluation eval = new Evaluation(outputNum);

// Evaluate the network
while (testIter.hasNext()) {
    DataSet next = testIter.next();
    INDArray output = model.output(next.getFeatures());
    // Compare the model output with the labels from the RecordReader
    eval.eval(next.getLabels(), output);
}
// Show the evaluation
log.info(eval.stats());
}

More here:
Covid-19 Detection With Images Analysis And Machine Learning - Elemental

Nothing to hide? Then add these to your ML repo, Papers with Code says DEVCLASS – DevClass

In a bid to make advancements in machine learning more reproducible, ML resource and Facebook AI Research (FAIR) appendage Papers With Code has introduced a code completeness checklist for machine learning papers.

It is based on the best practices the Papers with Code team has seen in popular research repositories and on the Machine Learning Reproducibility Checklist, which Joelle Pineau, FAIR Managing Director, introduced in 2019, as well as on additional work Pineau and other researchers have done since then.

Papers with Code was started in 2018 as a hub for newly published machine learning papers that come with source code, offering researchers an easy-to-monitor platform to keep up with the current state of the art. In late 2019 it became part of FAIR "to further accelerate our growth", as founders Robert Stojnic and Ross Taylor put it back then.

As part of FAIR, the project will get a bit of a visibility push since the new checklist will also be used in the submission process for the 2020 edition of the popular NeurIPS conference on neural information processing systems.

The ML code completeness checklist is used to assess code repositories based on the scripts and artefacts provided within them, to enhance reproducibility and enable others to more easily build upon published work. It includes checks for dependencies (so that those looking to replicate a paper's results have some idea of what is needed in order to succeed), training and evaluation scripts, pre-trained models, and results.

While all of these seem like useful things to have, Papers with Code also tried using a somewhat scientific approach to make sure they really are indicators for a useful repository. To verify that, they looked for correlations between the number of fulfilled checklist items and the star-rating of a repository.

Their analysis showed that repositories that hit all the marks got higher ratings, implying that the checklist score is indicative of higher-quality submissions, and should therefore encourage researchers to comply in order to produce useful resources. However, they simultaneously admitted that marketing and the state of documentation might also play into a repo's popularity.
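As a rough illustration of the kind of check described above (not the Papers with Code analysis itself), the sketch below computes a Pearson correlation between a repository's checklist score and its star count; the data values are made up.

public class ChecklistCorrelationSketch {

    // Pearson correlation coefficient of two equal-length samples.
    static double pearson(double[] x, double[] y) {
        int n = x.length;
        double sx = 0, sy = 0, sxx = 0, syy = 0, sxy = 0;
        for (int i = 0; i < n; i++) {
            sx += x[i]; sy += y[i];
            sxx += x[i] * x[i]; syy += y[i] * y[i];
            sxy += x[i] * y[i];
        }
        double cov = sxy - sx * sy / n;
        double varX = sxx - sx * sx / n;
        double varY = syy - sy * sy / n;
        return cov / Math.sqrt(varX * varY);
    }

    public static void main(String[] args) {
        // Hypothetical data: checklist items fulfilled vs. stars (log-scaled to tame outliers).
        double[] checklistScore = {0, 1, 2, 3, 4, 5, 2, 5, 3, 1};
        double[] stars = {3, 10, 25, 60, 150, 400, 20, 600, 80, 8};
        double[] logStars = new double[stars.length];
        for (int i = 0; i < stars.length; i++) logStars[i] = Math.log10(stars[i]);
        System.out.printf("Pearson r (checklist score vs. log stars) = %.2f%n", pearson(checklistScore, logStars));
    }
}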

They nevertheless went on to recommend laying out the five elements mentioned and linking to external resources, which is always a good idea. Additional tips for publishing research code can be found in the project's GitHub repository or in the report on the NeurIPS reproducibility program.

More:
Nothing to hide? Then add these to your ML repo, Papers with Code says DEVCLASS - DevClass

How Microsoft Teams will use AI to filter out typing, barking, and other noise from video calls – VentureBeat

Last month, Microsoft announced that Teams, its competitor to Slack, Facebook's Workplace, and Google's Hangouts Chat, had passed 44 million daily active users. The milestone overshadowed its unveiling of a few new features coming later this year. Most were straightforward: a hand-raising feature to indicate you have something to say, offline and low-bandwidth support to read chat messages and write responses even if you have poor or no internet connection, and an option to pop chats out into a separate window. But one feature, real-time noise suppression, stood out: Microsoft demoed how the AI minimized distracting background noise during a call.

We've all been there. How many times have you asked someone to mute themselves or to relocate from a noisy area? Real-time noise suppression will filter out someone typing on their keyboard while in a meeting, the rustling of a bag of chips (as you can see in the video above), and a vacuum cleaner running in the background. AI will remove the background noise in real time so you can hear only speech on the call. But how exactly does it work? We talked to Robert Aichner, Microsoft Teams group program manager, to find out.

The use of collaboration and video conferencing tools is exploding as the coronavirus crisis forces millions to learn and work from home. Microsoft is pushing Teams as the solution for businesses and consumers as part of its Microsoft 365 subscription suite. The company is leaning on its machine learning expertise to ensure AI features are one of its big differentiators. When it finally arrives, real-time background noise suppression will be a boon for businesses and households full of distracting noises. Additionally, how Microsoft built the feature is also instructive to other companies tapping machine learning.

Of course, noise suppression has existed in the Microsoft Teams, Skype, and Skype for Business apps for years. Other communication tools and video conferencing apps have some form of noise suppression as well. But that noise suppression covers stationary noise, such as a computer fan or air conditioner running in the background. The traditional noise suppression method is to look for speech pauses, estimate the baseline of noise, assume that the continuous background noise doesn't change over time, and filter it out.

Going forward, Microsoft Teams will suppress non-stationary noises like a dog barking or somebody shutting a door. "That is not stationary," Aichner explained. "You cannot estimate that in speech pauses. What machine learning now allows you to do is to create this big training set, with a lot of representative noises."

In fact, Microsoft open-sourced its training set earlier this year on GitHub to advance the research community in that field. While the first version is publicly available, Microsoft is actively working on extending the data sets. A company spokesperson confirmed that as part of the real-time noise suppression feature, certain categories of noises in the data sets will not be filtered out on calls, including musical instruments, laughter, and singing. (More on that here: ProBeat: Microsoft Teams video calls and the ethics of invisible AI.)

Microsoft can't simply isolate the sound of human voices because other noises also happen at the same frequencies. On a spectrogram of a speech signal, unwanted noise appears in the gaps between speech and overlapping with the speech. It's thus next to impossible to filter out the noise: if your speech and noise overlap, you can't distinguish the two. Instead, you need to train a neural network beforehand on what noise looks like and what speech looks like.

To get his points across, Aichner compared machine learning models for noise suppression to machine learning models for speech recognition. For speech recognition, you need to record a large corpus of users talking into the microphone and then have humans label that speech data by writing down what was said. Instead of mapping microphone input to written words, in noise suppression you're trying to get from noisy speech to clean speech.

"We train a model to understand the difference between noise and speech, and then the model is trying to just keep the speech," Aichner said. "We have training data sets. We took thousands of diverse speakers and more than 100 noise types. And then what we do is we mix the clean speech without noise with the noise. So we simulate a microphone signal. And then you also give the model the clean speech as the ground truth. So you're asking the model, 'From this noisy data, please extract this clean signal, and this is how it should look like.' That's how you train neural networks [in] supervised learning, where you basically have some ground truth."

For speech recognition, the ground truth is what was said into the microphone. For real-time noise suppression, the ground truth is the speech without noise. By feeding a large enough data set, in this case hundreds of hours of data, Microsoft can effectively train its model. "It's able to generalize and reduce the noise with my voice even though my voice wasn't part of the training data," Aichner said. "In real time, when I speak, there is noise that the model would be able to extract the clean speech [from] and just send that to the remote person."

Comparing the functionality to speech recognition makes noise suppression sound much more achievable, even though it's happening in real time. So why has it not been done before? Can Microsoft's competitors quickly recreate it? Aichner listed challenges for building real-time noise suppression, including finding representative data sets, building and shrinking the model, and leveraging machine learning expertise.

We already touched on the first challenge: representative data sets. The team spent a lot of time figuring out how to produce sound files that exemplify what happens on a typical call.

They used audiobooks to represent male and female voices, since speech characteristics differ between the two. They used YouTube data sets with labeled data that specify that a recording includes, say, typing and music. Aichner's team then combined the speech data and noise data using a synthesizer script at different signal-to-noise ratios. By amplifying the noise, they could imitate different realistic situations that can happen on a call.
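As an illustration of that synthesis step (a minimal sketch, not Microsoft's actual script), the following mixes a clean signal with noise at a chosen signal-to-noise ratio; the toy sine-wave "speech" and Gaussian "noise" stand in for real recordings.

public class SnrMixSketch {

    // Root-mean-square power of a signal.
    static double rms(double[] x) {
        double sum = 0;
        for (double v : x) sum += v * v;
        return Math.sqrt(sum / x.length);
    }

    // Scale the noise so that 20*log10(rms(speech)/rms(scaledNoise)) equals targetSnrDb,
    // then add it to the speech to simulate a noisy microphone signal.
    static double[] mixAtSnr(double[] speech, double[] noise, double targetSnrDb) {
        double gain = rms(speech) / (rms(noise) * Math.pow(10.0, targetSnrDb / 20.0));
        double[] noisy = new double[speech.length];
        for (int i = 0; i < speech.length; i++) {
            noisy[i] = speech[i] + gain * noise[i % noise.length];
        }
        return noisy;
    }

    public static void main(String[] args) {
        // Toy signals: one second of a 220 Hz "speech" tone and white "noise" at 16 kHz.
        double[] speech = new double[16000];
        double[] noise = new double[16000];
        java.util.Random rng = new java.util.Random(1);
        for (int i = 0; i < speech.length; i++) {
            speech[i] = 0.5 * Math.sin(2 * Math.PI * 220 * i / 16000.0);
            noise[i] = 0.1 * rng.nextGaussian();
        }
        double[] noisy = mixAtSnr(speech, noise, 10.0); // a 10 dB SNR training example
        System.out.printf("speech RMS=%.3f, noisy RMS=%.3f%n", rms(speech), rms(noisy));
        // In training, (noisy, speech) pairs form the model input and the ground truth.
    }
}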

But audiobooks are drastically different than conference calls. Would that not affect the model, and thus the noise suppression?

"That is a good point," Aichner conceded. "Our team did make some recordings as well to make sure that we are not just training on synthetic data we generate ourselves, but that it also works on actual data. But it's definitely harder to get those real recordings."

Aichner's team is not allowed to look at any customer data. Additionally, Microsoft has strict privacy guidelines internally. "I can't just simply say, 'Now I record every meeting.'"

So the team couldn't use Microsoft Teams calls. Even if they could (say, if some Microsoft employees opted in to have their meetings recorded), someone would still have to mark down exactly when distracting noises occurred.

"And so that's why we right now have some smaller-scale effort of making sure that we collect some of these real recordings with a variety of devices and speakers and so on," said Aichner. "What we then do is we make that part of the test set. So we have a test set which we believe is even more representative of real meetings. And then, we see if we use a certain training set, how well does that do on the test set? So ideally yes, I would love to have a training set which is all Teams recordings and has all types of noises people are listening to. It's just that I can't easily get the same volume of data that I can by grabbing some other open source data set."

I pushed the point once more: How would an opt-in program to record Microsoft employees using Teams impact the feature?

"You could argue that it gets better," Aichner said. "If you have more representative data, it could get even better. So I think that's a good idea to potentially in the future see if we can improve even further. But I think what we are seeing so far is even with just taking public data, it works really well."

The next challenge is to figure out how to build the neural network, what the model architecture should be, and iterate. The machine learning model went through a lot of tuning. That required a lot of compute. Aichner's team was of course relying on Azure, using many GPUs. Even with all that compute, however, training a large model with a large data set could take multiple days.

"A lot of the machine learning happens in the cloud," Aichner said. "So, for speech recognition for example, you speak into the microphone, that's sent to the cloud. The cloud has huge compute, and then you run these large models to recognize your speech. For us, since it's real-time communication, I need to process every frame. Let's say it's 10 or 20 millisecond frames. I need to now process that within that time, so that I can send that immediately to you. I can't send it to the cloud, wait for some noise suppression, and send it back."

For speech recognition, leveraging the cloud may make sense. For real-time noise suppression, it's a nonstarter. Once you have the machine learning model, you then have to shrink it to fit on the client. You need to be able to run it on a typical phone or computer. A machine learning model only for people with high-end machines is useless.

There's another reason why the machine learning model should live on the edge rather than in the cloud. Microsoft wants to limit server use. Sometimes, there isn't even a server in the equation to begin with. For one-to-one calls in Microsoft Teams, the call setup goes through a server, but the actual audio and video signal packets are sent directly between the two participants. For group calls or scheduled meetings, there is a server in the picture, but Microsoft minimizes the load on that server. Doing a lot of server processing for each call increases costs, and every additional network hop adds latency. It's more efficient from a cost and latency perspective to do the processing on the edge.

"You want to make sure that you push as much of the compute to the endpoint of the user because there isn't really any cost involved in that. You already have your laptop or your PC or your mobile phone, so now let's do some additional processing. As long as you're not overloading the CPU, that should be fine," Aichner said.

I pointed out there is a cost, especially on devices that aren't plugged in: battery life. "Yeah, battery life, we are obviously paying attention to that too," he said. "We don't want you now to have much lower battery life just because we added some noise suppression. That's definitely another requirement we have when we are shipping. We need to make sure that we are not regressing there."

It's not just regression that the team has to consider, but progression in the future as well. Because we're talking about a machine learning model, the work never ends.

"We are trying to build something which is flexible in the future because we are not going to stop investing in noise suppression after we release the first feature," Aichner said. "We want to make it better and better. Maybe for some noise tests we are not doing as well as we should. We definitely want to have the ability to improve that. The Teams client will be able to download new models and improve the quality over time whenever we think we have something better."

The model itself will clock in at a few megabytes, but it won't affect the size of the client itself. He said, "That's also another requirement we have. When users download the app on the phone or on the desktop or laptop, you want to minimize the download size. You want to help the people get going as fast as possible."

"Adding megabytes to that download just for some model isn't going to fly," Aichner said. "After you install Microsoft Teams, later in the background it will download that model. That's what also allows us to be flexible in the future, that we could do even more, have different models."

All the above requires one final component: talent.

"You also need to have the machine learning expertise to know what you want to do with that data," Aichner said. "That's why we created this machine learning team in this intelligent communications group. You need experts to know what they should do with that data. What are the right models? Deep learning has a very broad meaning. There are many different types of models you can create. We have several centers around the world in Microsoft Research, and we have a lot of audio experts there too. We are working very closely with them because they have a lot of expertise in this deep learning space."

The data is open source and can be improved upon. A lot of compute is required, but any company can simply leverage a public cloud, including the leaders Amazon Web Services, Microsoft Azure, and Google Cloud. So if another company with a video chat tool had the right machine learners, could they pull this off?

"The answer is probably yes, similar to how several companies are getting speech recognition," Aichner said. "They have a speech recognizer where there's also lots of data involved. There's also lots of expertise needed to build a model. So the large companies are doing that."

Aichner believes Microsoft still has a heavy advantage because of its scale. "I think that the value is the data," he said. "What we want to do in the future is, like what you said, have a program where Microsoft employees can give us more than enough real Teams calls so that we have an even better analysis of what our customers are really doing, what problems they are facing, and customize it more towards that."

Read the original post:
How Microsoft Teams will use AI to filter out typing, barking, and other noise from video calls - VentureBeat

Department Of Energy Announces $30 Million For Advanced AI & ML-Based Researches – Analytics India Magazine

The Department of Energy in the US has recently announced its initiative to provide up to $30 million for advanced research in artificial intelligence and machine learning. This fund can be used for both scientific investigation and the management of complex systems.

This initiative comprises a two-fold strategy.

First, it focuses on the development of artificial intelligence and machine learning for predictive modelling and simulation in research across the physical sciences. ML and AI are considered to offer promising new alternatives to conventional programming methods for computer modelling and simulation. Second, the fund will be used for fundamental ML and AI research on decision support in addressing complex systems.

Eventually, the potential applications could include cybersecurity, power grid resilience, and other complex processes where these emerging technologies can make, or assist in making, business decisions in real time.

When asked, Under Secretary for Science Paul Dabbar stated that both these technologies, artificial intelligence and machine learning, are "among the most powerful tools we have today for both advancing scientific knowledge and managing our increasingly complex technological environment."

He further said, "This foundational research will help keep the United States in the forefront as applications for ML and AI rapidly expand, and as we utilise this evolving technology to solve the world's toughest challenges such as COVID-19."

Applications for this initiative will be open to DOE national laboratories, universities, nonprofits, and industry, and funding will be awarded based on peer review.

According to DOE, the planned funding for the scientific machine learning for modelling and simulations topic will be up to $10 million in FY 2020 dollars for projects of two years in duration. On the other hand, the planned funding for the artificial intelligence and decision support for complex systems topic will be up to $20 million, with up to $7 million in FY 2020 dollars and out-year funding contingent on congressional appropriations.


More here:
Department Of Energy Announces $30 Million For Advanced AI & ML-Based Researches - Analytics India Magazine

Data Annotation- Types, Tools, Benefits, and Applications in Machine Learning – Customer Think

It is unarguably true that the advent of machine learning and artificial intelligence has brought revolutionary change to various industries globally. Both these technologies have made applications and machines smarter than we could have imagined. But have you ever wondered how AI and ML work, or how they make machines act, think, and behave like human beings?

To understand this, you have to dig deeper into the technical details. It is actually trained data sets that do the magic of creating automated machines and applications. These data sets, in turn, need to be created and labeled through a process called data annotation.

Data annotation is the technique of labeling the data, which is present in different formats such as images, texts, and videos. Labeling the data makes objects recognizable to computer vision, which further trains the machine. In short, the process helps the machine to understand and memorize the input patterns.

To create a data set required for machine learning, different types of data annotation methods are available. The prime aim of all these types of annotations is to help a machine to recognize text, images, and videos (objects) via computer vision.

Bounding boxes
Lines and splines
Semantic segmentation
3D cuboids
Polygonal segmentation
Landmark and key-point
Images and video annotations
Entity annotation
Content and text categorization

Let's read about them in detail:

The most common kind of data annotation is bounding boxes. These are the rectangular boxes used to identify the location of the object. It uses x and y-axis coordinates in both the upper-left and lower-right corners of the rectangle. The prime purpose of this type of data annotation is to detect the objects and locations.
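As a small illustration of this structure (a sketch only, not a specific tool's format; the field names are made up), a bounding-box annotation can be represented as a class label plus the coordinates of the two corners:

public class BoundingBoxSketch {

    // A class label plus the x/y coordinates of the upper-left and lower-right corners.
    record BoundingBox(String label, double xMin, double yMin, double xMax, double yMax) {
        double width()  { return xMax - xMin; }
        double height() { return yMax - yMin; }
    }

    public static void main(String[] args) {
        // One labeled object in a 640x480 image.
        BoundingBox car = new BoundingBox("car", 120.0, 200.0, 360.0, 340.0);
        System.out.println(car.label() + " box: " + car.width() + " x " + car.height() + " px");
    }
}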

This type of data annotation is created by lines and splines to detect and recognize lanes, which is required to run an autonomous vehicle.

This type of annotation finds its role in situations where environmental context is a crucial factor. It is a pixel-wise annotation that assigns every pixel of the image to a class (car, truck, road, park, pedestrian, etc.). Each pixel holds a semantic sense. Semantic segmentation is most commonly used to train models for self-driving cars.

This type of data annotation is almost like bounding boxes but it provides extra information about the depth of the object. Using 3D cuboids, a machine learning algorithm can be trained to provide a 3D representation of the image.

The image can further help in distinguishing vital features (such as volume and position) in a 3D environment. For instance, 3D cuboids help driverless cars use depth information to determine the distance of objects from the vehicle.

Polygonal segmentation is used to identify complex polygons to determine the shape and location of the object with the utmost accuracy. This is also one of the common types of data annotations.

These two annotations are used to create dots across the image to identify the object and its shape. Landmark and key-point annotations play their role in facial recognitions, identifying body parts, postures, and facial expressions.

Entity annotation is used for labeling unstructured sentences with the relevant information understandable by a machine. It can be further categorized into named entity recognition and intent extraction.

Data annotation offers innumerable advantages to the machine learning algorithms that are trained to make predictions from data. Here are some of the advantages of this process:

Enhanced user experience

Applications powered by ML-based trained models help deliver a better experience to end users. AI-based chatbots and virtual assistants are a perfect example: the technique enables these chatbots to provide the most relevant information in response to a user's query.

Improved precision

Image annotations increase the accuracy of output by training the algorithm with huge data sets. Leveraging these data sets, the algorithm learns various factors that help the model find the most suitable information in the database.

The most common annotation formats include:

COCO
YOLO
Pascal VOC
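As an example of one of these formats, a YOLO label line stores the class id and the box centre, width and height, all normalized to the image size. Below is a small sketch (illustrative, not tied to any particular tool) that converts pixel corners to that representation:

public class YoloLabelSketch {

    // Convert pixel-coordinate corners into a YOLO-format label line:
    // class id, x-center, y-center, width, height (all normalized to 0-1).
    static String toYoloLine(int classId, double xMin, double yMin, double xMax, double yMax,
                             double imageWidth, double imageHeight) {
        double xCenter = (xMin + xMax) / 2.0 / imageWidth;
        double yCenter = (yMin + yMax) / 2.0 / imageHeight;
        double width = (xMax - xMin) / imageWidth;
        double height = (yMax - yMin) / imageHeight;
        return String.format("%d %.6f %.6f %.6f %.6f", classId, xCenter, yCenter, width, height);
    }

    public static void main(String[] args) {
        // A "car" (class 0) occupying pixels (120,200)-(360,340) in a 640x480 image.
        System.out.println(toYoloLine(0, 120, 200, 360, 340, 640, 480));
        // Prints: 0 0.375000 0.562500 0.375000 0.291667
    }
}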

By now, you must be aware of the different types of data annotations. Let's check out the applications of the same in machine learning:

Sequencing- It includes text and time series and a label.

Classification- Categorizing the data into multiple classes, one label, multiple labels, binary classes, and more.

Segmentation- It is used to search the position where a paragraph splits, search transitions between different topics, and for various other purposes.

Mapping- It can be done for language to language translation, to convert a complete text into the summary, and to accomplish other tasks.

Check out below some of the common tools used for annotating images:

Rectlabel
LabelMe
LabelImg
MakeSense.AI
VGG Image Annotator

In this article, we have covered what data annotation or labeling is, its types, and its benefits. Besides this, we have also listed the top tools used for labeling images. The process of labeling texts, images, and other objects helps ML-based algorithms improve the accuracy of their output and deliver a better user experience.

A reliable and experienced machine learning company will know how to use these data annotations to serve the purpose an ML algorithm is being designed for. You can contact such a company or hire ML developers to develop an ML-based application for your startup or enterprise.


Read the original post:
Data Annotation- Types, Tools, Benefits, and Applications in Machine Learning - Customer Think

How AI can help payers navigate a coming wave of delayed and deferred care – FierceHealthcare

So far insurers have seen healthcare use plummet since the onset of the COVID-19 pandemic.

But experts are concerned about a wave of deferred care that could hit as patients start to return to physicians and hospitals, putting insurers on the hook for an unexpected surge of healthcare spending.

Artificial intelligence and machine learning could lend insurers a hand.


"We are using the AI approaches to try to protect future cost bubbles," said Colt Courtright, chief data and analytics officer at Premera Blue Cross, during a session at Fierce AI Week on Wednesday.


He noted that people are not going in and getting even routine cancer screenings.

"If people have delay in diagnostics and delay in medical care, how is that going to play out in the future when we think about those individuals and the need for clinical programs and the cost, and how do we manage that?" he said.

Insurers have started to incorporate AI and machine learning in several areas, such as claims management and customer service, and they are also starting to explore how AI can be used to predict healthcare costs and outcomes.

In some ways, the pandemic has accelerated the use of AI and digital technologies in general.

"If we can predict, forecast and personalize care virtually, then why not do that," said Rajeev Ronanki, senior vice president and chief digital officer for Anthem, during the session.

The pandemic has led to a boom in virtual telemedicine as the Trump administration has increased flexibility for getting Medicare payments for telehealth and patients have been scared to go to hospitals and physician offices.

But Ronanki said that AI can't just help with predicting healthcare costs; it can also help fix supply chains wracked by the pandemic.

He noted that the manufacturing global supply chain is extremely optimized, especially with just-in-time ordering that doesn't require businesses to hold a large amount of inventory.

But that method doesn't really work during a pandemic, when there is a vast imbalance between supply and demand for personal protective equipment, said Ronanki.

"When you connect all those dots, AI can then be used to configure supply and demand better in anticipation of issues like this," he said.

View original post here:
How AI can help payers navigate a coming wave of delayed and deferred care - FierceHealthcare

What is ‘custom machine learning’ and why is it important for programmatic optimisation? – The Drum

Wayne Blodwell, founder and chief exec of The Programmatic Advisory & The Programmatic University, battles through the buzzwords to explain why custom machine learning can help you unlock differentiation and regain a competitive edge.

Back in the day, simply having programmatic on plan was enough to give you a competitive advantage, and no one asked any questions. But as programmatic has grown and matured (84.5% of US digital display spend is due to be bought programmatically in 2020; the UK is on track for 92.5%), what's next to gain advantage in an increasingly competitive landscape?

Machine Learning

[noun]

The use and development of computer systems that are able to learn and adapt without following explicit instructions, by using algorithms and statistical models to analyse and draw inferences from patterns in data.

(Oxford Dictionary, 2020)

You've probably heard of machine learning as it exists in many Demand Side Platforms (DSPs) in the form of automated bidding. Automated bidding functionality does not require a manual CPM bid input nor any further bid adjustments; instead, bids are automated and adjusted based on machine learning. Automated bids work from goal inputs, e.g. achieve a CPA of x or simply maximise conversions, and these inputs steer the machine learning to prioritise certain needs within the campaign. This tool is immensely helpful in taking the guesswork out of bids and removing the need for continual bid intervention.
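To make the goal-input idea concrete, here is a highly simplified sketch (not any DSP's actual algorithm) of how a CPA goal can translate into a bid: the predicted conversion probability times the target CPA gives the value of an impression, which then sets the CPM bid. The prediction function and its inputs are hypothetical.

public class AutoBidSketch {

    // Hypothetical predicted conversion probability for an impression;
    // in a real system this would come from a trained model using user and context features.
    static double predictConversionProbability(double recencyScore, double contextScore) {
        double z = -10.0 + 2.0 * recencyScore + 1.5 * contextScore;
        return 1.0 / (1.0 + Math.exp(-z)); // logistic link
    }

    // Bid per thousand impressions so that expected spend per conversion roughly equals the target CPA.
    static double cpmBid(double conversionProbability, double targetCpa) {
        return conversionProbability * targetCpa * 1000.0;
    }

    public static void main(String[] args) {
        double targetCpa = 40.0; // the goal input, e.g. "achieve a CPA of 40"
        double p = predictConversionProbability(0.8, 0.6);
        System.out.printf("pConv=%.4f -> CPM bid=%.2f%n", p, cpmBid(p, targetCpa));
    }
}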

These are what would be considered off-the-shelf algorithms, as all buyers within the DSP have access to the same tool. There is a heavy reliance on this automation for buying, with many even forgoing traditional optimisations for fear of disrupting the learnings and holding it back. But how do we know this approach is truly maximising our results? A sketch of the underlying bidding idea appears above.

Well, we don't. What we do know is that this machine learning will be reasonably generic, to suit the broad range of buyers activating in the platforms. And more often than not, the functionality is limited to a single success metric, provided with little context, which can isolate campaign KPIs from their true overarching business objectives.

Custom machine learning

Instead of using out of the box solutions, possibly the same as your direct competitors, custom machine learning is the next logical step to unlock differentiation and regain an edge. Custom machine learning is simply machine learning that is tailored towards specific needs and events.

Off-the-shelf algorithms are owned by the DSPs; however, custom machine learning is owned by the buyer. The opportunity for application is growing, with leading DSPs opening their APIs and consoles to allow custom logic to be built on top of existing infrastructure. Third-party machine learning partners are also available, such as Scibids, MIQ & 59A, which will develop custom logic and add a layer onto the DSPs to act as a virtual trader, building out granular strategies and approaches.

With this ownership and customisation, buyers can factor in custom metrics such as viewability measurement and feed in their first party data to align their buying and success metrics with specific business goals.

This level of automation not only provides a competitive edge in terms of correctly valuing inventory and prioritisation, but the transparency of the process allows trust to rightfully be placed with automation.

Custom considerations

For custom machine learning to be effective, there are a handful of fundamental requirements which will help determine whether this approach is relevant for your campaigns. It's important to have conversations with providers about minimum event thresholds and campaign size, to understand how much value you stand to gain from this path.

Furthermore, a custom approach will not fix a poor campaign. Custom machine learning is intended to take a well-structured and well-managed campaign and maximise its potential. Data needs to be in good order for it to be adequately ingested and for real insight and benefit to be gained. Custom machine learning cannot simply be left to fend for itself; it may lighten a trader's regular day-to-day load, but it needs to be maintained and closely monitored for maximum impact.

While custom machine learning brings numerous benefits to the table (transparency, flexibility, goal alignment), it's not without upkeep and workflow disruption. Levels of operational commitment may differ depending on the vendors selected to facilitate this customisation and their functionality, but generally buyers must be willing to adapt to maximise the potential that custom machine learning holds.

Find out more on machine learning in a session The Programmatic University are hosting alongside Scibids on The Future Of Campaign Optimisation on 17 September. Sign up here.

See the original post here:
What is 'custom machine learning' and why is it important for programmatic optimisation? - The Drum

Deliver More Effective Threat Intelligence with Federated Machine Learning – SC Magazine

Cybercriminals never stop innovating. Their increased use of automated and scripted attacks that boost speed and scale makes them more sophisticated and dangerous than ever. And because of the volume, velocity and sophistication of today's global threat landscape, enterprises must respond in real time and at machine speeds to effectively counter these aggressive attacks. Machine learning and artificial intelligence can help deliver better, more effective threat intelligence.

As we move through 2020, AI has started increasing its capacity to detect attack patterns using a combination of threat intelligence feeds delivered by a variety of external sources, ranging from vendors to industry consortiums, and distributed sensors and learning nodes that gather information about the threats and probes targeting the edges of the networks.

This new form of distributed AI relies on something called federated machine learning. Instead of relying on a single, centralized AI system to process data and initiate a response to threats (like in centralized AI), these regional machine learning nodes will respond to threats autonomously using existing threat intelligence. Just as white blood cells automatically react to an infection, and clotting systems respond to a cut without requiring the brain to initiate those responses, these interconnected systems can see, correlate, track, and prepare for threats as they move through cyberspace by sharing information across the network, enabling local nodes to respond with increasing accuracy and efficiency to events by leveraging continually updated response models.

It's all part of an iterative cycle, where in addition to the passive data collected by local learning nodes, the data gleaned from active responses, including how malware or attackers fight back, will also get shared across the network of local peers. This will let the entire system further refine its ability to identify additional unique characteristics of attack patterns and strategies, and formulate increasingly effective threat responses.

There are many encouraging implications for cybersecurity. Security pros will use this system of distributed nodes connected to a central AI brain to detect even the most subtle deviations in normal network traffic. Examples of this are already emerging in research and development labs, particularly in health care, where researchers are using federated learning to train algorithms without centralizing sensitive data and running afoul of HIPAA. When added to production networks, this technology will make it increasingly difficult for cybercriminals to hide.

Building from there, AI can share its locally collected data with other AI systems via an M2M interface, whether from peers in an industry, within a specific geography, or with law enforcement developing a more global perspective.

In addition to pulling from external feeds or analyzing internal traffic and data, federated machine learning will feed on the deluge of relevant information coming from new edge computing devices and environments being collected by local learning nodes.

For this to work, these local nodes will need to operate in a continuous learning mode and evolve from a hub-and-spoke model back to the central AI to a more interconnected system. Rather than operating as information islands, a federated learning system would let these data sets interconnect so that learning models could adapt to event trends and changing environments from the moment a threat gets detected.

That way, rather than waiting for information to make the round trip to the central AI once an attack sensor has been tripped, other local learning nodes and embedded security devices are immediately alerted. These regional elements could then create and coordinate an ad-hoc swarm of local, interactive components to autonomously respond to the threat in real-time, even in mid-attack by anticipating the next move of the attacker or malware, while waiting for refined intelligence from a supervised authoritative master AI node.

Finally, the systems would share these events with the master AI node and also local learner nodes so that an event at one location improves the intelligence of the entire system. This would let the system customize the intelligence to the unique configurations and solutions in place at a particular place in the network. This would help local nodes collect and process data more efficiently, and also enhance their first-tier response to local cyber events.
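As a rough sketch of the pattern described above (illustrative only, not FortiGuard's system), the code below has several local learning nodes train on their own data and a central node average their parameters into a shared model, the basic federated-averaging loop.

import java.util.List;

public class FederatedAveragingSketch {

    // Each local node computes an updated weight vector from its own (private) data.
    // Here that is simulated by nudging the global weights toward a local optimum.
    static double[] localUpdate(double[] globalWeights, double[] localOptimum, double stepSize) {
        double[] updated = globalWeights.clone();
        for (int i = 0; i < updated.length; i++) {
            updated[i] += stepSize * (localOptimum[i] - updated[i]);
        }
        return updated;
    }

    // The central node averages the locally trained weights (federated averaging).
    static double[] average(List<double[]> updates) {
        double[] avg = new double[updates.get(0).length];
        for (double[] u : updates) {
            for (int i = 0; i < avg.length; i++) avg[i] += u[i] / updates.size();
        }
        return avg;
    }

    public static void main(String[] args) {
        double[] global = {0.0, 0.0, 0.0};
        // Three regional learning nodes, each with slightly different local data.
        List<double[]> localOptima = List.of(
                new double[]{1.0, 0.5, -0.2},
                new double[]{0.8, 0.7, -0.1},
                new double[]{1.2, 0.4, -0.3});

        for (int round = 0; round < 10; round++) {
            // Each node trains locally; only the resulting parameters are shared.
            List<double[]> updates = localOptima.stream()
                    .map(opt -> localUpdate(global, opt, 0.5))
                    .toList();
            double[] newGlobal = average(updates);
            System.arraycopy(newGlobal, 0, global, 0, global.length);
        }
        System.out.printf("Global weights after 10 rounds: [%.2f, %.2f, %.2f]%n",
                global[0], global[1], global[2]);
    }
}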

The security industry clearly needs more efficient ways to analyze threat intelligence. When combined with automation to assist with autonomous decision-making, the intelligence gathered with federated machine learning will help organizations more effectively fight the increasingly aggressive and damaging nature of today's cybercrime. Throughout 2020 and beyond, AI in its various forms will continue to move forward, helping to level the playing field and making it possible to fend off the growing deluge of attacks.

Derek Manky, chief, Global Threat Alliances, FortiGuard Labs

Original post:
Deliver More Effective Threat Intelligence with Federated Machine Learning - SC Magazine