Adventures With Artificial Intelligence and Machine Learning – Toolbox

Since October of last year I have had the opportunity to work with a startup working on automated machine learning, and I thought I would share some thoughts on the experience and on what one might want to consider at the start of a journey with a "data scientist in a box."

I'll start by saying that machine learning and artificial intelligence have almost forced themselves into my work several times in the past eighteen months, all in slightly different ways.

The first brush was back in June 2018, when one of the developers I was working with wanted to demonstrate a scoring model for loan applications, based on the analysis of other transactional data that indicated which loans had previously been granted. The model came with no explanation and no details other than the fact that it allowed you to stitch together a transactional dataset, which it assessed using a naive Bayes algorithm. We had a run at showing this to a wider audience, but the appetite for a closer examination seemed low, and I suspect the real reason was that we didn't have real data and only had a conceptual problem to solve.
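
To make that concrete, the kind of model the developer was showing can be sketched in a few lines of scikit-learn. This is a minimal illustration, not the actual prototype; the CSV file, feature names and the granted/declined label are hypothetical.

```python
# Hypothetical sketch of a naive Bayes loan-scoring model like the one described above.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import roc_auc_score

df = pd.read_csv("loan_transactions.csv")            # assumed historical transactional dataset
features = ["income", "loan_amount", "tenure_months", "prior_defaults"]   # assumed columns
X, y = df[features], df["loan_granted"]               # y: 1 = loan was granted, 0 = declined

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = GaussianNB().fit(X_train, y_train)
scores = model.predict_proba(X_test)[:, 1]            # score: probability of a grant-worthy application
print("AUC:", roc_auc_score(y_test, scores))
```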

The second go was about six months later, when another colleague in the same team came up with a way to classify data sets and, in fact, developed a flexible training engine and data-tagging approach for determining whether certain columns in a data set were likely to be names, addresses, phone numbers or email addresses. On face value you would think this simple, but in reality it is of course only as good as the training data, and in this instance we could easily confuse the system and the data tagging: social security numbers looked like phone numbers, postcodes were simply numbers that could ultimately be anything, and so on. Name detection was only as good as the locality from which the names training data was sourced, and cities, towns, streets and provinces all proved to work mostly OK but almost always needed region-specific training data. At any rate, this method of classifying contact data for the most part met the rough objectives of the task at hand, and so we soldiered on.
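
As an illustration of what that data tagging can look like in practice, here is a minimal, training-data-driven column classifier. It is a generic sketch rather than my colleague's engine; the example values and labels are invented, and in real use the result is only as good (and as regional) as the training data, exactly as noted above.

```python
# Sketch: tag a column of values as name / phone / email / postcode by majority vote.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_values = ["John Smith", "Maria Garcia", "+1 415 555 0100", "020 7946 0958",
                "jane@example.com", "bob@mail.org", "90210", "SW1A 1AA"]
train_labels = ["name", "name", "phone", "phone", "email", "email", "postcode", "postcode"]

# Character n-grams capture shape cues (digits, "@", spacing) rather than exact words.
clf = make_pipeline(TfidfVectorizer(analyzer="char", ngram_range=(1, 3)),
                    LogisticRegression(max_iter=1000))
clf.fit(train_values, train_labels)

column_sample = ["555-0199", "555-0147", "555-0123"]
votes = list(clf.predict(column_sample))
print(max(set(votes), key=votes.count))   # tag the column with the most common prediction
```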

A few months later I was called over to a developer's desk and asked for my opinion on a side project that one of the senior developers and architects had been working on. The objective was ambitious but impressive. The solution had been built in response to three problems in the field. The first problem to be solved was decoding why certain records were deemed to be related to one another when, to the naked eye, they seemed not to be, or vice versa. While this piece didn't involve any ML per se, the second part of the solution did: it self-configured thousands of combinations of alternative fuzzy-matching criteria to determine an optimal set of duplicate-record matching rules.
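
For a sense of what searching over fuzzy-matching criteria involves, the sketch below scores a pair of records under a handful of candidate rule configurations using the rapidfuzz library. The thresholds, fields and records are arbitrary examples; the prototype evaluated thousands of such combinations to arrive at an optimal rule set.

```python
# Sketch: evaluate candidate duplicate-matching rules over a pair of records.
from itertools import product
from rapidfuzz import fuzz

rec_a = {"name": "Jon Smith",  "email": "jon.smith@example.com", "phone": "4155550100"}
rec_b = {"name": "John Smith", "email": "jsmith@example.com",    "phone": "4155550100"}

def is_match(a, b, name_thresh, email_thresh, phone_exact):
    if fuzz.token_sort_ratio(a["name"], b["name"]) < name_thresh:
        return False
    if fuzz.ratio(a["email"], b["email"]) < email_thresh:
        return False
    return a["phone"] == b["phone"] if phone_exact else True

# Brute-force walk over a few rule configurations (the real system self-configured thousands).
for name_t, email_t, phone_exact in product([80, 90], [60, 75], [True, False]):
    print((name_t, email_t, phone_exact), is_match(rec_a, rec_b, name_t, email_t, phone_exact))
```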

This self-configuring piece was understandably more impressive and practically self-explanatory. It would serve as a great utility for a consultant, a data analyst or a relative layperson looking for an explanation of how potential duplicate records were determined to have a relationship. This was especially important because it could immediately provide value to field services personnel and clients. In addition, the developer had cunningly introduced a manual matching option that allowed a user to evaluate two records and decide, through visual assessment, whether they could potentially be considered related to one another.

In some respects, what was produced was exactly the way that I like to see products built. The field describes the problem; the product management organization translates that into more elaborate stories and looks for parallels in other markets, across other business areas, and for ubiquity. Once those initial requirements have been gathered, it is then up to engineering and development to come up with a prototype that works toward solving the issue.

The more experienced the developer, of course, the more comprehensive the result may be, and the more mature the initial iteration may be. Product is then in a position to pitch the concept back at the field, to clients and a selective audience, to get their perspective on the solution and how well it matches the need described in the previously articulated problem.

The challenge comes when you have a less tightly honed intent, a less specific message and a more general problem to solve, which brings me to the latest aspect of machine learning and artificial intelligence that I picked up.

One of the elements of dealing with data validation and data preparation is the last mile of action that you have in mind for that data. If your intent is as simple as "let's evaluate our data sources, clean them up and make them suitable for online transaction processing," then that's a very specific mission. You need to know what you want to evaluate, what benchmark you wish to evaluate it against, and then have some sort of remediation plan so that the data supports the use case for which it is intended, say, supporting customer calls into a call centre. The only area where you might consider artificial intelligence and machine learning for applicability in this instance might be determining matches against the baseline, but then the question is whether you simply have a Boolean decision or whether, in fact, some sort of stack ranking is relevant at all. It could be argued either way, depending on the application.

When you're preparing data for something like a decision beyond data quality, though, the mission is perhaps a little different. Effectively your goal may be to cut the cream of opportunities off the top of a pile of contacts, leads, opportunities or accounts. As such, you want to use some combination of traits within the data set to determine the influencing factors that lead to a better (or worse) outcome. Here, linear regression analysis for scoring may be sufficient. The devil, of course, lies in the details, and unless you're intimately familiar with the data and the proposition you're trying to resolve, you have to do a lot of trial-and-error experimentation and validation. For statisticians and data scientists this is all very obvious and, you could say, a natural part of the work that they do. Effectively the challenge here is feature selection: a way of reducing complexity in the model that you will ultimately apply to the scoring.
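
A bare-bones version of that scoring exercise might look like the following, here using a logistic regression (a close cousin of the linear model, better suited to a won/lost outcome). The lead attributes, file name and conversion label are placeholders for whatever traits your data set actually offers.

```python
# Sketch: score leads so the "cream" can be skimmed off the top of the pile.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

leads = pd.read_csv("leads.csv")                       # assumed historical lead data
features = ["company_size", "web_visits", "emails_opened", "days_since_contact"]  # assumed traits
X, y = leads[features], leads["converted"]             # 1 = the lead became a customer

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Rank every lead by predicted conversion probability and take the top of the pile.
leads["score"] = model.predict_proba(X)[:, 1]
print(leads.sort_values("score", ascending=False).head(10))
```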

The journey I am on right now with a technology partner focuses on ways to optimise the features so that only the most necessary and informative ones need to be considered. This, in turn, makes the model potentially simpler and faster to execute, particularly at scale. So while the regression analysis still needs to be done, determining what matters, what has significance and what should be retained versus discarded in the model design is all factored into the model building in an automated way. This doesn't necessarily apply to all kinds of AI and ML work, but for this specific objective it is perhaps more than adequate, and one that doesn't require a data scientist to start delivering a rapid yield.
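
One generic way to automate that winnowing of features is recursive feature elimination with cross-validation, sketched below on synthetic data. This is an off-the-shelf illustration of the idea, not the partner's proprietary approach.

```python
# Sketch: let cross-validated recursive feature elimination decide which features to keep.
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFECV
from sklearn.linear_model import LogisticRegression

# Synthetic data: 30 candidate features, only 6 of which actually carry signal.
X, y = make_classification(n_samples=2000, n_features=30, n_informative=6, random_state=1)

selector = RFECV(LogisticRegression(max_iter=2000), step=1, cv=5, scoring="roc_auc")
selector.fit(X, y)

print("features kept:", selector.n_features_)   # how many features earn their place
print("kept mask:", selector.support_)          # which of the 30 inputs survived
```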


Educate Yourself on Machine Learning at this Las Vegas Event – Small Business Trends

One of the biggest machine learning events is taking place in Las Vegas just before summer: Machine Learning Week 2020.

This five-day event will have 5 conferences, 8 tracks, 10 workshops, 160 speakers, more than 150 sessions, and 800 attendees.

If there is anything you want to know about machine learning for your small business, this is the event. Keynote speakers will come from Google, Facebook, Lyft, GM, Comcast, WhatsApp, FedEx, and LinkedIn, to name just some of the companies that will be at the event.

The conferences will cover predictive analytics for business, financial services, healthcare and industry, as well as Deep Learning World.

Training workshops will include topics in big data and how it is changing business, hands-on introduction to machine learning, hands-on deep learning and much more.

Machine Learning Week will take place from May 31 to June 4, 2020, at Caesars Palace in Las Vegas.


This weekly listing of small business events, contests and awards is provided as a community service by Small Business Trends.

You can see a full list of events, contest and award listings or post your own events by visiting the Small Business Events Calendar.



Being human in the age of Artificial Intelligence – Deccan Herald

After a while, everything is overhyped and underwhelming. Even Artificial Intelligence has not been able to escape the inevitable reduction that follows such excessive hype. AI is everything and everywhere now, and most of us won't even blink if we are told AI is powering someone's toothbrush. (It probably is.)

The phrase is undoubtedly being misused, but is the technology too? One thing is certain: whether we like it or not, whether we understand it or not, for good or bad, AI is playing a huge part in our everyday life today, a bigger part than we imagine. AI is being employed in health, wellness and warfare; it is scrutinizing you, helping you take better photos, making music, books and even love. (No, really. The first fully robotic sex doll is being created even as you are reading this.)

However, there is a sore lack of understanding of what AI really is, how it is shaping our future and why it is likely to alter our very psyche sooner or later. There is misinformation galore, of course. Media coverage of AI is either exaggerated (as if androids will take over the world tomorrow) or too specific and technical, creating further confusion and fuelling sci-fi-inspired imaginations of computers smarter than human beings.

So what is AI? No, we are not talking dictionary definitions here; those you can Google yourself. Neither are we promising to explain everything; that will need a book. We are only hoping to give you a glimpse into "the extraordinary promise and peril of this single transformative technology," as Prof Stuart Russell, one of the world's pre-eminent AI experts, puts it.

Prof Russell has spent decades on AI research and is the author of Artificial Intelligence: A Modern Approach, which is used as a textbook on AI in over 1,400 universities around the world.

Machine learning first

Other experts believe our understanding of artificial intelligence should begin with comprehending machine learning, the so-called sub-field of AI, but one that actually encompasses pretty much everything that is happening in AI at present.

In its very simplest definition, machine learning is enabling machines to learn on their own. The advantages of this are easy to see. After a while, you need not tell it what to do; it is your workhorse. All you need to do is provide it data, and it will keep coming up with smarter ways of digesting that data, spotting patterns and creating opportunities, in short, doing your work better than you perhaps ever could. This is the point where you need to scratch the surface. Scratch and you will stare into a dissolving ethical conundrum about what machines might end up learning. Because, remember, they do not (cannot) explain their thinking process. Not yet, at least. Precisely why the professor has a cautionary take.

The concept of intelligence is central to who we are. After more than 2,000 years of self-examination, we have arrived at a characterization of intelligence that can be boiled down to this: Humans are intelligent to the extent that our actions can be expected to achieve our objectives. Intelligence in machines has been defined in the same way: Machines are intelligent to the extent that their actions can be expected to achieve their objectives.

Whose objectives?

The problem, writes the professor, is in this very definition of machine intelligence. We say that machines are intelligent to the extent that their actions can be expected to achieve their objectives, but we have no reliable way to make sure that their objectives are the same as our objectives. He believes what we should have done all along is to tweak this definition to: Machines are beneficial to the extent that their actions can be expected to achieve our objectives.

The difficulty here, of course, is that our objectives are in us, all eight billion of us, and not in the machines. Machines will be uncertain about our objectives; after all, we are uncertain about them ourselves. But this is a good thing; this is a feature, not a bug. Uncertainty about objectives implies that machines will necessarily defer to humans: they will ask permission, they will accept correction and they will allow themselves to be switched off.

Spilling out of the lab

This might mean a complete rethinking and rebuilding of the AI superstructure. Perhaps something that indeed is inevitable if we do not want this big event in human history to be the last, says the prof wryly. As Kai-Fu Lee, another AI researcher, said in an interview a while ago, we are at a moment where the technology is spilling out of the lab and into the world. Time to strap up then!

(With inputs from Human Compatible: AI and the Problem of Control by Stuart Russell, published by Penguin, UK. Extracted with permission.)


High Investment in AI and Machine Learning will Enhance Automotive Digital Assistants by 2025 – PRNewswire

"With the rising popularity of connected services such as traffic information and local search, digital assistants have become a key differentiator for original equipment manufacturers (OEMs). OEM-branded digital assistants will help automakers strengthen their brand and convert one-time sales into continual service-centric relationships," said Anubhav Grover, Research Analyst, Mobility. "OEMs are aiming to create their own branded digital assistants that will co-exist and integrate with third-party and tech-branded digital assistants. BMW has already launched its own Intelligent Personal Assistant (IPA), which uses Alexa to access Amazon's e-commerce and Cortana for Microsoft Office."

Frost & Sullivan's recent analysis, Strategic Analysis of Automotive Digital Assistants, Forecast to 2025, studies the competitive landscape, business models, and future focus areas of OEMs, digital assistant suppliers, and technology companies. It examines the trends in artificial intelligence integration and voice biometrics. Furthermore, it analyzes the different strategies adopted by OEMs, tier-I suppliers, and technology startups in North America, Europe, and China.

For further information on this analysis, please visit: http://frost.ly/3yk.

"North America is expected to continue leading the adoption of digital assistant solutions. Meanwhile, with higher penetration of long-term evolution (LTE) and greater production capacity in China, Asia-Pacific is expected to be a growth hub for OEMs," noted Grover. "Digital assistant developers are increasingly building strategic partnerships with telecom providers and communication module makers to enhance on-road safety and in-vehicle data-rich services. Flexible business models such as 'choice of network' for consumers will further improve customer retention and revenue generation."

For greater growth opportunities, digital assistant companies are likely to:

Strategic Analysis of Automotive Digital Assistants, Forecast to 2025, is part of Frost & Sullivan's global Automotive & Transportation Growth Partnership Service program.

About Frost & Sullivan

For over five decades, Frost & Sullivan has become world-renowned for its role in helping investors, corporate leaders and governments navigate economic changes and identify disruptive technologies, Mega Trends, new business models and companies to action, resulting in a continuous flow of growth opportunities to drive future success. Contact us: Start the discussion.

Strategic Analysis of Automotive Digital Assistants, Forecast to 2025 (K329-18)

Contact: Mariana Fernandez, Corporate Communications
T: +1 (210) 348.1012
E: mariana.fernandez@frost.com

http://ww2.frost.com

SOURCE Frost & Sullivan



Why CDOs should care about ML and the human connection – CDOTrends

As an enormous decade comes to an end, digital officers are now looking to the future. The last ten years saw a boom in technology that created a digital shift in almost every industry. At no point, however, has this transformation been enough to replace human connection. Which begs the question: will it ever?

The short answer is no. However, digital officers must continually strategize to bridge the gap between machine learning and human connectivity.

The rise of ML and AI in the workplace

We are entering an era that is dominated by artificial intelligence technologies, enabling workers to not only work remotely but collaborate remotely too. Being able to work from home is hardly new, but the ability to collaborate and engage with colleagues as though you are face-to-face is the result of evolving visual and audio technology.

So why should the digital officer care? Because this technology links the growing need for remote working, collaboration, and unwavering employee engagement.

Empathy in business

In the age of digital transformation, businesses need to prioritize empathy because working with people still requires a human element.

When it comes to creating an empathetic workplace, visibility is essential. This means visibility among employees, managers, and business directors, regardless of location.

The reason for this is simple: visibility translates to availability; the more visible, the more accessible someone is.

Accessibility and availability work together to drive an empathetic workplace environment. This is critical when considering how to engage and retain employees, particularly as digital officers look to transform business operations.

While digital transformation may allow a business to automate some tasks, it can also create a more connected environment that fosters team collaboration. Digital transformation no longer translates to robots taking over human roles and responsibilities; rather, it's an opportunity to fuse machine learning with human capabilities.

Empathy and AI

In the discussion of empathy and AI, employee experience must remain high on the agenda. Technology and AI are already being used when empathically engaging with customers, but what about employee engagement? Just as a digital officer would recommend a business interact with its customers online, business directors must implement the appropriate technology to communicate with employees regardless of location or time zone.

Keeping pace

Voice assistant technology has grown in popularity and will continue to advance via the mass amounts of data being created. Again, empathy will be integral to the success and uptake of this technology. Similarly, video and audio technology will also excel, and businesses will need to keep pace with rapid adoption.

The role of the digital officer will be to assist businesses in driving their own digital transformation strategy, utilizing the latest technology to meet evolving business objectives both internally and externally and, ultimately, positively impact the bottom line.

Holger Reisinger, senior vice president for Large Enterprise at Jabra, wrote this article. The views and opinions expressed in this article are those of the author and do not necessarily reflect those of CDOTrends.


Break into the field of AI and Machine Learning with the help of this training – Boing Boing

It seems like AI is everywhere these days, from the voice recognition software in our personal assistants to the ads that pop up seemingly at just the right time. But believe it or not, the field is still in its infancy.

That means there's no better time to get in on the ground floor. The Essential AI & Machine Learning Certification Training Bundle is a four-course package that can give you a broad overview of AI's many uses in the modern marketplace and how to implement them.

The best place to dive into this four-course master class is with the Artificial Intelligence (AI) & Machine Learning (ML) Foundation Course. This walkthrough gives you all the terms and concepts that underpin the entire science of AI.

Later courses let you get your hands dirty with some coding, as in the data visualization class that focuses on the role of Python in the interpretive side of data analytics. There are also separate courses on computer vision (the programming that lets machines "see" their surroundings) and natural language processing (the science of getting computers to understand speech).

The entire package is now available for Boing Boing readers at 93% off the MSRP.



Goldwater Scholar wants to use AI to help ensure justice where children are involved > News > USC Dornsife – USC Dornsife College of Letters,…

Math major Zane Durante's research seeks to revitalize endangered languages, predict whether skin lesions are cancerous and enable truthful child eyewitness testimony to be taken seriously by courts.

In cases of abuse or neglect by a caretaker, children are often the only witnesses. Currently, however, our legal system doesn't view their testimony as reliable. Zane Durante, a mathematics major at the USC Dornsife College of Letters, Arts and Sciences, wants to change that.

Durante has been awarded a prestigious Goldwater Scholarship for his research into how to predict whether or not children are being truthful when they provide witness testimony. Durante, who is also majoring in computer science at the USC Viterbi School of Engineering, conducts his research at the school's Signal Analysis and Interpretation Laboratory (SAIL). There he analyzes written transcripts of children's testimony using various machine learning methods and natural language processing. Machine learning, in simple terms, is the science of getting computers to learn and make rational predictions all on their own.

Eyewitness accounts are already unreliable, and children's testimony is considered to be even more unreliable, Durante says. Courts don't always use child testimony, even if it may be the only information they have for determining whether or not abuse has taken place.

Durante's research looks at the vocabulary children use in their testimony.

Basically, we're trying to put a score on how certain we are that the child is telling the truth or not, given the testimony, by looking at their language, he explains.
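
As a rough illustration of what putting a score on language looks like, the toy example below trains a bag-of-words classifier over transcripts and outputs a truthfulness probability. The transcripts and labels are invented, and SAIL's actual models and features are certainly more sophisticated than this.

```python
# Toy sketch: score a transcript for truthfulness from its language alone.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

transcripts = ["he took me to the park and then we went home",
               "i dont remember anything it was fine i guess",
               "she said i should say nothing happened",
               "we watched a movie and he made dinner"]
labels = [1, 0, 0, 1]                      # 1 = judged truthful, 0 = judged deceptive (invented)

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(transcripts, labels)

new_statement = ["he told me what to say to the judge"]
print("truthfulness score:", model.predict_proba(new_statement)[0, 1])
```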

Research may help a child witness tell the truth

The next step, Durante says, is to look at how incorporating audio and video recordings into the researchers machine learning models might improve their estimates on whether or not the child is being truthful or deceptive.

Often, when humans are trying to determine whether or not someone's lying, we look at many different things, including voice pitch and microexpressions that may last only a fraction of a second but indicate a discontinuity between what the person's saying and what they're thinking and feeling, Durante says. Right now, we're just looking at language, but ultimately we think that by incorporating these other features and data into our machine learning models, we'll get more accurate predictions.

While lie detectors are seen as the standard method of detecting deception, Durante notes that they actually don't work very well, with meta-analyses giving them only about a 65% accuracy rate in determining whether or not a person is lying.

That's not usually admissible in court, and so we were really looking for something better, he said.

Rather than being used after an interview to evaluate the probability a child witness was lying, the data could ultimately be used to give feedback during the interview to improve questioning so the child is more likely to answer truthfully.

The researchers also look at the emotional content of the words and the agreeability of the language used by the child. Durante and his team have found that the more agreeable child witnesses are, the less likely they are to be telling the truth. Another conclusion is that advanced vocabulary can be a sign of truthfulness because the child is being more descriptive.

Machine learning for social good

Durante's interest in machine learning was sparked in the fall of his freshman year when he joined the Center for AI in Society's student branch, CAIS++. The undergraduate group has student members from a wide range of majors, including engineering, computer science, computational linguistics, neuroscience and mathematics, all of whom are interested in machine learning and its applications for social good.

I liked math and I liked computer science, and machine learning is this really interesting intersection of both, Durante says.

After studying machine learning in the fall, CAIS++ members apply what they've learned to a project for social good in the spring. Durante led the curriculum last semester, teaching undergraduates machine learning skills, and he will be leading a group through a project this semester.


His first project in CAIS++ looked at a type of neural network that could be used to predict whether an image of a skin lesion is malignant or benign. This would have applications for people who couldn't see a dermatologist.

His second project, led by USC Dornsife's Khalil Iskarous, associate professor of linguistics, focused on Ladin, an endangered language in Northern Italy.

Durante felt a personal connection to the project because his father spoke a Northern Italian dialect as his first language.

A major part of any language revitalization process involves linguists transcribing audio data of the spoken language by writing down the phonemes, the basic units of sound in a language, like the "d" sound in "dog." This enables them to understand how it's spoken and to reconstruct it. It's a slow and laborious process.

It can take four hours to annotate one hour of English audio data, and the process is much slower for endangered languages, Durante says.

To speed it up, Durante and his fellow students are developing an auto-complete process using machine learning to predict phonemes.
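
The auto-complete idea can be sketched with something as simple as a bigram model over previously annotated phoneme sequences: suggest the most likely next phoneme and let the annotator accept or correct it. This stand-in is far simpler than the machine learning models the students are building, and the phoneme strings are invented.

```python
# Sketch: suggest the next phoneme from counts of what followed it in earlier annotations.
from collections import Counter, defaultdict

annotated = [["d", "o", "g"], ["d", "o", "t"], ["d", "i", "g"]]   # prior transcriptions (invented)

bigrams = defaultdict(Counter)
for seq in annotated:
    for prev, nxt in zip(seq, seq[1:]):
        bigrams[prev][nxt] += 1

def suggest(prev_phoneme, k=2):
    """Return the k most likely next phonemes given the previous one."""
    return [p for p, _ in bigrams[prev_phoneme].most_common(k)]

print(suggest("d"))   # e.g. ['o', 'i'] -- the annotator accepts or overrides the suggestion
```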

At the core of all of the machine learning algorithms that Durante uses in his research is a lot of linear algebra and probability both of which he learned in his USC Dornsife math classes.

Both of those math classes build fundamentals that are necessary if you want to understand at a deep level what the algorithms are doing so you can make modifications to improve them or be more innovative, he said.

Born in Houston to a professor of pharmacology and a medical researcher, Durante moved to Columbia, Missouri, with his family at age 6. He originally aimed to study pre-med courses at university but switched to math and computer science toward the end of high school.

Now he believes his future lies in innovating new methods and applications of machine learning.

Whether that's just new machine learning methods, new methods in natural language processing or computer vision, or some combination of the two, I want to be working on the frontier and pushing the field forward, he says.

Over the summer, he completed an internship at Google where he developed artificial intelligence tools for Google Cloud.

Durante believes AI will speed up many things that society or humanity in general has traditionally been slow at doing, bringing many benefits.

While noting the ethical concerns around the use of AI and acknowledging that he has reservations of his own about some of its potential uses, he says he remains optimistic that the benefits to humanity will outweigh any negative use.

Historically, we can see that as technologies improve, the quality of life has improved tremendously as a result. So, I think anything like machine learning that will make things easier for humans will just make life better for everyone.


Essential AI & Machine Learning Certification Training Bundle Is Available For A Limited Time 93% Discount Offer Avail Now – Wccftech

Machine learning and AI are the future of technology. If you wish to become part of the world of technology, this is the place to begin. The world is becoming more dependent on technology every day, and it wouldn't hurt to embrace it. If you resist it, you will just become obsolete and have trouble surviving. Wccftech is offering an amazing discount on the Essential AI & Machine Learning Certification Training Bundle. The offer will expire in less than a week, so avail it right away.

The bundle includes 4 extensive courses on NLP, computer vision, data visualization and machine learning. Each course will help you understand the technology world a bit more, and you will not regret investing your time and money in this. The courses have been created by experts, so you are in safe hands. Here are highlights of what the Essential AI & Machine Learning Certification Training Bundle has in store for you:

The bundle has been brought to you by GreyCampus. They are known for providing learning solutions to professionals in various fields, including project management, data science, big data, quality management and more. They offer different kinds of teaching platforms, including e-learning and live online. All these courses have been specifically designed to meet the market's changing needs.

Original Price Essential AI & Machine Learning Certification Training Bundle: $656
Wccftech Discount Price Essential AI & Machine Learning Certification Training Bundle: $39.99



How machine learning and automation can modernize the network edge – SiliconANGLE

If you want to know the future of networking, follow the money right to the edge.

Applications are expected to move from data centers to edge facilities in record numbers, opening up a huge new market opportunity. The edge computing market is expected to grow at a compound annual growth rate of 36.3 percent between now and 2022, fueled by rapid adoption of the internet of things, autonomous vehicles, high-speed trading, content streaming and multiplayer games.

What these applications have in common is a need for near zero-latency data transfer, usually defined as less than five milliseconds, although even that figure is far too high for many emerging technologies.

The specific factors driving the need for low latency vary. In IoT applications, sensors and other devices capture enormous quantities of data, the value of which degrades by the millisecond. Autonomous vehicles require information in real time to navigate effectively and avoid collisions. The best way to support such latency-sensitive applications is to move applications and data as close as possible to the data ingestion point, thereby reducing the overall round-trip time. Financial transactions now occur at sub-millisecond cycle times, leading one brokerage firm to invest more than $100 million to overhaul its stock trading platform in a quest for faster and faster trades.

As edge computing grows, so do the operational challenges for telecommunications service providers such as Verizon Communications Inc., AT&T Corp. and T-Mobile USA Inc. For one thing, moving to the edge essentially disaggregates the traditional data center. Instead of massive numbers of servers located in a few centralized data centers, the provider edge infrastructure consists of thousands of small sites, most with just a handful of servers. All of those sites require support to ensure peak performance, which strains the resources of the typical information technology group to the breaking point, and sometimes beyond.

Another complicating factor is that network functions are moving toward cloud-native applications deployed on virtualized, shared and elastic infrastructure, a trend that has been accelerating in recent years. In a virtualized environment, each physical server hosts dozens of virtual machines and/or containers that are constantly being created and destroyed at rates far faster than humans can effectively manage. Orchestration tools automatically manage the dynamic virtual environment in normal operation, but when it comes to troubleshooting, humans are still in the driver's seat.

And it's a hot seat to be in. Poor performance and service disruptions hurt the service provider's business, so the organization puts enormous pressure on the IT staff to resolve problems quickly and effectively. The information needed to identify root causes is usually there; in fact, navigating the sheer volume of telemetry data from hardware and software components is one of the challenges facing network operators today.

A data-rich, highly dynamic, dispersed infrastructure is the perfect environment for artificial intelligence, specifically machine learning. The great strength of machine learning is the ability to find meaningful patterns in massive amounts of data that far outstrip the capabilities of network operators. Machine learning-based tools can self-learn from experience, adapt to new information and perform humanlike analyses with superhuman speed and accuracy.

To realize the full power of machine learning, insights must be translated into action, a significant challenge in the dynamic, disaggregated world of edge computing. That's where automation comes in.

Using the information gained by machine learning and real-time monitoring, automated tools can provision, instantiate and configure physical and virtual network functions far faster and more accurately than a human operator. The combination of machine learning and automation saves considerable staff time, which can be redirected to more strategic initiatives that create additional operational efficiencies and speed release cycles, ultimately driving additional revenue.

Until recently, the software development process for a typical telco consisted of a lengthy sequence of discrete stages that moved from department to department and took months or even years to complete. Cloud-native development has largely made obsolete this so-called waterfall methodology in favor of a high-velocity, integrated approach based on leading-edge technologies such as microservices, containers, agile development, continuous integration/continuous deployment and DevOps. As a result, telecom providers roll out services at unheard-of velocities, often multiple releases per week.

The move to the edge poses challenges for scaling cloud-native applications. When the environment consists of a few centralized data centers, human operators can manually determine the optimum configuration needed to ensure the proper performance for the virtual network functions or VNFs that make up the application.

However, as the environment disaggregates into thousands of small sites, each with slightly different operational characteristics, machine learning is required. Unsupervised learning algorithms can run all the individual components through a pre-production cycle to evaluate how they will behave in a production site. Operations staff can use this approach to develop a high level of confidence that the VNF being tested is going to come up in the desired operational state at the edge.

AI and automation can also add significant value in troubleshooting within cloud-native environments. Take the case of a service provider running 10 instances of a voice call processing application as a cloud-native application at an edge location. A remote operator notices that one VNF is performing significantly below the other nine.

The first question is, do we really have a problem? Some variation in performance between application instances is not unusual, so answering the question requires determining the normal range of VNF performance values in actual operation. A human operator could take readings of a large number of instances of the VNF over a specified time period and then calculate the acceptable key performance indicator values, a time-consuming and error-prone process that must be repeated frequently to account for software upgrades, component replacements, traffic pattern variations and other parameters that affect performance.

In contrast, AI can determine KPIs in a fraction of the time and adjust the KPI values as needed when parameters change, all with no outside intervention. Once AI determines the KPI values, automation takes over. An automated tool can continuously monitor performance, compare the actual value to the AI-determined KPI and identify underperforming VNFs.
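
A stripped-down version of that KPI-baselining loop is sketched below: learn the normal band of a performance metric from recent telemetry across instances, then flag any VNF whose average falls well outside it. The metric, the two-sigma threshold and the telemetry values are illustrative assumptions, not any vendor's actual KPI logic.

```python
# Sketch: derive a KPI baseline from telemetry and flag underperforming VNF instances.
import numpy as np

rng = np.random.default_rng(0)
# Calls-per-second samples for 10 instances of the same VNF (hypothetical telemetry).
readings = {f"vnf-{i}": rng.normal(950, 30, size=288) for i in range(9)}
readings["vnf-9"] = rng.normal(700, 30, size=288)       # the struggling instance

baseline = np.concatenate(list(readings.values()))      # learn "normal" from all observations
mean, std = baseline.mean(), baseline.std()

for name, series in readings.items():
    z = (series.mean() - mean) / std
    if z < -2.0:                                         # well below the learned KPI band
        print(f"{name}: underperforming (z={z:.1f}) -> hand off to the orchestrator")
```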

That information can then be forwarded to the orchestrator for remedial action, such as spinning up a new VNF or moving the VNF to a new physical server. The combination of AI and automation helps ensure compliance with service-level agreements and removes the need for human intervention, a welcome change for operators weary of late-night troubleshooting sessions.

As service providers accelerate their adoption of edge-oriented architectures, IT groups must find new ways to optimize network operations, troubleshoot underperforming VNFs and ensure SLA compliance at scale. Artificial intelligence technologies such as machine learning, combined with automation, can help them do that.

In particular, there have been a number of advancements over the last few years to enable this AI-driven future. They include systems and devices to provide high-fidelity, high-frequency telemetry that can be analyzed, highly scalable message buses such as Kafka and Redis that can capture and process that telemetry, and compute capacity and AI frameworks such as TensorFlow and PyTorch to create models from the raw telemetry streams. Taken together, they can determine in real time if operations of production systems are in conformance with standards and find problems when there are disruptions in operations.
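
Tying those pieces together, a minimal consumer might read telemetry off a message bus and pass each sample to a previously trained model, along the lines of the sketch below. The topic name, JSON schema, feature set and model file are assumptions for illustration; it uses the kafka-python client and a scikit-learn IsolationForest trained offline.

```python
# Sketch: stream telemetry from Kafka into an offline-trained anomaly model.
import json
import joblib
from kafka import KafkaConsumer                      # kafka-python client

model = joblib.load("vnf_anomaly_model.joblib")      # assumed: an IsolationForest fit on healthy data

consumer = KafkaConsumer(
    "edge-telemetry",                                # assumed topic name
    bootstrap_servers="broker:9092",
    value_deserializer=lambda m: json.loads(m.decode("utf-8")),
)

for msg in consumer:
    sample = [[msg.value["cpu"], msg.value["mem"], msg.value["calls_per_sec"]]]
    if model.predict(sample)[0] == -1:               # IsolationForest flags anomalies with -1
        print("anomaly detected at", msg.value.get("site"), "->", msg.value)
```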

All that has the potential to streamline operations and give service providers a competitive edge at the edge.

Sumeet Singh is vice president of engineering at Juniper Networks Inc., which provides telcos AI and automation capabilities to streamline network operations and helps them use automation capabilities to take advantage of business potential at the edge. He wrote this piece for SiliconANGLE.



Predicting Healthcare Utilization With Applied Machine Learning – AJMC.com Managed Markets Network

On this episode of Managed Care Cast, we speak with John Showalter, MD, chief product officer at Jvion and an internal medicine physician, and Soy Chen, MS, director of data science at Jvion and part of their data science team. We discuss their research about using applied machine learning to predict healthcare utilization based on social determinants of health, appearing in the January 2019 Health IT issue of The American Journal of Managed Care.

They found that the social determinant of health most associated with risk was air quality. In addition, neighborhood in-migration, transportation, and purchasing channel preferences were more telling than ethnicity or gender in determining patients use of resources.

On this episode of Managed Care Cast, we speak to study authors Soy Chen, MS, and John Showalter, MD, about how they sourced data for the algorithm, the technology's impact on the future of healthcare, and privacy concerns raised by artificial intelligence.

Listen above or through one of these podcast services:

iTunes

TuneIn

Stitcher

Spotify
