Five real world AI and machine learning trends that will make an impact in 2021 – IT World Canada

Experts predict artificial intelligence (AI) and machine learning will enter a golden age in 2021, solving some of the hardest business problems.

Machine learning trains computers to learn from data with minimal human intervention. The science isn't new, but recent developments have given it fresh momentum, said Jin-Whan Jung, Senior Director & Leader, Advanced Analytics Lab at SAS. "The evolution of technology has really helped us," said Jung. "The real-time decision making that supports self-driving cars or robotic automation is possible because of the growth of data and computational power."

The COVID-19 crisis has also pushed the practice forward, said Jung. "We're using machine learning more for things like predicting the spread of the disease or the need for personal protective equipment," he said. Lifestyle changes mean that AI is being used more often at home, such as when Netflix makes recommendations on the next show to watch, noted Jung. As well, companies are increasingly turning to AI to improve their agility and help them cope with market disruption.

Jung's observations are backed by the latest IDC forecast. It estimates that global AI spending will double to $110 billion over the next four years. How will AI and machine learning make an impact in 2021? Here are the top five trends identified by Jung and his team of elite data scientists at the SAS Advanced Analytics Lab:

Canada's Armed Forces rely on Lockheed Martin's C-130 Hercules aircraft for search and rescue missions. Maintenance of these aircraft has been transformed by the marriage of machine learning and IoT. Six hundred sensors located throughout the aircraft produce 72,000 rows of data per flight hour, including fault codes on failing parts. By applying machine learning, the system develops real-time best practices for the maintenance of the aircraft.

"We are embedding the intelligence at the edge, which is faster and smarter, and that's the key to the benefits," said Jung. Indeed, the combination is so powerful that Gartner predicts that by 2022, more than 80 per cent of enterprise IoT projects will incorporate AI in some form, up from just 10 per cent today.

Computer vision trains computers to interpret and understand the visual world. Using deep learning models, machines can accurately identify objects in videos, or images in documents, and react to what they see.

The practice is already having a big impact on industries like transportation, healthcare, banking and manufacturing. "For example, a camera in a self-driving car can identify objects in front of the car, such as stop signs, traffic signals or pedestrians, and react accordingly," said Jung. Computer vision has also been used to analyze scans to determine whether tumors are cancerous or benign, avoiding the need for a biopsy. In banking, computer vision can be used to spot counterfeit bills or for processing document images, rapidly robotizing cumbersome manual processes. In manufacturing, it can improve defect detection rates by up to 90 per cent. And it is even helping to save lives: cameras monitor and analyze power lines to enable early detection of wildfires.
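The deep learning models behind these applications are convolutional neural networks, which detect visual features by sliding small filters across an image. As a rough, self-contained illustration (the kernel values and toy image below are standard teaching examples, not from the article), here is the edge-detecting convolution at the heart of that process:

```python
import numpy as np

def convolve2d(image, kernel):
    """Slide a kernel over the image (valid mode), the core CNN operation."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical-edge (Sobel-style) kernel: responds where brightness changes left to right.
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]])

# Synthetic image: dark left half, bright right half.
img = np.zeros((5, 6))
img[:, 3:] = 1.0

edges = convolve2d(img, sobel_x)
# The response is strongest in the output columns that straddle the brightness boundary.
```

Real systems stack many such learned filters, but the sliding-window arithmetic that lets a network "see" a stop sign or a pedestrian is the same.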

At the core of machine learning is the idea that computers are not simply trained based on a static set of rules but can learn to adapt to changing circumstances. "It's similar to the way you learn from your own successes and failures," said Jung. "Business is going to be moving more and more in this direction."

Currently, adaptive learning is often used in fraud investigations. Machines can use feedback from the data or investigators to fine-tune their ability to spot the fraudsters. It will also play a key role in hyper-automation, a top technology trend identified by Gartner. The idea is that businesses should automate processes wherever possible. If it's going to work, however, automated business processes must be able to adapt to different situations over time, Jung said.
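A minimal sketch of what such feedback-driven fine-tuning can look like, using a toy online logistic model rather than any particular vendor's system (the feature names and numbers are invented for illustration):

```python
import numpy as np

class AdaptiveFraudScorer:
    """Toy online logistic model: investigator feedback nudges the weights,
    so the scorer adapts as fraud patterns drift."""

    def __init__(self, n_features, lr=0.1):
        self.w = np.zeros(n_features)
        self.b = 0.0
        self.lr = lr

    def score(self, x):
        """Estimated probability that transaction x is fraudulent."""
        return 1.0 / (1.0 + np.exp(-(self.w @ x + self.b)))

    def feedback(self, x, label):
        """Investigator confirms fraud (1) or clears the case (0)."""
        err = self.score(x) - label
        self.w -= self.lr * err * x
        self.b -= self.lr * err

model = AdaptiveFraudScorer(n_features=2)
fraud_case = np.array([3.0, 1.0])  # e.g. amount z-score, odd-hour flag (hypothetical)
before = model.score(fraud_case)
for _ in range(50):                # repeated confirmations shift the model
    model.feedback(fraud_case, 1)
after = model.score(fraud_case)
# after > before: the scorer now flags this pattern more strongly
```

Each confirmed case moves the decision boundary, which is the essence of adaptive learning: the rules are not static, they track the feedback.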

To deliver a return for the business, AI cannot be kept solely in the hands of data scientists, said Jung. In 2021, organizations will want to build greater value by putting analytics in the hands of the people who can derive insights to improve the business. "We have to make sure that we not only make a good product, we want to make sure that people use those things," said Jung. As an example, Gartner suggests that AI will increasingly become part of the mainstream DevOps process to provide a clearer path to value.

Responsible AI will become a high priority for executives in 2021, said Jung. In the past year, ethical issues have been raised in relation to the use of AI for surveillance by law enforcement agencies, or by businesses for marketing campaigns. There is also talk around the world of legislation related to responsible AI.

"There is a possibility for bias in the machine, the data or the way we train the model," said Jung. "We have to make every effort to have processes and gatekeepers to double and triple check to ensure compliance, privacy and fairness." Gartner also recommends the creation of an external AI ethics board to advise on the potential impact of AI projects.

Large companies are increasingly hiring Chief Analytics Officers (CAOs) and acquiring the resources to determine the best way to leverage analytics, said Jung. However, organizations of any size can benefit from AI and machine learning, even if they lack in-house expertise.

Jung recommends that organizations without experience in analytics consider getting an assessment of how to turn data into a competitive advantage. For example, the Advanced Analytics Lab at SAS offers an innovation and advisory service that provides guidance on value-driven analytics strategies, helping organizations define a roadmap that aligns with business priorities, from data collection and maintenance to analytics deployment through to execution and monitoring, to fulfill the organization's vision, said Jung. As we progress into 2021, organizations will increasingly discover the value of analytics to solve business problems.

SAS highlights a few top trends in AI and machine learning in this video.

Jim Love, Chief Content Officer, IT World Canada

Read the original post:
Five real world AI and machine learning trends that will make an impact in 2021 - IT World Canada

Become An Expert In Artificial Intelligence With The Ultimate Artificial Intelligence Scientist Certification Bundle – IFLScience

Interested in the fast-growing field of Artificial Intelligence (AI)? With so much under its umbrella, mastering the many aspects of AI can be cumbersome and even confusing. Truly understanding the vast world of AI means learning about its various subsets, how they are interconnected and what happens when they work together.

It's an exciting idea, but where does one start? Take it from here: become a certified AI scientist with The Ultimate Artificial Intelligence Scientist Certification Bundle. The four featured courses are packed with 670 lessons covering Deep Learning, Machine Learning, Python and TensorFlow. Designed for all skill levels, the courses cover everything from the basics to real-world examples and projects. Over 1,000 students have already enrolled in this highly-rated bundle, which we break down below.

Deep Learning is at the heart of Artificial Intelligence, the key to solving the increasingly complex problems that will inevitably come up as AI advances. This course features a robust load of 180 lectures so you can gain a solid understanding of all things Deep Learning. For example, the course includes lessons on the intuition behind Artificial Neural Networks as well as differentiating between Supervised and Unsupervised Deep Learning. You'll work on real-world data sets to reinforce your learning, including applying Convolutional Neural Networks and Self-Organizing Maps. Over 267,000 students have seen success from this course, with 30,368 positive ratings.

Learn from the best in this Machine Learning course, which was expertly designed by two data scientists. Take the reins on all the algorithms, coding libraries and complex theories with an impressive 40 hours of content. You'll master Machine Learning with Python and R (two open-source programming languages) and also learn to handle advanced techniques like Dimensionality Reduction. At the end of this class, you'll be able to build powerful Machine Learning models and know how to fuse them to solve even the most complex problems. The step-by-step tutorials have garnered 119,297 positive ratings from 653,721 students.

You've likely heard of Python, the super-popular and beginner-friendly programming language that any aspiring AI expert should be familiar with. The 73 lessons in this foundational course, designed for all skill levels, are crafted to build upon each previous lesson, ensuring you grasp and retain everything you learn. Get comfortable with the core principles of programming, learn to code in Jupyter Notebooks, understand the Law of Large Numbers and more. By the time you take the last lesson, you'll have a deep understanding of integer, float, logical, string and other types in Python, plus know how to create a while() loop and a for() loop. Trust the 15,307 positive ratings from 111,676 students enrolled.
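For readers curious what those syllabus items look like in practice, here is a tiny, self-contained sketch (not course material) of the core types and the two loop constructs the description mentions:

```python
# Core types the course covers: integer, float, logical (bool) and string.
count = 3          # int
ratio = 0.5        # float
done = False       # bool
name = "Python"    # str

# A for loop summing squares of 1..3...
total = 0
for i in range(1, 4):
    total += i * i          # 1 + 4 + 9

# ...and the equivalent while loop.
total_w, i = 0, 1
while i <= 3:
    total_w += i * i
    i += 1

assert total == total_w == 14   # both constructs compute the same sum
```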

Consider this your complete guide to the recently released TensorFlow 2.0. The course, complete with 133 lessons, starts with the basics and builds on this foundation to cover topics ranging from neural network modeling and training to production. Ultimately, you'll be empowered with the know-how to create incredible Deep Learning and AI solutions and bring them to life. One really cool aspect of this hands-on course is that you'll have the opportunity to build a bot that acts as a broker for buying and selling stocks using Reinforcement Learning. Take a cue from the 185 positive ratings from 1,579 students and follow suit.

This bundle is a great deal not only because buying even one of these courses separately would break the bank, but also because you'd only get a segmented view of the excitingly wide world of AI.

Right now, you can get The Ultimate Artificial Intelligence Scientist Certification Bundle for $34.99, down 95% from the original MSRP.

Prices subject to change.

Here is the original post:
Become An Expert In Artificial Intelligence With The Ultimate Artificial Intelligence Scientist Certification Bundle - IFLScience

Way to Grow in Career – Upskilling in AI and Machine Learning – Analytics Insight

There are at least two clear patterns that show a demand-supply mismatch in frontline IT fields such as artificial intelligence and machine learning. One is industry predictions that forecast growth in the AI market from $21.46 billion to $190.61 billion between 2018 and 2025.

Machine learning and AI, cloud computing, cybersecurity and data science are the most sought-after fields of knowledge and skills, and as technology professionals compete in a digital space rapidly being overtaken by automation, many of them are upskilling.

According to a report from Gartner, AI-related job creation will reach two million net-new jobs in 2025. However, there aren't many professionals with the skill set to match this requirement. To close this gap, there is a growing need for professionals to upskill in these areas.

A study by e-learning platform Simplilearn of 1,750 professionals found that 33% of respondents were motivated to take up courses to help them earn better salaries. Other motivating factors included opportunities to work on hands-on, industry-relevant projects, cited by 27% of participants, while another 21% said rewards and recognition pushed them to upskill.

Virtually all types of enterprise software, transport, factory automation and other industries increasingly use AI-based interfaces in their daily operations. In fact, by 2030, AI may end up contributing USD 15.7 trillion to the global economy.

Mathematical and programming skills are integral to gaining competency in this field. However, for seasoned tech professionals, it is also very important to build strong communication skills. An understanding of how business works, and of the common processes used in everyday operations, will help you better apply your core skills to improve organizational workflows.

The average yearly compensation of a data scientist ranges from Rs 5 lakh to 42 lakh for junior to mid-range positions, followed by Rs 4.97 lakh to 50 lakh per year for an expert skilled in cloud computing, and Rs 5 lakh to 70 lakh per year for jobs in artificial intelligence, the survey projected.

Among the more exciting opportunities one can expect in 2021 is the rising use of AI in healthcare to identify and analyze medical problems in people. Smart infrastructure to help manage rapid development in urban centres in India is another option being explored by the government.

Tech programs now routinely include AI, IoT, machine learning and other basic components of emerging technologies. Nonetheless, constant change in this dynamic field makes it essential for professionals to keep upskilling through a reputable organization to stay relevant and perform well. Once professionals have upskilled to meet the demands of the market, it is important that they articulate their expertise effectively to hiring companies.

As automation progressively replaces traditional entry-level technical jobs, such as data entry and monitoring, it is becoming clear that moving up to cutting-edge skills such as AI, deep learning, machine learning and cloud is the way forward. Artificial intelligence is estimated to replace almost seven million positions by 2037. Automation (largely powered by AI) is likely to affect 69% of jobs in India as corporations increasingly adopt the "whatever can be automated, will be automated" mantra to boost profitability.


Read the original post:
Way to Grow in Career - Upskilling in AI and Machine Learning - Analytics Insight

Enhancing Machine-Learning Capabilities In Oil And Gas Production – Texas A&M University Today

Machine-learning processes are invaluable at mining data for patterns in oil and gas production, but are generally limited in interpreting the information for decision-making needs.


Both a machine-learning algorithm and an engineer can predict if a bridge is going to collapse when they are given data that shows a failure might happen. Engineers can interpret the data based on their knowledge of physics, stresses and other factors, and state why they think the bridge is going to collapse. Machine-learning algorithms generally can't give an explanation of why a system would fail because they are limited in terms of interpretability based on scientific knowledge.

Since machine-learning algorithms are tremendously useful in many engineering areas, such as complex oil and gas processes, Petroleum Engineering Professor Akhil Datta-Gupta is leading Texas A&M University's participation in a multi-university and national laboratory project to reduce this limitation. The project began Sept. 2 and was initially funded by the U.S. Department of Energy (DOE). He and the other participants will inject science-informed decision-making into machine-learning systems, creating an advanced evaluation system that can assist with the interpretation of reservoir production processes and conditions while they happen.

Hydraulic fracturing operations are complex. Data is continually recorded during production processes so it can be evaluated and modeled to simulate what happens in a reservoir during the injection and recovery processes. However, these simulations are time-consuming to make, meaning they are not available during production and are more of a reference or learning tool for the next operation.

Enhanced by Datta-Gupta's fast marching method, machine-learning systems can quickly compress data so they can render how fluid movements change in a reservoir during actual production processes.


The DOE project will create an advanced system that will quickly sift data produced during hydraulic fracturing operations through physics-enhanced machine-learning algorithms, which will filter the outcomes using past observed experiences, and then render near real-time changes to reservoir conditions during oil recovery operations. These rapid visual evaluations will allow oil and gas operators to see, understand and effectively respond to real-time situations. The time advantage permits maximum production in areas that positively respond to fracturing, and stops unnecessary well drilling in areas that show limited response to fracturing.

"It takes considerable effort to determine what changes occur in the reservoir," said Datta-Gupta, a University Distinguished Professor and Texas A&M Engineering Experiment Station researcher. "This is why speed becomes critical. We are trying to do a near real-time analysis of the data, so engineering operations can make decisions almost on the fly."

The Texas A&M team's first step will focus on evaluating shale oil and gas field tests sponsored with DOE funding and identifying the machine-learning systems to use as the platform for the project. Next, they will upgrade these systems to merge multiple types of reservoir data, both actual and synthetic, and evaluate each system on how well it visualizes underground conditions compared to known outcomes.

At this point, Datta-Gupta's research related to the fast marching method (FMM) for fluid front tracking will be added to speed up the systems' visual calculations. FMM can rapidly sift through, track and compress massive amounts of data in order to transform the 3D aspect of reservoir fluid movements into a one-dimensional form. This reduction in complexity allows for simpler, and faster, imaging.

Using known results from recovery processes in actual reservoirs, the researchers will train the system to understand changes the data inputs represent. The system will simulate everyday information, like fluid flow direction and fracture growth and interactions, and show how fast reservoir conditions change during actual production processes.

"We are not the first to use machine learning in petroleum engineering," Datta-Gupta said. "But we are pioneering this enhancement, which is not like the usual input-output relationship. We want complex answers, ones we can interpret to get insights and predictions without compromising speed or production time. I find this very exciting."

Excerpt from:
Enhancing Machine-Learning Capabilities In Oil And Gas Production - Texas A&M University Today

Information gathering: A WebEx talk on machine learning – Santa Fe New Mexican

We're long past the point of questioning whether machines can learn. The question now is how do they learn? Machine learning, a subset of artificial intelligence, is the study of computer algorithms that improve automatically through experience. That means a machine can learn, independent of human programming. Los Alamos National Laboratory staff scientist Nga Thi Thuy Nguyen-Fotiadis is an expert on machine learning, and at 5:30 p.m. on Monday, Dec. 14, she hosts the virtual presentation "Deep focus: Techniques for image recognition in machine learning," as part of the Bradbury Science Museum's (1350 Central Ave., Los Alamos, 505-667-4444, lanl.gov/museum) Science on Tap lecture series. Nguyen-Fotiadis is a member of LANL's Information Sciences Group, whose Computer, Computational, and Statistical Sciences division studies fields that are central to scientific discovery and innovation. Learn about the differences between LANL's Trinity supercomputer and the human brain, and how algorithms determine recommendations for your nightly viewing pleasure on Netflix and the like. The talk is a free WebEx virtual event. Follow the link from the Bradbury's event page at lanl.gov/museum/events/calendar/2020/12/calendar-sot-nguyen-fotaidis.php to register.

Here is the original post:
Information gathering: A WebEx talk on machine learning - Santa Fe New Mexican

Embedded AI and Machine Learning Adding New Advancements In Tech Space – Analytics Insight

Over recent years, as sensor and MCU costs have plummeted and shipped volumes have gone through the roof, an ever-increasing number of organizations have tried to take advantage by adding sensor-driven embedded AI to their products.

Automotive is driving the trend: the average non-autonomous vehicle now has 100 sensors, sending information to 30-50 microcontrollers that run about 1m lines of code and create 1TB of data per vehicle every day. Luxury vehicles may have twice as many, and autonomous vehicles increase the sensor count even more dramatically.

Yet it's not simply an automotive trend. Industrial equipment is becoming progressively smarter as makers of rotating, reciprocating and other types of equipment rush to add functionality for condition monitoring and predictive maintenance, and a huge number of new consumer products, from toothbrushes to vacuum cleaners to fitness monitors, add instrumentation and smarts.

More and more smart devices are being introduced each month. We are now at a point where artificial intelligence and machine learning, in its most basic form, has found its way into the core of embedded devices. For example, smart home lighting systems automatically turn on and off depending on whether anybody is present in the room. On the surface, the system doesn't look especially sophisticated. But when you think about it, you realize that the system is actually making decisions all on its own: based on the input from the sensor, the microcontroller/SoC decides whether or not to turn on the light.

Doing all of this simultaneously, overcoming variation to achieve difficult detections in real time, at the edge, within the necessary constraints, isn't at all simple. But with current tools integrating new options for machine learning for signals (like Reality AI), it is getting simpler.

These approaches can often achieve detections that escape traditional engineering models, because they make much more efficient and effective use of data to overcome variation. Where traditional engineering approaches are typically founded on a physical model, using data to estimate parameters, machine learning approaches can learn independently of those models. They learn to recognize signatures directly from the raw data and use the mechanics of machine learning (mathematics) to separate targets from non-targets without depending on physics.
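As a hedged illustration of that idea (synthetic data, no physical model, and no particular vendor's pipeline), the sketch below extracts simple statistics from raw signals and separates "faulty" from "ok" units with a nearest-centroid rule learned purely from labeled examples:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_signal(faulty):
    """Synthetic vibration trace: faulty units carry an extra high-frequency tone."""
    t = np.linspace(0, 1, 256)
    s = np.sin(2 * np.pi * 5 * t) + 0.1 * rng.standard_normal(256)
    if faulty:
        s += 0.8 * np.sin(2 * np.pi * 40 * t)
    return s

def features(sig):
    """Signatures come straight from raw-signal statistics, not a physics model."""
    return np.array([sig.std(), np.abs(np.diff(sig)).mean()])

# "Train": average the feature vectors of labeled examples into class centroids.
ok_centroid = np.mean([features(make_signal(False)) for _ in range(20)], axis=0)
bad_centroid = np.mean([features(make_signal(True)) for _ in range(20)], axis=0)

def classify(sig):
    f = features(sig)
    near_bad = np.linalg.norm(f - bad_centroid) < np.linalg.norm(f - ok_centroid)
    return "faulty" if near_bad else "ok"
```

The point is the workflow, not the toy classifier: the targets are separated by what the labeled data says their signatures look like, with no equation of motion anywhere in the loop.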

There are plenty of other areas where the convergence of machine learning and embedded systems will open up great opportunities. Healthcare, for example, is already reaping the rewards of investing in AI technology. The Internet of Things (IoT) will likewise benefit enormously from the introduction of artificial intelligence. We will have smart automation solutions that lead to energy savings and cost efficiency, as well as the elimination of human error.

Forecasting is at the center of many ML/AI conversations as organizations look to use neural networks and deep learning to forecast time-series data. The value is the ability to ingest information and quickly gain insight into how it changes the long-term outlook. Further, much of the picture depends on the global supply chain, which makes improvements significantly harder to project accurately.

Some of the most hazardous jobs in production lines are already being handled by machines. Thanks to advances in embedded electronics and industrial automation, we have powerful microcontrollers running entire production systems in manufacturing plants. However, most of these machines are not quite fully automatic and still require some form of human intervention. But the time will come when machine learning helps engineers come up with truly intelligent machines that can operate with zero human intervention.


Read more:
Embedded AI and Machine Learning Adding New Advancements In Tech Space - Analytics Insight

Before machine learning can become ubiquitous, here are four things we need to do now – SiliconANGLE News

It wasn't too long ago that concepts such as communicating with your friends in real time through text or accessing your bank account information all from a mobile device seemed outside the realm of possibility. Today, thanks in large part to the cloud, these actions are so commonplace, we hardly even think about these incredible processes.

Now, as we enter the golden age of machine learning, we can expect a similar boom of benefits that previously seemed impossible.

Machine learning is already helping companies make better and faster decisions. In healthcare, the use of predictive models created with machine learning is accelerating research and discovery of new drugs and treatment regimens. In other industries, it's helping remote villages of Southeast Africa gain access to financial services and matching individuals experiencing homelessness with housing.

In the short term, we're encouraged by the applications of machine learning already benefiting our world. But it has the potential to have an even greater impact on our society. In the future, machine learning will be intertwined and under the hood of almost every application, business process and end-user experience.

However, before this technology becomes so ubiquitous that it's almost boring, there are four key barriers to adoption we need to clear first:

The only way that machine learning will truly scale is if we as an industry make it easier for everyone, regardless of skill level or resources, to incorporate this sophisticated technology into applications and business processes.

To achieve this, companies should take advantage of tools that have intelligence directly built into applications from which their entire organization can benefit. For example, Kabbage Inc., a data and technology company providing small business cash flow solutions, used artificial intelligence to adapt and help quickly process an unprecedented number of small business loans and unemployment claims caused by COVID-19 while preserving more than 945,000 jobs in America. By folding artificial intelligence into personalization, document processing, enterprise search, contact center intelligence, supply chain or fraud detection, all workers can benefit from machine learning in a frictionless way.

As processes go from manual to automatic, workers are free to innovate and invent, and companies are empowered to be proactive instead of reactive. And as this technology becomes more intuitive and accessible, it can be applied to nearly every problem imaginable from the toughest challenges in the information technology department to the biggest environmental issues in the world.

According to the World Economic Forum, the growth of AI could create 58 million net new jobs in the next few years. However, research suggests that there are currently only 300,000 AI engineers worldwide, and AI-related job postings are three times that of job searches with a widening divergence.

Given this significant gap, organizations need to recognize that they simply aren't going to be able to hire all the data scientists they need as they continue to implement machine learning into their work. Moreover, this pace of innovation will open doors and ultimately create jobs we can't even begin to imagine today.

That's why companies around the world such as Morningstar, Liberty Mutual and DBS Bank are finding innovative ways to encourage their employees to gain new machine learning skills with a fun, interactive, hands-on approach. It's critical that organizations not only direct their efforts towards training their existing workforce in machine learning skills, but also invest in training programs that develop these important skills in the workforce of tomorrow.

With anything new, people are often of two minds: either an emerging technology is a panacea and global savior, or it is a destructive force with cataclysmic tendencies. The reality is, more often than not, nuanced and somewhere in the middle. These disparate perspectives can be reconciled with information, transparency and trust.

As a first step, leaders in the industry need to help companies and communities learn about machine learning, how it works, where it can be applied and ways to use it responsibly, and understand what it is not.

Second, in order to gain faith in machine learning products, they need to be built by diverse groups of people across gender, race, age, national origin, sexual orientation, disability, culture and education. We will all benefit from individuals who bring varying backgrounds, ideas and points of view to inventing new machine learning products.

Third, machine learning services should be rigorously tested, measuring accuracy against third party benchmarks. Benchmarks should be established by academia, as well as governments, and be applied to any machine learning-based service, creating a rubric for reliable results, as well as contextualizing results for use cases.

Finally, as a society, we need to agree on what parameters should be put in place governing how and when machine learning can be used. With any new technology, there has to be a balance in protecting civil rights while also allowing for continued innovation and practical application of the technology.

Any organization working with machine learning technology should be engaging customers, researchers, academics and others to determine the benefits of its machine learning technology along with the potential risks. And they should be in active conversation with policymakers, supporting legislation, and creating their own guidelines for the responsible use of machine learning technology. Transparency, open dialogue and constant evaluation must always be prioritized to ensure that machine learning is applied appropriately and is continuously enhanced.

Through machine learning we've already accomplished so much, and yet it's still day one (and we haven't even had a cup of coffee yet!). If we're using machine learning to help endangered orangutans, just imagine how it could be used to help save and preserve our oceans and marine life. If we're using this technology to create digital snapshots of the planet's forests in real time, imagine how it could be used to predict and prevent forest fires. If machine learning can be used to help connect smallholder farmers to the people and resources they need to achieve their economic potential, imagine how it could help end world hunger.

To achieve this reality, we as an industry have a lot of work ahead of us. I'm incredibly optimistic that machine learning will help us solve some of the world's toughest challenges and create amazing end-user experiences we've never even dreamed of. Before we know it, machine learning will be as familiar as reaching for our phones.

Swami Sivasubramanian is vice president of Amazon AI, running AI and machine learning services for Amazon Web Services Inc. He wrote this article for SiliconANGLE.


Read the rest here:
Before machine learning can become ubiquitous, here are four things we need to do now - SiliconANGLE News

AI needs to face up to its invisible-worker problem – MIT Technology Review

But there are a number of problems. One is that workers on these platforms earn very low wages. We did a study where we followed hundreds of Amazon Mechanical Turk workers for several years, and we found that they were earning around $2 per hour. This is much less than the US minimum wage. There are people who dedicate their lives to these platforms; it's their main source of income.

And that brings other problems. These platforms cut off future job opportunities as well, because full-time crowdworkers are not given a way to develop their skills, at least not ones that are recognized. We found that a lot of people don't put their work on these platforms on their résumé. If they say they worked on Amazon Mechanical Turk, most employers won't even know what that is. Most employers are not aware that these are the workers behind our AI.

It's clear you have a real passion for what you do. How did you end up working on this?

I worked on a research project at Stanford where I was basically a crowdworker, and it exposed me to the problems. I helped design a new platform, which was like Amazon Mechanical Turk but controlled by the workers. But I was also a tech worker at Microsoft. And that also opened my eyes to what it's like working within a large tech company. You become faceless, which is very similar to what crowdworkers experience. And that really sparked me into wanting to change the workplace.

You mentioned doing a study. How do you find out what these workers are doing and what conditions they face?

I do three things. I interview workers, I conduct surveys, and I build tools that give me a more quantitative perspective on what is happening on these platforms. I have been able to measure how much time workers invest in completing tasks. I'm also measuring the amount of unpaid labor that workers do, such as searching for tasks or communicating with an employer, things you'd be paid for if you had a salary.

You've been invited to give a talk at NeurIPS this week. Why is this something that the AI community needs to hear?

Well, they're powering their research with the labor of these workers. I think it's very important to realize that a self-driving car or whatever exists because of people who aren't paid minimum wage. While we're thinking about the future of AI, we should think about the future of work. It's helpful to be reminded that these workers are humans.

Are you saying companies or researchers are deliberately underpaying?

No, that's not it. I think they might underestimate what they're asking workers to do and how long it will take. But a lot of the time they simply haven't thought about the other side of the transaction at all.

Because they just see a platform on the internet. And it's cheap.

Yes, exactly.

What do we do about it?

Lots of things. I'm helping workers get an idea of how long a task might take them to do. This way they can evaluate whether a task is going to be worth it. So I've been developing an AI plug-in for these platforms that helps workers share information and coach each other about which tasks are worth their time and which let you develop certain skills. The AI learns what type of advice is most effective. It takes in the text comments that workers write to each other, learns what advice leads to better results, and promotes it on the platform.

Let's say workers want to increase their wages. The AI identifies what type of advice or strategy is best suited to help workers do that. For instance, it might suggest that you do these types of task from these employers but not those other types of task over there. Or it will tell you not to spend more than five minutes searching for work. The machine-learning model is based on the subjective opinion of workers on Amazon Mechanical Turk, but I found that it could still increase workers' wages and develop their skills.

So it's about helping workers get the most out of these platforms?

That's a start. But it would be interesting to think about career ladders. For instance, we could guide workers to do a number of different tasks that let them develop their skills. We can also think about providing other opportunities. Companies putting jobs on these platforms could offer online micro-internships for the workers.

And we should support entrepreneurs. I've been developing tools that help people create their own gig marketplaces. Think about these workers: they are very familiar with gig work and they might have new ideas about how to run a platform. The problem is that they don't have the technical skills to set one up, so I'm building a tool that makes setting up a platform a little like configuring a website template.

A lot of this is about using technology to shift the balance of power.

It's about changing the narrative, too. I recently met with two crowdworkers that I've been talking to, and they actually call themselves tech workers, which, I mean, they are tech workers in a certain way because they are powering our tech. When we talk about crowdworkers they are typically presented as having these horrible jobs. But it can be helpful to change the way we think about who these people are. It's just another tech job.

Follow this link:
AI needs to face up to its invisible-worker problem - MIT Technology Review

The 12 Coolest Machine-Learning Startups Of 2020 – CRN

Learning Curve

Artificial intelligence has been a hot technology area in recent years and machine learning, a subset of AI, is one of the most important segments of the whole AI arena.

Machine learning is the development of intelligent algorithms and statistical models that improve software through experience without the need to explicitly code those improvements. A predictive analysis application, for example, can become more accurate over time through the use of machine learning.
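This "improvement through experience" can be made concrete with a deliberately tiny, dependency-free sketch (the spam threshold and scores below are invented for illustration): a model that learns a decision boundary from labeled examples becomes measurably more accurate as its training set grows, without anyone coding the improvement explicitly.

```python
import random

random.seed(0)

# Invented ground truth: a message is "spam" when its score exceeds 0.6.
TRUE_THRESHOLD = 0.6

def label(x):
    return x > TRUE_THRESHOLD

def fit_threshold(samples):
    """Learn a decision boundary as the midpoint between the two classes."""
    spam = [x for x in samples if label(x)]
    ham = [x for x in samples if not label(x)]
    if not spam or not ham:
        return 0.5  # fall back when one class is unseen
    return (min(spam) + max(ham)) / 2

def accuracy(threshold, data):
    return sum((x > threshold) == label(x) for x in data) / len(data)

test_set = [random.random() for _ in range(1000)]
for n in (10, 100, 10000):  # more experience, better boundary
    train = [random.random() for _ in range(n)]
    print(n, round(accuracy(fit_threshold(train), test_set), 3))
```

With more training data the learned threshold converges on the true one, which is exactly the definition above: the software improves through experience rather than through explicitly coded changes.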

But machine learning has its challenges. Developing machine-learning models and systems requires a confluence of data science, data engineering and development skills. Obtaining and managing the data needed to develop and train machine-learning models is a significant task. And implementing machine-learning technology within real-world production systems can be a major hurdle.

Heres a look at a dozen startup companies, some that have been around for a few years and some just getting off the ground, that are addressing the challenges associated with machine learning.

AI.Reverie

Top Executive: Daeil Kim, Co-Founder, CEO

Headquarters: New York

AI.Reverie develops AI and machine-learning technology for data generation, data labeling and data enhancement tasks for the advancement of computer vision. The company's simulation platform is used to help acquire, curate and annotate the large amounts of data needed to train computer vision algorithms and improve AI applications.

In October AI.Reverie was named a Gartner Cool Vendor in AI core technologies.

Anodot

Top Executive: David Drai, Co-Founder, CEO

Headquarters: Redwood City, Calif.

Anodot's Deep 360 autonomous business monitoring platform uses machine learning to continuously monitor business metrics, detect significant anomalies and help forecast business performance.

Anodot's algorithms have a contextual understanding of business metrics, providing real-time alerts that help users cut incident costs by as much as 80 percent.

Anodot has been granted patents for technology and algorithms in such areas as anomaly score, seasonality and correlation. Earlier this year the company raised $35 million in Series C funding, bringing its total funding to $62.5 million.

BigML

Top Executive: Francisco Martin, Co-Founder, CEO

Headquarters: Corvallis, Ore.

BigML offers a comprehensive, managed machine-learning platform for easily building and sharing datasets and data models, and making highly automated, data-driven decisions. The company's programmable, scalable machine-learning platform automates classification, regression, time series forecasting, cluster analysis, anomaly detection, association discovery and topic modeling tasks.

The BigML Preferred Partner Program supports referral partners and partners that sell BigML and oversee implementation projects. Partner A1 Digital, for example, has developed a retail application on the BigML platform that helps retailers predict sales cannibalization: when promotions or other marketing activity for one product can lead to reduced demand for other products.

Carbon Relay

Top Executive: Matt Provo, Founder, CEO

Headquarters: Cambridge, Mass.

Carbon Relay provides machine learning and data science software that helps organizations optimize application performance in Kubernetes.

The startup's Red Sky Ops makes it easy for DevOps teams to manage a large variety of application configurations in Kubernetes, which are automatically tuned for optimized performance no matter what IT environment they're operating in.

In February the company said that it had raised $63 million in a funding round from Insight Partners that the company will use to expand its Red Sky Ops AIOps offering.

Comet.ML

Top Executive: Gideon Mendels, Co-Founder, CEO

Headquarters: New York

Comet.ML provides a cloud-hosted machine-learning platform for building reliable machine-learning models that help data scientists and AI teams track datasets, code changes, experimentation history and production models.

Launched in 2017, Comet.ML has raised $6.8 million in venture financing, including $4.5 million in April 2020.

Dataiku

Top Executive: Florian Douetteau, Co-Founder, CEO

Headquarters: New York

Dataiku's goal with its Dataiku DSS (Data Science Studio) platform is to move AI and machine-learning use beyond lab experiments into widespread use within data-driven businesses. Dataiku DSS is used by data analysts and data scientists for a range of machine-learning, data science and data analysis tasks.

In August Dataiku raised an impressive $100 million in a Series D round of funding, bringing its total financing to $247 million.

Dataiku's partner ecosystem includes analytics consultants, service partners, technology partners and VARs.

DotData

Top Executive: Ryohei Fujimaki, Founder, CEO

Headquarters: San Mateo, Calif.

DotData says its DotData Enterprise machine-learning and data science platform is capable of reducing AI and business intelligence development projects from months to days. The company's goal is to make data science processes simple enough that almost anyone, not just data scientists, can benefit from them.

The DotData platform is based on the company's AutoML 2.0 engine that performs full-cycle automation of machine-learning and data science tasks. In July the company debuted DotData Stream, a containerized AI/ML model that enables real-time predictive capabilities.

Eightfold.AI

Top Executive: Ashutosh Garg, Co-Founder, CEO

Headquarters: Mountain View, Calif.

Eightfold.AI develops the Talent Intelligence Platform, a human resource management system that utilizes AI deep learning and machine-learning technology for talent acquisition, management, development, experience and diversity. The Eightfold system, for example, uses AI and ML to better match candidate skills with job requirements and improves employee diversity by reducing unconscious bias.

In late October Eightfold.AI announced a $125 million Series round of financing, putting the startup's value at more than $1 billion.

H2O.ai

Top Executive: Sri Ambati, Co-Founder, CEO

Headquarters: Mountain View, Calif.

H2O.ai wants to democratize the use of artificial intelligence for a wide range of users.

The company's H2O open-source AI and machine-learning platform, H2O AI Driverless automatic machine-learning software, H2O MLOps and other tools are used to deploy AI-based applications in financial services, insurance, health care, telecommunications, retail, pharmaceutical and digital marketing.

H2O.ai recently teamed up with data science platform developer KNIME to integrate Driverless AI for AutoML with KNIME Server for workflow management across the entire data science life cycle, from data access to optimization and deployment.

Iguazio

Top Executive: Asaf Somekh, Co-Founder, CEO

Headquarters: New York

The Iguazio Data Science Platform for real-time machine learning applications automates and accelerates machine-learning workflow pipelines, helping businesses develop, deploy and manage AI applications at scale that improve business outcomes (what the company calls MLOps).

In early 2020 Iguazio raised $24 million in new financing, bringing its total funding to $72 million.

OctoML

Top Executive: Luis Ceze, Co-Founder, CEO

Headquarters: Seattle

OctoML's Software-as-a-Service Octomizer makes it easier for businesses and organizations to put deep learning models into production more quickly on different CPU and GPU hardware, including at the edge and in the cloud.

OctoML was founded by the team that developed the Apache TVM machine-learning compiler stack project at the University of Washington's Paul G. Allen School of Computer Science & Engineering. OctoML's Octomizer is based on the TVM stack.

Tecton

Top Executive: Mike Del Balso, Co-Founder, CEO

Headquarters: San Francisco

Tecton just emerged from stealth in April 2020 with its data platform for machine learning that enables data scientists to turn raw data into production-ready machine-learning features. The startup's technology is designed to help businesses and organizations harness and refine vast amounts of data into the predictive signals that feed machine-learning models.

The company's three founders, CEO Mike Del Balso, CTO Kevin Stumpf and Engineering Vice President Jeremy Hermann, previously worked together at Uber, where they developed the Michelangelo machine-learning platform that the ride-sharing company used to scale its operations to thousands of production models serving millions of transactions per second, according to Tecton.

The company started with $25 million in seed and Series A funding co-led by Andreessen Horowitz and Sequoia.

See the original post here:
The 12 Coolest Machine-Learning Startups Of 2020 - CRN

This New Machine Learning Tool Might Stop Misinformation – Digital Information World

Misinformation has always been a problem, but the combination of widespread social media and a loose definition of what counts as factual truth has led to a veritable explosion of misinformation over the past few years. The problem is so dire that in many cases websites are created specifically to spread misinformation more easily, and this is a problem that a new machine learning tool might help address.

This machine learning tool, developed by researchers at UCL, Berkeley and Cornell, examines domain registration data to ascertain whether a URL is legitimate or whether it was created specifically to legitimize a certain piece of information that people might be trying to spread. A couple of other factors also come into play. For example, if the identity of the person who registered the domain is private, this might be a sign that the site is not legitimate. The timing of the domain registration matters too: if it was done around the time a major news event broke, such as the recent US presidential election, this is also a negative sign.
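As a rough illustration of how such registration signals could combine, here is a rule-based sketch; the weights, time window, and event list are invented for illustration, and the researchers' actual tool learns its model from data rather than using hand-set rules like these.

```python
from datetime import date

# Hypothetical red-flag signals inspired by the description above.
MAJOR_EVENTS = [date(2020, 11, 3)]  # e.g. the US presidential election

def suspicion_score(registered_on, registrant_is_private, window_days=14):
    score = 0
    if registrant_is_private:
        score += 1  # hidden registrant identity
    if any(abs((registered_on - event).days) <= window_days
           for event in MAJOR_EVENTS):
        score += 1  # registered right around a major news event
    return score

def looks_suspect(registered_on, registrant_is_private):
    return suspicion_score(registered_on, registrant_is_private) >= 2

print(looks_suspect(date(2020, 11, 1), registrant_is_private=True))   # True
print(looks_suspect(date(2015, 6, 1), registrant_is_private=False))   # False
```

A learned model would weigh many more such features and set its own decision boundary, but the intuition is the same: individually weak signals become telling in combination.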

That said, it is important to note that this new machine learning tool has an impressive success rate of about 92%, the proportion of fake domains it was able to discover. Being able to tell whether a news source is legitimate or outright propaganda is useful because it reduces the likelihood that people will take the misinformation seriously.

Read the original here:
This New Machine Learning Tool Might Stop Misinformation - Digital Information World

The way we train AI is fundamentally flawed – MIT Technology Review

For example, they trained 50 versions of an image recognition model on ImageNet, a dataset of images of everyday objects. The only difference between training runs was the random values assigned to the neural network at the start. Yet despite all 50 models scoring more or less the same in the training test, suggesting that they were equally accurate, their performance varied wildly in the stress test.

The stress test used ImageNet-C, a dataset of images from ImageNet that have been pixelated or had their brightness and contrast altered, and ObjectNet, a dataset of images of everyday objects in unusual poses, such as chairs on their backs, upside-down teapots, and T-shirts hanging from hooks. Some of the 50 models did well with pixelated images, some did well with the unusual poses; some did much better overall than others. But as far as the standard training process was concerned, they were all the same.

The researchers carried out similar experiments with two different NLP systems, and three medical AIs for predicting eye disease from retinal scans, cancer from skin lesions, and kidney failure from patient records. Every system had the same problem: models that should have been equally accurate performed differently when tested with real-world data, such as different retinal scans or skin types.

We might need to rethink how we evaluate neural networks, says Rohrer. It pokes some significant holes in the fundamental assumptions we've been making.

D'Amour agrees. The biggest, immediate takeaway is that we need to be doing a lot more testing, he says. That won't be easy, however. The stress tests were tailored specifically to each task, using data taken from the real world or data that mimicked the real world. This is not always available.

Some stress tests are also at odds with each other: models that were good at recognizing pixelated images were often bad at recognizing images with high contrast, for example. It might not always be possible to train a single model that passes all stress tests.

One option is to design an additional stage to the training and testing process, in which many models are produced at once instead of just one. These competing models can then be tested again on specific real-world tasks to select the best one for the job.
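The failure mode described above, and the train-many-then-select remedy, can be illustrated with a deliberately tiny one-dimensional stand-in (the data and "training runs" are invented): every threshold inside the training margin fits the training data perfectly, yet a stress set with points near the boundary tells the models apart.

```python
import random

# Training data leaves a wide gap around the true boundary at 0.5, so many
# thresholds fit it equally well: the underspecification in the article.
rng = random.Random(1)
train = [(x, x > 0.5)
         for x in [rng.uniform(0.0, 0.3) for _ in range(50)]
                + [rng.uniform(0.7, 1.0) for _ in range(50)]]

def fit(seed):
    """Each 'training run' settles on an arbitrary threshold in the gap."""
    return random.Random(seed).uniform(0.3, 0.7)

def accuracy(threshold, data):
    return sum((x > threshold) == y for x, y in data) / len(data)

models = [fit(seed) for seed in range(50)]
print({accuracy(m, train) for m in models})  # {1.0}: indistinguishable

# A stress set with points near the boundary separates the models.
stress = [(x, x > 0.5) for x in [rng.uniform(0.3, 0.7) for _ in range(500)]]
scores = [accuracy(m, stress) for m in models]
print(round(min(scores), 2), round(max(scores), 2))

# The proposed remedy: keep all the candidates and select on the stress test.
best = max(models, key=lambda m: accuracy(m, stress))
```

The real experiments involve deep networks rather than thresholds, but the structure is the same: the training data alone does not pin down which of the equally well-fitting models will behave best in the wild.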

That's a lot of work. But for a company like Google, which builds and deploys big models, it could be worth it, says Yannic Kilcher, a machine-learning researcher at ETH Zurich. Google could offer 50 different versions of an NLP model and application developers could pick the one that worked best for them, he says.

D'Amour and his colleagues don't yet have a fix but are exploring ways to improve the training process. We need to get better at specifying exactly what our requirements are for our models, he says. Because often what ends up happening is that we discover these requirements only after the model has failed out in the world.

Getting a fix is vital if AI is to have as much impact outside the lab as it is having inside. When AI underperforms in the real world, it makes people less willing to want to use it, says co-author Katherine Heller, who works at Google on AI for healthcare: We've lost a lot of trust when it comes to the killer applications, that's important trust that we want to regain.

Read more from the original source:
The way we train AI is fundamentally flawed - MIT Technology Review

Machine Learning Predicts How Cancer Patients Will Respond to Therapy – HealthITAnalytics.com

November 18, 2020 - A machine learning algorithm accurately determined how well skin cancer patients would respond to tumor-suppressing drugs in four out of five cases, according to research conducted by a team from NYU Grossman School of Medicine and Perlmutter Cancer Center.

The study focused on metastatic melanoma, a disease that kills nearly 6,800 Americans each year. Immune checkpoint inhibitors, which keep tumors from shutting down the immune system's attack on them, have been shown to be more effective than traditional chemotherapies for many patients with melanoma.

However, half of patients don't respond to these immunotherapies, and these drugs are expensive and often cause side effects in patients.

While immune checkpoint inhibitors have profoundly changed the treatment landscape in melanoma, many tumors do not respond to treatment, and many patients experience treatment-related toxicity, said corresponding study author Iman Osman, medical oncologist in the Departments of Dermatology and Medicine (Oncology) at New York University (NYU) Grossman School of Medicine and director of the Interdisciplinary Melanoma Program at NYU Langone's Perlmutter Cancer Center.

An unmet need is the ability to accurately predict which tumors will respond to which therapy. This would enable personalized treatment strategies that maximize the potential for clinical benefit and minimize exposure to unnecessary toxicity.

READ MORE: How Social Determinants Data Can Enhance Machine Learning Tools

Researchers set out to develop a machine learning model that could help predict a melanoma patients response to immune checkpoint inhibitors. The team collected 302 images of tumor tissue samples from 121 men and women treated for metastatic melanoma with immune checkpoint inhibitors at NYU Langone hospitals.

They then divided these slides into 1.2 million portions of pixels, the small bits of data that make up images. These were fed into the machine learning algorithm along with other factors, such as the severity of the disease, which kind of immunotherapy regimen was used, and whether a patient responded to the treatment.

The results showed that the machine learning model achieved an AUC of 0.8 in both the training and validation cohorts, and was able to predict which patients with a specific type of skin cancer would respond well to immunotherapies in four out of five cases.
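The reported AUC of 0.8 has a concrete reading: pick a random responder and a random non-responder, and the model ranks the responder higher about 80 per cent of the time. A minimal stdlib sketch with made-up prediction scores (not the study's data) shows how the number is computed:

```python
def auc(pos_scores, neg_scores):
    """Probability that a random positive outranks a random negative
    (ties count as half a win)."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical model outputs: predicted response probability for patients
# who did (responders) and did not (non_responders) respond to therapy.
responders = [0.9, 0.7, 0.5, 0.25]
non_responders = [0.6, 0.4, 0.3, 0.2, 0.1]

print(auc(responders, non_responders))  # 0.8
```

Here 16 of the 20 responder/non-responder pairs are ranked correctly, giving 0.8; an AUC of 1.0 would mean every responder is scored above every non-responder.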

Our findings reveal that artificial intelligence is a quick and easy method of predicting how well a melanoma patient will respond to immunotherapy, said study first author Paul Johannet, MD, a postdoctoral fellow at NYU Langone Health and its Perlmutter Cancer Center.

Researchers repeated this process with 40 slides from 30 similar patients at Vanderbilt University to determine whether the results would be similar at a different hospital system that used different equipment and sampling techniques.

READ MORE: Simple Machine Learning Method Predicts Cirrhosis Mortality Risk

A key advantage of our artificial intelligence program over other approaches such as genetic or blood analysis is that it does not require any special equipment, said study co-author Aristotelis Tsirigos, PhD, director of applied bioinformatics laboratories and clinical informatics at the Molecular Pathology Lab at NYU Langone.

The team noted that aside from the computer needed to run the program, all materials and information used in the Perlmutter technique are a standard part of cancer management that most, if not all, clinics use.

Even the smallest cancer center could potentially send the data off to a lab with this program for swift analysis, said Osman.

The machine learning method used in the study is also more streamlined than current predictive tools, such as analyzing stool samples or genetic information, which promises to reduce treatment costs and speed up patient wait times.

Several recent attempts to predict immunotherapy responses do so with robust accuracy but use technologies, such as RNA sequencing, that are not readily generalizable to the clinical setting, said corresponding study author Aristotelis Tsirigos, PhD, professor in the Institute for Computational Medicine at NYU Grossman School of Medicine and member of NYU Langone's Perlmutter Cancer Center.

READ MORE: Machine Learning Forecasts Prognosis of COVID-19 Patients

Our approach shows that responses can be predicted using standard-of-care clinical information such as pre-treatment histology images and other clinical variables.

However, researchers also noted that the algorithm is not ready for clinical use until they can boost the accuracy from 80 percent to 90 percent and test the algorithm at more institutions. The research team plans to collect more data to improve the performance of the model.

Even at its current level of accuracy, the model could be used as a screening method to determine which patients across populations would benefit from more in-depth tests before treatment.

There is potential for using computer algorithms to analyze histology images and predict treatment response, but more work needs to be done using larger training and testing datasets, along with additional validation parameters, in order to determine whether an algorithm can be developed that achieves clinical-grade performance and is broadly generalizable, said Tsirigos.

There is data to suggest that thousands of images might be needed to train models that achieve clinical-grade performance.

Read the rest here:
Machine Learning Predicts How Cancer Patients Will Respond to Therapy - HealthITAnalytics.com

Utilizing machine learning to uncover the right content at KMWorld Connect 2020 – KMWorld Magazine

At KMWorld Connect 2020, David Seuss, CEO, Northern Light; Sid Probstein, CTO, Keeeb; and Tom Barfield, chief solution architect, Keeeb, discussed machine learning and KM.

KMWorld Connect, November 16-19, and its co-located events cover future-focused strategies, technologies, and tools to help organizations transform for positive outcomes.

Machine learning can assist KM activities in many ways. Seuss discussed using a semantic analysis of keywords in social posts about a topic of interest to yield clear guidance as to which terms have actual business relevance and are therefore worth investing in.

What are we hearing from our users? Seuss asked. The users hate the business research process.

Using AstraZeneca as an example, Seuss began with an analysis of the company's conference presentations. Looking at the topics showed diabetes sinking lower among AstraZeneca's areas of focus.

When looking at the company's Twitter account, themes included oncology, COVID-19, and environmental issues. Not one reference was made to diabetes, according to Seuss.

Social media is where the energy of the company is first expressed, Seuss said.

An instant news analysis using text analytics tells us the same story: no mention of diabetes products, clinical trials, marketing, etc.

AI-based automated insight extraction from 250 AstraZeneca oncology conference presentations gives insight into R&D focus.

Let the machine read the content and tell you what it thinks is important, Seuss said.

You can do that with a semantic graph of all the ideas in the conference presentations. Semantic graphs look for relationships between ideas and measure the number and strength of the relationships. Google search results are a real-world example of this in action.
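A minimal version of that idea, hypothetical and far simpler than Northern Light's actual system, treats each presentation sentence as a set of ideas, weights an edge by how often two ideas co-occur, and ranks ideas by the total strength of their relationships:

```python
from collections import Counter
from itertools import combinations

# Toy stand-in for sentences extracted from conference presentations.
sentences = [
    {"oncology", "trial", "biomarker"},
    {"oncology", "biomarker"},
    {"oncology", "trial"},
    {"diabetes", "trial"},
]

# Edge weight = number of sentences in which two ideas co-occur.
edges = Counter()
for ideas in sentences:
    for a, b in combinations(sorted(ideas), 2):
        edges[(a, b)] += 1

# An idea's importance = summed weight of its edges (weighted degree).
strength = Counter()
for (a, b), weight in edges.items():
    strength[a] += weight
    strength[b] += weight

print(strength.most_common())
```

Seuss's point that the machine can tell you what it thinks is important falls out directly: the ranking surfaces oncology and trial well ahead of diabetes without anyone specifying topics up front.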

We are approaching the era when users will no longer search for information, they will expect the machine to analyze and then summarize for them what they need to know, Seuss said. Machine-based techniques will change everything.

Probstein and Barfield addressed new approaches to integrate knowledge sharing into work. They looked at collaborative information curation so end users help identify the best content, allowing KM teams to focus on the most strategic knowledge challenges as well as the pragmatic application of AI through text analytics to improve both curation and findability and improve performance.

The super silo is on the rise, Probstein said. It stores files, logs, and customer and sales data, and can be highly variable. He looked at search results for how COVID-19 is having an impact on businesses.

Not only are there many search engines, each one is different, Probstein said.

Probstein said Keeeb can help with this problem. The solution can search through a variety of data sources to find the right information.

One search, a few seconds, one pane of glass, Probstein said. Once you solve the search problem, now you can look through the documents.

Knowledge isn't always a whole document; it can be a few paragraphs or an image, which can then be captured and shared through Keeeb.

AI and machine learning can enable search to be integrated with existing tools or any system. Companies should give end-users simple approaches to organize content, augmented with AI, benefiting themselves and others, Barfield said.

More:
Utilizing machine learning to uncover the right content at KMWorld Connect 2020 - KMWorld Magazine

Resetting Covid-19 Impact On Artificial Intelligence and Machine Learning in IoT Market Report Explores Complete Research With Top Companies- Google,…

Machine learning techniques have been used extensively for a wide range of tasks, including classification, regression and density estimation, in a variety of application areas such as bioinformatics, speech recognition, spam detection, computer vision, fraud detection and advertising networks. Machine learning is also the main computational method applied to IoT, with many applications in both research and industry, including energy, routing, and home automation.

Top Companies Covered in this Report:

Google Inc., Cisco, IBM Corp., Microsoft Corp., Amazon Inc., PTC (ColdLight), Infobright, Mtell, Predikto, Predixion Software and Sight Machine


The report aims to provide an overview of the global artificial intelligence and machine learning in IoT market, with detailed market segmentation by application and geography. The global artificial intelligence and machine learning in IoT market is expected to witness exponential growth during the forecast period as organizations manage the increasingly large amounts of unstructured machine data available in almost every industry.

The objectives of this report are as follows:

To provide an overview of the global artificial intelligence and machine learning in IoT market

To analyze and forecast the global artificial intelligence and machine learning in IoT market on the basis of its application

To provide market size and forecast till 2025 for the overall artificial intelligence and machine learning in IoT market with respect to five major regions, namely North America, Europe, Asia-Pacific (APAC), Middle East and Africa (MEA) and South America (SAM), which are later sub-segmented by respective countries

To evaluate market dynamics affecting the market during the forecast period, i.e., drivers, restraints, opportunities, and future trends

To provide exhaustive PEST analysis for all five regions

To profile key artificial intelligence and machine learning in IoT players influencing the market, along with their SWOT analysis and market strategies

Get Discount for This Report https://www.premiummarketinsights.com/discount/TIP00001075

Table Of Content

1 Introduction

2 Key Takeaways

3 Artificial Intelligence and Machine Learning in IoT Market Landscape

4 Artificial Intelligence and Machine Learning in IoT Market Key Industry Dynamics

5 Artificial Intelligence and Machine Learning in IoT Market Analysis- Global

6 Artificial Intelligence and Machine Learning in IoT Market Revenue and Forecasts to 2025, by Application

7 Artificial Intelligence and Machine Learning in IoT Market Revenue and Forecasts to 2025, by Geography

8 Industry Landscape

9 Competitive Landscape

10 Artificial Intelligence and Machine Learning in IoT Market, Key Company Profiles

Enquire about report at: https://www.premiummarketinsights.com/buy/TIP00001075

About Premium Market Insights:

Premiummarketinsights.com is a one-stop shop for market research reports and solutions for companies across the globe. We support our clients' decision making by helping them choose the most relevant and cost-effective research reports and solutions from various publishers. We provide best-in-class customer service, and our customer support team is always available to help with your research queries.

Contact Us:

Sameer Joshi

Call: +912067274191

Email: [emailprotected]

Pune


The consistency of machine learning and statistical models in predicting clinical risks of individual patients – The BMJ

Now, imagine a machine learning system with an understanding of every detail of that person's entire clinical history and the trajectory of their disease. With the clinician's push of a button, such a system would be able to provide patient-specific predictions of expected outcomes if no treatment were provided, to support the clinician and patient in making what may be life-or-death decisions.[1] This would be a major achievement. The English NHS is currently investing £250 million in Artificial Intelligence (AI). Part of this AI work could help to identify patients most at risk of diseases such as heart disease or dementia, allowing for earlier diagnosis and cheaper, more focused, personalised prevention.[2] Multiple papers have suggested that machine learning outperforms statistical models in applications including cardiovascular disease risk prediction.[3-6] We tested whether this is true, taking the prediction of cardiovascular disease as an exemplar.

Risk prediction models have been implemented worldwide in clinical practice to help clinicians make treatment decisions. As an example, guidelines from the UK National Institute for Health and Care Excellence recommend that statins be considered for patients with a predicted 10-year cardiovascular disease risk of 10% or more.[7] This threshold is based on the estimate from QRISK, which was derived using a statistical model.[8] Our research evaluated whether the predictions of cardiovascular disease risk for an individual patient would be similar if another model, such as a machine learning model, were used, as different predictions could lead to different treatment decisions for that patient.

An electronic health record dataset was used for this study, with similar risk factor information used across all models. Nineteen different prediction techniques were applied, including 12 families of machine learning models (such as neural networks) and seven statistical models (such as Cox proportional hazards models). The various models had similar population-level performance (C-statistics of about 0.87 and similar calibration). However, the predictions of individual CVD risk varied widely between and within different types of machine learning and statistical models, especially in patients with higher CVD risk. Most of the machine learning models tested in this study do not take censoring into account by default (i.e., loss to follow-up over the 10 years). This resulted in these models substantially underestimating cardiovascular disease risk.
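For readers unfamiliar with the C-statistic: it is the probability that, of two comparable patients, the one with the higher predicted risk experiences the event first, and under right-censoring only some pairs of patients are comparable. The sketch below is a minimal pure-Python implementation of Harrell's C on invented toy data (the numbers are illustrative, not from the study):

```python
from itertools import combinations

def harrell_c(times, events, risks):
    """Harrell's C-statistic (concordance index) under right-censoring.

    A pair of patients is comparable only when the one with the shorter
    follow-up actually had the event (event = 1); a censored patient
    (event = 0) only tells us the event happened *after* follow-up ended.
    """
    concordant = tied = comparable = 0
    for i, j in combinations(range(len(times)), 2):
        a, b = (i, j) if times[i] < times[j] else (j, i)  # a = shorter follow-up
        if times[a] == times[b] or not events[a]:
            continue                      # pair not comparable under censoring
        comparable += 1
        if risks[a] > risks[b]:
            concordant += 1               # higher predicted risk, earlier event
        elif risks[a] == risks[b]:
            tied += 1
    return (concordant + 0.5 * tied) / comparable

# toy cohort: follow-up years, event flag (1 = CVD event, 0 = censored), predicted risk
times  = [2.0, 5.0, 4.0, 8.0, 6.0]
events = [1,   0,   1,   0,   1]
risks  = [0.9, 0.3, 0.2, 0.1, 0.5]
print(harrell_c(times, events, risks))   # 0.75 on this toy data
```

Note that a model can rank patients well (high C-statistic) and still badly underestimate absolute 10-year risk, which is consistent with the study's finding of similar C-statistics alongside divergent individual predictions.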

The level of consistency within and between models should be assessed before they are used for treatment decision making, as an arbitrary choice of technique and model could lead to a different treatment decision.

So, can a push of a button provide patient-specific risk prediction estimates by machine learning? Yes, it can. But should we use such estimates for patient-specific treatment decision making if these predictions are model-dependent? Machine learning may be helpful in some areas of healthcare, such as image recognition, and could be as useful as statistical models in population-level prediction tasks. But in terms of predicting risk for individual decision making, we think a lot more work needs to be done. Perhaps the claim that machine learning will revolutionise healthcare is a little premature.

Yan Li, doctoral student of statistical epidemiology, Health e-Research Centre, Health Data Research UK North, School of Health Sciences, Faculty of Biology, Medicine and Health, University of Manchester, Manchester.

Matthew Sperrin, senior lecturer in health data science, Health e-Research Centre, Health Data Research UK North, School of Health Sciences, Faculty of Biology, Medicine and Health, University of Manchester, Manchester.

Darren M Ashcroft, professor of pharmacoepidemiology, Centre for Pharmacoepidemiology and Drug Safety, School of Health Sciences, Faculty of Biology, Medicine and Health, University of Manchester.

Tjeerd Pieter van Staa, professor in health e-research, Health e-Research Centre, Health Data Research UK North, School of Health Sciences, Faculty of Biology, Medicine and Health, University of Manchester, Manchester.

Competing interests: None declared.

References:


Machine Learning Tutorial | Machine Learning with Python …

Machine Learning tutorial provides basic and advanced concepts of machine learning. Our machine learning tutorial is designed for students and working professionals.

Machine learning is a growing technology that enables computers to learn automatically from past data. Machine learning uses various algorithms to build mathematical models and make predictions using historical data. Currently, it is used for tasks such as image recognition, speech recognition, email filtering, Facebook auto-tagging, recommender systems, and many more.

This machine learning tutorial gives you an introduction to machine learning along with a wide range of machine learning techniques, such as supervised, unsupervised, and reinforcement learning. You will learn about regression and classification models, clustering methods, hidden Markov models, and various sequential models.

In the real world, we are surrounded by humans who learn from their experiences, and by computers and machines that simply follow our instructions. But can a machine also learn from experience or past data, as a human does? This is where machine learning comes in.

Machine learning is a subset of artificial intelligence mainly concerned with the development of algorithms that allow a computer to learn on its own from data and past experience. The term machine learning was first introduced by Arthur Samuel in 1959. We can define it in a summarized way as:

With the help of sample historical data, known as training data, machine learning algorithms build a mathematical model that helps in making predictions or decisions without being explicitly programmed. Machine learning brings computer science and statistics together to create predictive models. It constructs or uses algorithms that learn from historical data: the more data we provide, the better the performance.

A machine has the ability to learn if it can improve its performance by gaining more data.

A machine learning system learns from historical data, builds prediction models, and, whenever it receives new data, predicts the output for it. The accuracy of the predicted output depends on the amount of data, as a larger amount of data helps build a better model that predicts the output more accurately.
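As a concrete illustration of this train-then-predict loop, the sketch below fits a one-variable least-squares line to invented "historical" data and then predicts the output for a new, unseen input (a toy example, not taken from the tutorial):

```python
def fit_line(xs, ys):
    """Least-squares fit of y ~ w*x + b from historical (training) data."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    w = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return w, my - w * mx          # slope and intercept

# historical data: hours of appliance use vs. energy consumed (invented numbers)
hours = [1, 2, 3, 4, 5]
energy = [2.1, 4.0, 6.2, 7.9, 10.1]
w, b = fit_line(hours, energy)

# new, unseen input -> predicted output
print(round(w * 6 + b, 2))   # 12.03: the model's prediction for 6 hours
```

The "model" here is just the pair (w, b) learned from the training data; feeding in more historical points would refine it, exactly as the paragraph above describes.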

Suppose we have a complex problem that requires predictions. Instead of writing code for it directly, we can simply feed data to generic algorithms; the machine builds the logic from the data and predicts the output. Machine learning has changed the way we think about such problems. The block diagram below explains the working of a machine learning algorithm:

The need for machine learning is increasing day by day, because it can do tasks that are too complex for a person to implement directly. As humans, we have limitations: we cannot manually access and process huge amounts of data. For that we need computer systems, and this is where machine learning makes things easy for us.

We can train machine learning algorithms by providing them huge amounts of data, letting them explore the data, construct models, and predict the required output automatically. The performance of a machine learning algorithm depends on the amount of data, and its fit can be measured by a cost function. With the help of machine learning, we can save both time and money.

The importance of machine learning can be easily understood from its use cases. Currently, machine learning is used in self-driving cars, cyber fraud detection, face recognition, friend suggestions on Facebook, and more. Top companies such as Netflix and Amazon have built machine learning models that use vast amounts of data to analyze user interests and recommend products accordingly.

Following are some key points which show the importance of Machine Learning:

At a broad level, machine learning can be classified into three types:

Supervised learning is a machine learning method in which we provide sample labeled data to the machine learning system in order to train it; on that basis, it predicts the output.

The system creates a model using labeled data to understand the datasets and learn about each data point. Once training and processing are done, we test the model by providing sample data to check whether it predicts the correct output.

The goal of supervised learning is to map input data to output data. Supervised learning is based on supervision, in the same way that a student learns under the supervision of a teacher. An example of supervised learning is spam filtering.

Supervised learning can be grouped further into two categories of algorithms: classification and regression.
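Classification, one of these supervised categories, can be sketched with a minimal k-nearest-neighbours classifier: a new point takes the majority label of its k closest labeled training points. The data below is invented purely for illustration:

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest labeled neighbours."""
    nearest = sorted(train, key=lambda item: math.dist(item[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# labeled training data: (features, label); features are invented
# (word_count, link_count) pairs for a toy spam filter
train = [((1, 0), "ham"), ((2, 1), "ham"), ((2, 0), "ham"),
         ((8, 5), "spam"), ((9, 6), "spam"), ((7, 4), "spam")]

print(knn_predict(train, (8, 5)))   # lands in the spam cluster
print(knn_predict(train, (1, 1)))   # lands in the ham cluster
```

This mirrors the spam-filtering example above: the labels supply the "supervision", and the model predicts a label for data it has never seen.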

Unsupervised learning is a learning method in which a machine learns without any supervision.

The machine is trained on a set of data that has not been labeled, classified, or categorized, and the algorithm must act on that data without any supervision. The goal of unsupervised learning is to restructure the input data into new features or groups of objects with similar patterns.

In unsupervised learning, we don't have a predetermined result; the machine tries to find useful insights from a huge amount of data. It can be further classified into two categories of algorithms: clustering and association.
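Clustering, the most common unsupervised task, can be sketched with a tiny one-dimensional k-means (Lloyd's algorithm) on invented data: the algorithm alternates between assigning each point to its nearest centre and recomputing each centre as the mean of its cluster, with no labels involved at any point:

```python
import statistics

def kmeans_1d(points, k, iters=20):
    """Lloyd's algorithm on 1-D data: assign each point to its nearest
    centre, then recompute each centre as the mean of its cluster."""
    centres = points[:k]                      # naive initialisation
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centres[i]))
            clusters[nearest].append(p)
        centres = [statistics.mean(c) if c else centres[i]
                   for i, c in enumerate(clusters)]
    return sorted(centres)

# unlabeled data with two obvious groups, one around 2 and one around 10
data = [1.0, 1.5, 2.0, 2.5, 9.0, 9.5, 10.0, 10.5]
print(kmeans_1d(data, 2))   # [1.75, 9.75]
```

The algorithm discovers the two groups on its own, which is exactly the "restructure the input data into groups of objects with similar patterns" described above.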

Reinforcement learning is a feedback-based learning method, in which a learning agent gets a reward for each right action and a penalty for each wrong action. The agent learns automatically from this feedback and improves its performance. In reinforcement learning, the agent interacts with the environment and explores it. The goal of the agent is to collect the most reward points, and in doing so it improves its performance.

A robotic dog that automatically learns the movement of its arms is an example of reinforcement learning.
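The reward-and-penalty loop described above can be sketched with tabular Q-learning, a standard reinforcement learning algorithm. In this toy setup (invented, not from the tutorial), an agent in a five-cell corridor is rewarded only for reaching the rightmost cell and learns, from that feedback alone, which direction to move:

```python
import random

random.seed(0)

N_STATES = 5                  # corridor cells 0..4; cell 4 holds the reward
ACTIONS = (-1, +1)            # move left / move right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.3

# Q-table: estimated long-term reward for taking action a in state s
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for _ in range(300):                                   # training episodes
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy: explore sometimes, otherwise exploit the best known action
        if random.random() < EPSILON:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)          # walls at both ends
        reward = 1.0 if s2 == N_STATES - 1 else 0.0    # reward only at the goal
        best_next = max(Q[(s2, act)] for act in ACTIONS)
        Q[(s, a)] += ALPHA * (reward + GAMMA * best_next - Q[(s, a)])
        s = s2

# greedy policy per non-terminal state: +1 means "move right"
print([max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)])
```

After enough episodes the greedy policy is to move right in every state: the agent was never told the rule, it inferred it from rewards, just as the paragraph above describes.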

Some 40-50 years ago, machine learning was science fiction; today it is part of our daily life, from self-driving cars to Amazon's virtual assistant "Alexa". The idea behind machine learning, however, is old and has a long history. Below are some milestones in the history of machine learning:

Machine learning research has now advanced greatly, and it is present everywhere around us: self-driving cars, Amazon Alexa, chatbots, recommender systems, and much more. It includes supervised, unsupervised, and reinforcement learning, with clustering, classification, decision tree, and SVM algorithms, among others.

Modern machine learning models can be used for making various predictions, including weather prediction, disease prediction, stock market analysis, etc.

Before learning machine learning, you should have basic knowledge of the following so that you can easily understand its concepts:

Our machine learning tutorial is designed to help both beginners and professionals.

We are confident you will not find any difficulty with this machine learning tutorial. But if there is any mistake, kindly report the problem or error via the contact form so that we can improve it.


Machine learning prediction for mortality of patients diagnosed with COVID-19: a nationwide Korean cohort study – DocWire News

This article was originally published here

Sci Rep. 2020 Oct 30;10(1):18716. doi: 10.1038/s41598-020-75767-2.

ABSTRACT

The rapid spread of COVID-19 has resulted in a shortage of medical resources, which necessitates accurate prognosis prediction to triage patients effectively. This study used a nationwide cohort of South Korea to develop a machine learning model to predict prognosis based on sociodemographic and medical information. Of 10,237 COVID-19 patients, 228 (2.2%) died, 7772 (75.9%) recovered, and 2237 (21.9%) were still in isolation or being treated at the last follow-up (April 16, 2020). Cox proportional hazards regression analysis revealed that age > 70, male sex, moderate or severe disability, the presence of symptoms, nursing home residence, and comorbidities of diabetes mellitus (DM), chronic lung disease, or asthma were significantly associated with increased risk of mortality (p ≤ 0.047). For machine learning, the least absolute shrinkage and selection operator (LASSO), linear support vector machine (SVM), SVM with radial basis function kernel, random forest (RF), and k-nearest neighbors were tested. In prediction of mortality, LASSO and linear SVM demonstrated high sensitivities (90.7% [95% confidence interval: 83.3, 97.3] and 92.0% [85.9, 98.1], respectively) while maintaining high specificities (91.4% [90.3, 92.5] and 91.8% [90.7, 92.9], respectively), as well as high areas under the receiver operating characteristic curve (0.963 [0.946, 0.979] and 0.962 [0.945, 0.979], respectively). The most significant predictors for LASSO included old age and preexisting DM or cancer; for RF they were old age, infection route (cluster infection or infection from personal contact), and underlying hypertension. The proposed prediction model may be helpful for the quick triage of patients without having to wait for the results of additional tests such as laboratory or radiologic studies, during a pandemic when limited medical resources must be wisely allocated without hesitation.
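Sensitivity and specificity, the headline numbers in this abstract, are simple ratios over the confusion matrix. A small pure-Python illustration on invented labels (1 = died), entirely unrelated to the study's data:

```python
def sens_spec(y_true, y_pred):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    return tp / (tp + fn), tn / (tn + fp)

# invented labels: 1 = died, 0 = survived (toy data, not the Korean cohort)
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 0, 1, 0]
se, sp = sens_spec(y_true, y_pred)
print(se, sp)   # 0.75 (3 of 4 deaths caught), 5/6 (5 of 6 survivors cleared)
```

High values of both at once, as LASSO and linear SVM achieved, mean the model misses few deaths while rarely flagging survivors, which is what makes it useful for triage.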

PMID:33127965 | DOI:10.1038/s41598-020-75767-2


REACH and Millennium Systems International Partner to offer Machine Learning Driven Booking Automation to the MeevoXchange Marketplace – PRNewswire

REACH is available in award-winning Millennium Systems International's scheduling software product, Meevo 2, and serves thousands of beauty businesses in over 30 countries. "We are thrilled to announce another Meevo 2 business-building integration offering within our MeevoXchange marketplace: REACH by Octopi. REACH delivers the AI-powered smart scheduling features to help keep our salons and spas booked and growing. This partnership aligns with our strategic goals for our award-winning software Meevo 2 as we continuously add value to our platform and ultimately our salon and spa customers," says John Harms, CEO of Millennium Systems International.

"REACH is so special because it requires virtually no setup or upkeep as it follows your existing Meevo 2 online booking settings. REACH plays 'matchmaker' by connecting your clients that are due and overdue with open spaces in your Meevo 2 appointment book over the next few days, automatically. It has taken us years of research and development to create such a successful and exciting tool that will begin to show value to your business starting on day one!" says Patrick Blickman, CEO of REACH by Octopi.

Performance Guarantee and Affordability

The platform includes the REACH Revenue Guarantee, which ensures each location will see a minimum of $600-$1,400 in new booking revenue every month. There are never any contracts or commitments with REACH. Simply turn it on and let it start filling your Meevo 2 appointment book. Pricing starts at $149/month.

About REACH by OCTOPI

REACH was founded to make the client booking experience easier and far more automated for the health and beauty businesses we serve. Headquartered in Scottsdale, Arizona, REACH is built on decades of consolidated industry and channel expertise. Visit www.octopi.com/reach

About Millennium Systems International:

Millennium Systems International has been a leading business management software provider for the salon, spa and wellness industry for more than three decades. The award-winning Meevo 2 platform provides true cloud-based business management software that is HIPAA compliant and fully responsive, so users gain complete access from any device. It was built by wellness and beauty veterans exclusively for the wellness and beauty industry. Visit https://www.millenniumsi.com

SOURCE Octopi

octopi.com


Ding Dong Merrily on AI: The British Neuroscience Association’s Christmas Symposium Explores the Future of Neuroscience and AI – Technology Networks

A Christmas symposium from the British Neuroscience Association (BNA) has reviewed the growing relationship between neuroscience and artificial intelligence (AI) techniques. The online event featured talks from across the UK, which reviewed how AI has changed brain science and the many unrealized applications of what remains a nascent technology.

Opening the day with his talk, "Shake your Foundations: the future of neuroscience in a world where AI is less rubbish", Prof. Christopher Summerfield, from the University of Oxford, looked at the idiotic, ludic and pragmatic stages of AI. We are moving from the idiotic phase, where virtual assistants are usually unreliable and AI-controlled cars crash into random objects they fail to notice, to the ludic phase, where some AI tools are actually quite handy. Summerfield highlighted DALL-E, an AI program that converts text prompts into images, and a language generator called Gopher that can answer complicated ethical questions with eerily natural responses.

What could these advances in AI mean for neuroscience? Summerfield suggested that they invite researchers to consider the limits of current neuroscience practice that could be enhanced by AI in the future.

Integration of neuroscience subfields could be enabled by AI, said Summerfield. Currently, he said, "people who study language don't care about vision. People who study vision don't care about memory." AI systems don't work properly if only one distinct subfield is considered, and Summerfield suggested that, as we learn more about how to create a more complete AI, similar advances will be seen in our study of the biological brain.

Another element of AI that could drag neuroscience into the future is the level of grounding required for it to succeed. Currently, AI models are provided with contextual training data before they can learn associations, whereas the human brain learns from scratch. What makes it possible for a volunteer in a psychologist's experiment to be told to do something, and then just do it? To create more natural AIs, this is a problem that neuroscience will have to solve in the biological brain first.

The University of Oxford's Prof. Mihaela van der Schaar looked at how we can use machine learning to empower human learning in her talk, "Quantitative Epistemology: a new human-machine partnership". Van der Schaar's talk discussed practical applications of machine learning in healthcare by teaching clinicians through a process called meta-learning. This is where, said van der Schaar, learners become "aware of and increasingly in control of habits of perception, inquiry, learning and growth".

This approach provides a potential look at how AI might supplement the future of healthcare, by advising clinicians on how they make decisions and how to avoid potential error when undertaking certain practices. Van der Schaar gave an insight into how AI models can be set up to make these continuous improvements. In healthcare, which, at least in the UK, is slow to adopt new technology, van der Schaar's talk offered a tantalizing glimpse of what a truly digital approach to healthcare could achieve.

Dovetailing nicely with van der Schaar's talk was Imperial College London professor Aldo Faisal's presentation, entitled "AI and Neuroscience: the Virtuous Cycle". Faisal looked at systems where humans and AI interact and how they can be classified. Whereas in van der Schaar's clinical decision support systems, humans remain responsible for the final decision and AIs merely advise, in an AI-augmented prosthetic, for example, the roles are reversed. A user can suggest a course of action, such as "pick up this glass", by sending nerve impulses, and the AI can then find a response that addresses this suggestion by, for example, directing a prosthetic hand to move in a certain way. Faisal then went into detail on how these paradigms can inform real-world learning tasks, such as motion-tracked subjects learning to play pool.

One fascinating study involved a balance board task, where a human subject could tilt the board in one axis while an AI controlled another, meaning that the two had to collaborate to succeed. Over time, the strategies learned by the AI could be copied between certain subjects, suggesting the human learning component was similar. But for other subjects, this wasn't possible.

Faisal suggested this hinted at complexities in how different individuals learn that could inform behavioral neuroscience, AI systems and future devices, like neuroprostheses, where the two must play nicely together.

The afternoon's session featured presentations that touched on the complexities of the human and animal brain. The University of Sheffield's Professor Eleni Vasilaki explained how mushroom bodies, regions of the fly brain that play roles in learning and memory, can provide insight into sparse reservoir computing. Thomas Nowotny, professor of informatics at the University of Sussex, reviewed a process called asynchrony, where neurons activate at slightly different times in response to certain stimuli. Nowotny explained how this enables relatively simple systems like the bee brain to perform incredible feats of communication and navigation using only a few thousand neurons.

Wrapping up the day's presentations was a lecture on an uncanny future for social AIs, delivered by Henry Shevlin, a senior researcher at the Leverhulme Centre for the Future of Intelligence (CFI) at the University of Cambridge.

Shevlin reviewed the theory of mind, which enables us to understand what other people might be thinking by, in effect, modeling their thoughts and emotions. Do AIs have minds in the same way that we do? Shevlin reviewed a series of AIs that have been out in the world, acting as humans, here in 2021.

One such AI, OpenAI's language model GPT-3, spent a week posting on the internet forum site Reddit, chatting with human Redditors and racking up hundreds of comments. Chatbots like Replika personalize themselves to individual users, creating pseudo-relationships that feel as real as human connections (at least to some users). But current systems, said Shevlin, are excellent at fooling humans yet have no mental depth; they are, in effect, extremely proficient versions of the predictive-text systems our phones use.

While the rapid advance of some of these systems might feel dizzying or unsettling, AI and neuroscience are likely to be wedded together in future research. So much can be learned from pairing these fields, and true advances will be gained not by retreating from complex AI theories but by embracing them. At the end of his talk, Summerfield summed up the idea that AIs are black boxes we don't fully understand as "lazy". If we treat deep networks and other AI systems as neurobiological theories instead, the next decade could see unprecedented advances for both neuroscience and AI.


BioSig and Mayo Clinic Collaborate on New R&D Program to Develop Transformative AI and Machine Learning Technologies for its PURE EP System – BioSpace

Westport, CT, Feb. 02, 2021 (GLOBE NEWSWIRE) --

BioSig Technologies, Inc. (NASDAQ: BSGM) ("BioSig" or the "Company"), a medical technology company commercializing an innovative signal processing platform designed to improve signal fidelity and uncover the full range of ECG and intra-cardiac signals, today announced a strategic collaboration with the Mayo Foundation for Medical Education and Research to develop next-generation AI- and machine learning-powered software for its PURE EP system.

The new collaboration will include an R&D program that will expand the clinical value of the Company's proprietary hardware and software with advanced signal processing capabilities and aim to develop novel technological solutions by combining the electrophysiological signals delivered by the PURE EP and other data sources. The development program will be conducted under the leadership of Samuel J. Asirvatham, M.D., Mayo Clinic's Vice-Chair of Innovation and Medical Director, Electrophysiology Laboratory, and Alexander D. Wissner-Gross, Ph.D., Managing Director of Reified LLC.

The global market for AI in healthcare is expected to grow from $4.9 billion in 2020 to $45.2 billion by 2026 at an estimated compound annual growth rate (CAGR) of 44.9%.[1] According to Accenture, key clinical health AI applications, when combined, can potentially create $150 billion in annual savings for the United States healthcare economy by 2026.[2]
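As a quick arithmetic check of the quoted forecast, the CAGR formula (end = start × (1 + rate)^years) applied to $4.9 billion over the six years from 2020 to 2026 reproduces a figure close to the reported $45.2 billion:

```python
# sanity check: $4.9B growing at a 44.9% CAGR for the six years 2020 -> 2026
start, rate, years = 4.9, 0.449, 6
projected = start * (1 + rate) ** years
print(round(projected, 1))   # lands close to the quoted $45.2 billion
```

The small residual difference comes from the quoted CAGR being rounded to one decimal place.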

"AI-powered algorithms that are developed on superior data from multiple biomarkers could drastically improve the way we deliver therapies, and therefore may help address the rising global demand for healthcare," commented Kenneth L. Londoner, Chairman and CEO of BioSig Technologies, Inc. "We believe that combining the clinical science of Mayo Clinic with the best-in-class domain expertise of Dr. Wissner-Gross and the technical leadership of our engineering team will enable us to develop powerful applications and help pave the way toward improved patient outcomes in cardiology and beyond."

"Artificial intelligence presents a variety of novel opportunities for extracting clinically actionable information from existing electrophysiological signals that might otherwise be inaccessible. We are excited to contribute to the advancement of this field," said Dr. Wissner-Gross.

BioSig announced its partnership with Reified LLC, a provider of advanced artificial intelligence-focused technical advisory services to the private sector, in late 2019. The new research program builds upon the progress achieved by this collaboration in 2020, which included an abstract, "Computational Reconstruction of Electrocardiogram Lead Placement", presented during the 2020 Computing in Cardiology Conference in Rimini, Italy, and the development of an initial suite of electrophysiological analytics for the PURE EP System.

BioSig signed a 10-year collaboration agreement with Mayo Clinic in March 2017. In November 2019, the Company announced that it signed three new patent and know-how license agreements with the Mayo Foundation for Medical Education and Research.

About BioSig Technologies

BioSig Technologies is a medical technology company commercializing a proprietary biomedical signal processing platform designed to improve signal fidelity and uncover the full range of ECG and intra-cardiac signals (www.biosig.com).

The Company's first product, the PURE EP System, is a computerized system intended for acquiring, digitizing, amplifying, filtering, measuring, calculating, displaying, recording and storing electrocardiographic and intracardiac signals for patients undergoing electrophysiology (EP) procedures in an EP laboratory.

Forward-looking Statements

This press release contains forward-looking statements. Such statements may be preceded by the words "intends," "may," "will," "plans," "expects," "anticipates," "projects," "predicts," "estimates," "aims," "believes," "hopes," "potential" or similar words. Forward-looking statements are not guarantees of future performance, are based on certain assumptions and are subject to various known and unknown risks and uncertainties, many of which are beyond the Company's control and cannot be predicted or quantified; consequently, actual results may differ materially from those expressed or implied by such forward-looking statements. Such risks and uncertainties include, without limitation, risks and uncertainties associated with (i) the geographic, social and economic impact of COVID-19 on our ability to conduct our business and raise capital in the future when needed; (ii) our inability to manufacture our products and product candidates on a commercial scale on our own, or in collaboration with third parties; (iii) difficulties in obtaining financing on commercially reasonable terms; (iv) changes in the size and nature of our competition; (v) loss of one or more key executives or scientists; and (vi) difficulties in securing regulatory approval to market our products and product candidates. More detailed information about the Company and the risk factors that may affect the realization of forward-looking statements is set forth in the Company's filings with the Securities and Exchange Commission (SEC), including the Company's Annual Report on Form 10-K and its Quarterly Reports on Form 10-Q. Investors and security holders are urged to read these documents free of charge on the SEC's website at http://www.sec.gov. The Company assumes no obligation to publicly update or revise its forward-looking statements as a result of new information, future events or otherwise.

[1] Artificial Intelligence in Healthcare Market with COVID-19 Impact Analysis by Offering, Technology, End-Use Application, End User and Region, Global Forecast to 2026; Markets and Markets

[2] Artificial Intelligence (AI): Healthcare's New Nervous System. https://www.accenture.com/us-en/insight-artificial-intelligence-healthcare
