The Convergence of RPA and Automated Machine Learning – AiiA


The future is now. We've been discussing the fact that RPA truly transforms the costs, accuracy, productivity, speed and efficiency of your enterprise. That transformation is all the more powerful with cognitive solutions baked-in.

Our old friends at Automation Anywhere combined forces with our new friends at DataRobot to discuss the integration and convergence of RPA and Automated ML, and how that combination can propel your enterprise further through this fourth industrial revolution.

Watch the session on demand now.

The Convergence of RPA and Automated Machine Learning
Greg van Rensburg, Director, Solutions Consulting, Automation Anywhere
Colin Priest, Vice President, AI Strategy, DataRobot

Robotic Process Automation (RPA) has disrupted repetitive business processes across a variety of industries. The combination of RPA, cognitive automation, and analytics is a game changer for unstructured data processing and for gaining real-time insights. The next frontier? A truly complete, end-to-end process automation with AI-powered decision-making and predictive abilities. Join Automation Anywhere and DataRobot in this session to learn how organisations are using business logic and structured inputs, through a combination of RPA and Automated Machine Learning, to automate business processes, reduce customer churn and transform to digital operating models.

Follow this link:
The Convergence of RPA and Automated Machine Learning - AiiA

Why Deep Learning DevCon Comes At The Right Time – Analytics India Magazine

The Association of Data Scientists (ADaSci) recently announced Deep Learning DEVCON, or DLDC 2020, a two-day virtual conference that aims to bring machine learning and deep learning practitioners and experts from the industry together on a single platform to share and discuss recent developments in the field.

Scheduled for 29th and 30th October, the conference comes at a time when deep learning, a subset of machine learning, has become one of the fastest-advancing technologies in the world. From being used in natural language processing to making self-driving cars, it has come a long way. As a matter of fact, reports suggest that the deep learning market is expected to grow at a CAGR of 25% by 2024. Thus, it can safely be said that advancements in the field of deep learning have only just begun and have a long road ahead.

Also Read: Top 7 Upcoming Deep Learning Conferences To Watch Out For

As a crucial subset of artificial intelligence and machine learning, deep learning has advanced rapidly over the last few years. It is being explored across industries, from healthcare and eCommerce to advertising and finance, by many leading firms as well as startups across the globe.

While companies like Waymo and Google are using deep learning for their self-driving vehicles, Apple is using the technology for its voice assistant, Siri. Alongside these, many are using deep learning for automatic text generation, handwriting recognition, relevant caption generation, image colourisation and earthquake prediction, as well as for detecting brain cancers.

In recent news, Microsoft has introduced new advancements in their deep learning optimisation library DeepSpeed to enable next-gen AI capabilities at scale. It can now be used to train language models with one trillion parameters with fewer GPUs.
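For readers curious how DeepSpeed slots into a training loop, here is a minimal, hedged sketch of its usual setup; the model, config values and dataloader are illustrative stand-ins, and a recent DeepSpeed release that accepts a Python config dict is assumed.

```python
import torch
import deepspeed

model = torch.nn.Sequential(            # stand-in for a much larger language model
    torch.nn.Linear(1024, 4096),
    torch.nn.ReLU(),
    torch.nn.Linear(4096, 1024),
)

ds_config = {
    "train_batch_size": 32,
    "fp16": {"enabled": True},               # mixed precision to cut memory use
    "zero_optimization": {"stage": 2},       # ZeRO partitions optimizer state across GPUs
    "optimizer": {"type": "Adam", "params": {"lr": 1e-4}},
}

# The returned engine hides distribution, mixed precision and ZeRO partitioning
# behind the familiar train-loop calls.
model_engine, optimizer, _, _ = deepspeed.initialize(
    model=model, model_parameters=model.parameters(), config=ds_config
)

for batch, labels in dataloader:             # dataloader assumed defined elsewhere
    loss = torch.nn.functional.mse_loss(model_engine(batch), labels)
    model_engine.backward(loss)              # replaces loss.backward()
    model_engine.step()                      # replaces optimizer.step()
```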

With that being said, the future is expected to see increased adoption in machine translation, customer experience, content creation, image data augmentation, 3D printing and more. Much of this can be attributed to significant advancements in the hardware space, as well as the democratisation of the technology, which has helped the field gain traction.

Also Read: Free Online Resources To Get Hands-On Deep Learning

Many researchers and scientists across the globe have been working with deep learning to help fight the deadly COVID-19 pandemic. In fact, some researchers have recently proposed deep learning-based automated CT image analysis tools that can differentiate COVID-19 patients from those who aren't infected. In other research, scientists have proposed a fully automatic deep learning system for diagnosing the disease as well as for prognostic analysis. Many are also using deep neural networks to analyse X-ray images to diagnose COVID-19 among patients.
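As a flavour of the kind of pipeline these studies describe (not any specific paper's method), here is a hedged transfer-learning sketch in PyTorch; the dataset folders and two-class setup are hypothetical.

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# Expects folders like xray_data/train/covid and xray_data/train/normal.
train_set = datasets.ImageFolder("xray_data/train", transform=transform)
loader = torch.utils.data.DataLoader(train_set, batch_size=16, shuffle=True)

model = models.resnet18(pretrained=True)        # ImageNet features as a starting point
model.fc = nn.Linear(model.fc.in_features, 2)   # two classes: covid / normal

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:                   # one fine-tuning epoch
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```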

Along with these, startups like Zeotap, SilverSparro and Brainalyzed are leveraging the technology to either drive growth in customer intelligence or power industrial automation and AI solutions. With such solutions, these startups are making deep learning technology more accessible to enterprises and individuals.

Also Read: 3 Common Challenges That Deep Learning Faces In Medical Imaging

Companies like Shell, Lenskart, Snaphunt, Baker Hughes, McAfee, Lowes, L&T and Microsoft are looking for data scientists who are equipped with deep learning knowledge. With significant advancements in this field, it has now become the hottest skill that companies are looking for in their data scientists.

Consequently, many edtech companies have started offering free online resources as well as paid certifications on deep learning to provide industry-relevant knowledge to enthusiasts and professionals. These courses and accreditations, in turn, bridge the major talent gap that emerging technologies typically face during their maturation.

Also Read: How To Switch Careers To Deep Learning

With such major advancements in the field and its increasing use cases, deep learning has witnessed an upsurge in popularity as well as demand. Thus it is critical, now more than ever, to understand this complex subject in depth, both for research and for application, for anyone looking to build a career in this ever-evolving field.

And, for this reason, the Deep Learning DEVCON couldn't have come at a better time. Not only will it help amateurs as well as professionals get a better understanding of the field, but it will also provide opportunities to network with leading developers and experts.

Further, the talks and workshops included in the event will give deep learning practitioners hands-on experience with various tools and techniques. Starting with machine learning vs deep learning, followed by feed-forward neural networks and deep neural networks, the workshops will cover topics like GANs, recurrent neural networks, sequence modelling, autoencoders, and real-time object detection. The two-day workshop will also provide an overview of deep learning as a broad topic, and all attendees will receive a certificate.

The workshops will help participants build a strong understanding of deep learning, from basics to advanced, along with in-depth knowledge of artificial neural networks. They will also clarify concepts around tuning, regularising and improving models, as well as the various building blocks and their practical implementations. In addition, they will provide practical knowledge of applying deep learning in computer vision and NLP.

Since the conference is virtual, participants can join the talks and workshops from the comfort of their homes. It is thus a perfect opportunity to get first-hand experience of the complex world of deep learning alongside leading experts and the best minds in the field, who will share their experience to encourage enthusiasts and amateurs.

To register for Deep Learning DevCon 2020, visit here.


Continued here:
Why Deep Learning DevCon Comes At The Right Time - Analytics India Magazine

This artist used machine learning to create realistic portraits of Roman emperors – The World

Some people have spent their quarantine downtime baking sourdough bread. Others experiment with tie-dye. But others, namely Toronto-based artist Daniel Voshart, have created painstaking portraits of all 54 Roman emperors of the Principate period, which spanned from 27 BC to 285 AD.

The portraits help people visualize what the Roman emperors would have looked like when they were alive.

Included are Voshart's best artistic guesses at the faces of emperors Augustus, Nero, Caligula, Marcus Aurelius and Claudius, among others. They don't look particularly heroic or epic; rather, they look like regular people, with craggy foreheads, receding hairlines and bags under their eyes.

To make the portraits, Voshart used a design software called Artbreeder, which relies on a kind of artificial intelligence called generative adversarial networks (GANs).

Voshart starts by feeding the GANs hundreds of images of the emperors collected from ancient sculpted busts, coins and statues. Then he gets a composite image, which he tweaks in Photoshop. To choose characteristics such as hair color and eye color, Voshart researches the emperors' backgrounds and lineages.
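For background, a GAN pits a generator network against a discriminator that learns to tell generated images from real ones; "blending" two faces amounts to interpolating the latent vectors fed to the generator. The toy PyTorch sketch below shows that setup in miniature only; Artbreeder's production models are far larger (reportedly StyleGAN-based), so this is purely illustrative.

```python
import torch
import torch.nn as nn

latent_dim, img_dim = 100, 64 * 64              # toy sizes

generator = nn.Sequential(                      # maps random noise to a flat image
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, img_dim), nn.Tanh(),
)
discriminator = nn.Sequential(                  # scores an image: real (1) or fake (0)
    nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

z = torch.randn(8, latent_dim)                  # a batch of random latent vectors
fake_images = generator(z)
realism_scores = discriminator(fake_images)     # training pushes these toward "real"

# Artbreeder-style blending: interpolate two latent vectors and regenerate.
blend = generator(0.5 * z[0] + 0.5 * z[1])
```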

"It was a bit of a challenge," he says. "About a quarter of the project was doing research, trying to figure out if there's something written about their appearance."

He also needed to find good images to feed the GANs.

"Another quarter of the research was finding the bust, finding when it was carved, because a lot of these busts are recarvings or carved hundreds of years later," he says.

In a statement posted on Medium, Voshart writes: "My goal was not to romanticize emperors or make them seem heroic. In choosing bust/sculptures, my approach was to favor the bust that was made when the emperor was alive. Otherwise, I favored the bust made with the greatest craftsmanship and where the emperor was stereotypically uglier, my pet theory being that artists were likely trying to flatter their subjects."

Related: Battle of the bums: Museums compete over best artistic behinds

Voshart is not a Rome expert. His background is in architecture and design, and by day he works in the art department of the TV show "Star Trek: Discovery," where he designs virtual reality walkthroughs of the sets before they're built.

But when the coronavirus pandemic hit, Voshart was furloughed. He used the extra time on his hands to learn how to use the Artbreeder software. The idea for the Roman emperor project came from a Reddit thread where people were posting realistic-looking images they'd created on Artbreeder using photos of Roman busts. Voshart gave it a try and went into exacting detail with his research and design process, doing multiple iterations of the images.

Voshart says he made some mistakes along the way. For example, Voshart initially based his portrait of Caligula, a notoriously sadistic emperor, on a beautifully preserved bust in the Metropolitan Museum of Art. But the bust was too perfect-looking, Voshart says.

"Multiple people told me he was disfigured, and another bust was more accurate," he says.

So, for the second iteration of the portrait, Voshart favored a different bust where one eye was lower than the other.

"People have been telling me my first depiction of Caligula was hot," he says. "Now, no one's telling me that."

Voshart says people who see his portraits on Twitter and Reddit often approach them like they'd approach Tinder profiles.

"I get maybe a few too many comments, like such-and-such is hot. But a lot of these emperors are such awful people!" Voshart says.

Voshart keeps a list on his computer of all the funny comparisons people have made to present-day celebrities and public figures.

"I've heard Nero looks like a football player. Augustus looks like Daniel Craig ... my early depiction of Marcus Aurelius looks like the Dude from 'The Big Lebowski.'"

But the No. 1 comment? "Augustus looks like Putin."

Related: UNESCO says scammers are using its logo to defraud art collectors

No one knows for sure whether Augustus actually looked like Vladimir Putin in real life. Voshart says his portraits are speculative.

"It's definitely an artistic interpretation," he says. "I'm sure if you time-traveled, you'd be very angry at me."

Read the original:
This artist used machine learning to create realistic portraits of Roman emperors - The World

Machine Learning Practices And The Art of Research Management – Analytics India Magazine

Allegro AI offers the first true end-to-end ML / DL product life-cycle management solution with a focus on deep learning applied to unstructured data.

Machine learning projects involve an iterative and recursive R&D process of data gathering, data annotation, research, QA, deployment, additional data gathering from deployed units, and back again. The effectiveness of a machine learning product depends on how tight the synergy is between the data, the model and the various teams across the organisation.

In this informative session at CVDC 2020, a two-day event organised by ADaSci, Dan Malowany of Allegro.AI presented attendees with the best practices to follow during the lifecycle of an ML product, from inception to production.

Dan Malowany is currently the head of deep learning research at allegro.ai. His Ph.D. research at the Laboratory of Autonomous Robotics (LAR) was focused on integrating mechanisms of the human visual system with deep convolutional neural networks. His research interests include computer vision, convolutional neural networks, reinforcement learning, the visual cortex and robotics.

Dan spoke about the features required to boost productivity in the different R&D stages. This talk specifically focused on the following:

Dan, who worked for 15 years at the Directorate for Defense Research & Development and led various R&D programs, briefed attendees on the complexities involved in developing deep learning applications. He shed light on the unattractive and often overlooked aspects of research, and explained the trade-offs between effort and accuracy through the concept of diminishing returns on increased inputs.

Since your model is only as good as your data, the role of data management becomes crucial. Organisations are often in pursuit of achieving better results with less data. Practices such as mixing and matching datasets with detailed control, and creating optimised synthetic data, come in handy.

Underlining the importance of data and experiment management, Dan advised attendees to track the various versions of their data and treat data as a hyperparameter. He also highlighted the risk factors involved in improper data management, taking the example of developing a deep learning solution for the diagnosis of diabetic retinopathy, and followed this up with an overview of the benefits of resource management.
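To make the "treat data as a hyperparameter" advice concrete, here is a minimal sketch using the open source Allegro Trains package mentioned below; the project name and parameter values are illustrative.

```python
from trains import Task

# One call registers the run; the server captures code version, environment,
# console output and metrics automatically.
task = Task.init(project_name="retinopathy-demo", task_name="resnet-baseline")

params = {
    "dataset_version": "2020-08-v3",   # the data version, tracked like any hyperparameter
    "learning_rate": 1e-4,
    "batch_size": 32,
}
params = task.connect(params)          # logged, comparable and editable from the web UI

# ...training code reads params["learning_rate"] etc., so every reported metric
# is tied to this exact data-plus-hyperparameter combination.
```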

Unstructured data management is only a part of the solution. There are other challenges, which Allegro AI claims to solve. In this talk Dan introduced the audience to their customised solutions.

Towards the end of the talk, Dan gave a glimpse of the various tools integrated with allegro.ai's services. Allegro AI's products are market-proven: the company has partnered with leading global brands such as Intel, NVIDIA, NetApp, IBM and Microsoft, and is backed by world-class firms, including household-name strategic investors Samsung, Bosch and Hyundai.

Allegro AI helps companies develop, deploy and manage machine and deep learning solutions. The company's products are based on the Allegro Trains open source ML & DL experiment manager and ML-Ops package. Here are a few features:

Unstructured Data Management

Resource Management & ML-Ops

Know more here.

Stay tuned to AIM for more updates on CVDC2020.


I have a master's degree in Robotics and I write about machine learning advancements. Email: ram.sagar@analyticsindiamag.com

See the original post:
Machine Learning Practices And The Art of Research Management - Analytics India Magazine

Top 3 Applications Of Machine Learning To Transform Your Business – Forbes

We all hear about artificial intelligence and machine learning in everyday conversations; the terms are becoming increasingly popular across businesses of all sizes and in all industries. We know AI is the future, but how can it be useful to businesses today? Having encountered numerous organisations that are confused about the actual benefits of machine learning, AI experts agree it is necessary to outline its key applications in simple terms that most companies can relate to.

Here are the three most impactful Machine Learning applications that can transform your business today.

Machine learning can be used to automate a host of business operations, such as document processing, database analysis, system management, employee analytics, spam detection and chatbots. A lot of manual, time-consuming processes can now be replaced, or at least supported, by off-the-shelf AI solutions. Companies with unique requirements, looking to create or maintain a competitive advantage, or otherwise preferring to retain control of the intellectual property (IP), should consider reaching out to end-to-end service providers that can assist in planning, developing and implementing bespoke solutions to meet these business needs.

The reason machine learning often ends up performing better than humans at a single task is that it can very quickly improve its performance by analysing vast amounts of historical data. In other words, it can learn from its own mistakes and optimise its performance quickly and at scale. There is no ego and no hard feelings involved, simply objective analysis, enabling optimisation to be achieved with high efficiency and effectiveness. Popular examples of optimisation with machine learning can be found in product quality control, customer satisfaction, storage, logistics, supply chain and sustainability. If you think something in your business is not running as efficiently as it could and you have access to data, machine learning may just be the right solution.

Companies are inundated with data these days. Capturing data is one thing, but navigating and extracting value from big, disconnected databases containing different types of data on different areas of your organisation adds complexity and cost, reduces efficiency and impedes effective decision-making. Data management systems can help create clarity and put your data in order. You would be surprised how much valuable information can then be extracted from your data using machine learning. Typical applications in this space include churn prediction, sales forecasting, customer segmentation, personalisation and predictive maintenance. Machine learning can teach you more about your organisation in a month than you have learned over the past year.
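To make one of these applications concrete, here is a hedged churn-prediction sketch using scikit-learn; the CSV file and column names are hypothetical.

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

df = pd.read_csv("customers.csv")                  # one row per customer
X = df[["tenure_months", "monthly_spend", "support_tickets"]]
y = df["churned"]                                  # 1 if the customer left

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

model = GradientBoostingClassifier().fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))

# Scores close to 1 flag customers worth a proactive retention offer.
at_risk = X_test[model.predict_proba(X_test)[:, 1] > 0.8]
```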

If you think one of the above applications might be helpful to your business, now is a good time to start. As reluctant as companies may be to invest in innovation and new technologies, especially amid the difficulties caused by Covid-19, it is important to recognise that the aforementioned applications can bring long-term benefits to your business, such as cost savings, increased efficiency, improved operations and enhanced customer value. Get started and become a leader in your field thanks to the new machine learning technologies available to you today.

Originally posted here:
Top 3 Applications Of Machine Learning To Transform Your Business - Forbes

Impact of COVID-19 on Machine Learning & Big Data Analytics Education Market 2020-2027 Size and Share by Key Players Like Cognizant, Microsoft…

The research report on the Machine Learning & Big Data Analytics Education Market gives today's industry data and future developments, allowing you to understand the products and end customers driving the sales growth and profitability of the market. The report gives an in-depth analysis of key drivers, leading market key players, key segments, and regions. Besides this, the experts have deeply studied different geographical areas and presented the competitive situation to help new entrants, leading market players, and investors determine emerging economies. These insights would benefit market players in formulating strategies for the future and gaining a robust position within the worldwide market.

Check how COVID-19 impacts this market. Need a sample with a TOC? Click here: https://www.researchtrades.com/request-sample/1549580

The global market for Machine Learning & Big Data Analytics Education is estimated to grow at a CAGR of roughly X.X% over the next 8 years, and will reach USD X.X million in 2027, from USD X.X million in 2020. Aimed at providing the most segmented consumption and sales data for different types of Machine Learning & Big Data Analytics Education, downstream consumption fields and the competitive landscape in different regions and countries around the world, this report analyses the latest market data from primary and secondary authoritative sources.

The report also tracks the latest market dynamics, such as driving factors, restraining factors, and industry news like mergers, acquisitions, and investments. It provides market size (value and volume), market share, and growth rate by types and applications, and combines both qualitative and quantitative methods to make micro and macro forecasts in different regions and countries. The report can help in understanding the market and strategising for business expansion accordingly. In the strategy analysis, it gives insights from marketing channels and market positioning to potential growth strategies, providing in-depth analysis for new entrants or existing competitors in the Machine Learning & Big Data Analytics Education industry. The report focuses on the top players in terms of profiles, product analysis, sales, price, revenue, and gross margin.

Major players covered in this report: Cognizant, Microsoft Corporation, Google, Metacog, Inc., IBM Corporation, Querium Corporation, DreamBox Learning, Bridge-U, Jellynote, Quantum Adaptive Learning, LLC, Jenzabar, Inc., Third Space Learning, Blackboard, Inc., Pearson, Fishtree, Century-Tech Ltd, Knewton, Inc.

By Type: Machine Learning, Big Data Analytics

By Application: Higher Education, K-12 Education, Corporate Training

Geographically, the regional consumption and value analysis by types, applications, and countries is included in the report. Furthermore, it also introduces the major competitive players in these regions. Major regions covered in the report: North America, Europe, Asia-Pacific, Latin America, Middle East & Africa.

Geographically, the detailed analysis of consumption, revenue, market share and growth rate, historic and forecast (2015-2026), of the following regions is covered in Chapters 5, 6, 7, 8, 9, 10 and 13:

*North America (Covered in Chapter 6 and 13): United States, Canada, Mexico

*Europe (Covered in Chapter 7 and 13): Germany, UK, France, Italy, Spain, Russia, Others

*Asia-Pacific (Covered in Chapter 8 and 13): China, Japan, South Korea, Australia, India, Southeast Asia, Others

*Middle East and Africa (Covered in Chapter 9 and 13): Saudi Arabia, UAE, Egypt, Nigeria, South Africa, Others

*South America (Covered in Chapter 10 and 13): Brazil, Argentina, Columbia, Chile, Others

Years considered for this report:
*Historical Years: 2015-2019
*Base Year: 2019
*Estimated Year: 2020
*Forecast Period: 2020-2027

Request for Discount @ https://www.researchtrades.com/discount/1549580

Contact us:
Research Trades
Contact No: +1 626 999 4607
Skype ID: researchtradescon
Email: [emailprotected]
http://www.researchtrades.com

See the original post here:
Impact of COVID-19 on Machine Learning & Big Data Analytics Education Market 2020-2027 Size and Share by Key Players Like Cognizant, Microsoft...

Watch 3 Videos from Coursera’s New "Machine Learning for Everyone" – Machine Learning Times – machine learning & data science news – The…

I'm pleased to announce that, after a successful run with a batch of beta test learners, Coursera has just launched my new three-course specialization, Machine Learning for Everyone. There is no cost to access this program of courses.

This end-to-end course series empowers you to launch machine learning. Accessible to business-level learners and yet pertinent for techies as well, it covers both the state-of-the-art techniques and the business-side best practices.

Click here to access the complete three-course series for free

LEARNING OBJECTIVES

After these three courses, you will be able to:

WATCH THE FIRST THREE VIDEOS HERE

MORE INFORMATION ABOUT THIS COURSE SERIES

Machine learning is booming. It reinvents industries and runs the world. According to Harvard Business Review, machine learning is the most important general-purpose technology of our era.

But while there are so many how-to courses for hands-on techies, there are practically none that also serve business leaders, a striking omission, since success with machine learning relies on a very particular business leadership practice just as much as it relies on adept number crunching.

This specialization fills that gap. It empowers you to generate value with machine learning by ramping you up on both the technical side and the business side: both the cutting-edge modeling algorithms and the project management skills needed for successful deployment.

NO HANDS-ON AND NO HEAVY MATH. Rather than a hands-on training, this specialization serves both business leaders and burgeoning data scientists alike with expansive, holistic coverage of the state-of-the-art techniques and business-level best practices. There are no exercises involving coding or the use of machine learning software.

BUT TECHNICAL LEARNERS SHOULD TAKE ANOTHER LOOK. Before jumping straight into the hands-on, as quants are inclined to do, consider one thing: this curriculum provides complementary know-how that all great techies also need to master. It contextualizes the core technology, guiding you on the end-to-end process required to successfully deploy a predictive model so that it delivers a business impact.

IN-DEPTH YET ACCESSIBLE. Brought to you by industry leader Eric Siegel, a winner of teaching awards when he was a professor at Columbia University, this specialization stands out as one of the most thorough, engaging, and surprisingly accessible on the subject of machine learning.

Here's what you will learn:

DYNAMIC CONTENT. Across this range of topics, this specialization keeps things action-packed with case study examples, software demos, stories of poignant mistakes, and stimulating assessments.

VENDOR-NEUTRAL. This specialization includes several illuminating software demos of machine learning in action using SAS products, plus one hands-on exercise using Excel or Google Sheets. However, the curriculum is vendor-neutral and universally applicable. The contents and learning objectives apply regardless of which machine learning software tools you end up choosing to work with.

WHO IT'S FOR. This concentrated entry-level program is totally accessible to business-level learners and yet also vital to data scientists who want to secure their business relevance. It's for anyone who wishes to participate in the commercial deployment of machine learning, no matter whether you'll do so in the role of enterprise leader or quant. This includes business professionals and decision makers of all kinds, such as executives, directors, line of business managers, and consultants, as well as data scientists.

LIKE A UNIVERSITY COURSE. These three courses are also a good fit for college students, or for those planning for or currently enrolled in an MBA program. The breadth and depth of this specialization is equivalent to one full-semester MBA or graduate-level course.

For more information and to enroll at no cost, click here

About the Author

Eric Siegel, Ph.D., is a leading consultant and former Columbia University professor who makes machine learning understandable and captivating. He is the founder of the long-running Predictive Analytics World and the Deep Learning World conference series, which have served more than 17,000 attendees since 2009, the instructor of the end-to-end, business-oriented Coursera specialization Machine Learning for Everyone, a popular speaker who's been commissioned for more than 100 keynote addresses, and executive editor of The Machine Learning Times. He authored the bestselling Predictive Analytics: The Power to Predict Who Will Click, Buy, Lie, or Die, which has been used in courses at more than 35 universities, and he won teaching awards when he was a professor at Columbia University, where he sang educational songs to his students. Eric also publishes op-eds on analytics and social justice. Follow him at @predictanalytic.

Read the original here:
Watch 3 Videos from Coursera's New "Machine Learning for Everyone" - Machine Learning Times - machine learning & data science news - The...

Benefits Of AI And Machine Learning | Expert Panel | Security News – SecurityInformed

The real possibility of advancing intelligence through deep learning and other AI-driven technology applied to video is that, in the long term, we're not going to be looking at the video until after something has happened. The goal of gathering this high level of intelligence through video has the potential to be automated to the point that security operators will not be required to make the decisions necessary for response. Instead, the intelligence-driven next steps will be automatically communicated to various stakeholders, from on-site guards to local police and fire departments. When security leaders access the video that corresponds to an incident, it will be because they want to see the incident for themselves. And isn't automation, the ability to streamline response, and instantaneous response the goal of an overall, data-rich surveillance strategy? For almost any enterprise, the answer is yes.

See the article here:
Benefits Of AI And Machine Learning | Expert Panel | Security News - SecurityInformed

Why AI bias can’t be solved with more AI – BusinessCloud

Alejandro Saucedo says he could spend hours talking about solutions to bias in machine learning algorithms.

In fact, he has already spent countless hours on the topic via talks at events and in his day-to-day work.

It's an area he is uniquely qualified to tackle. He is engineering director of machine learning at London-based Seldon Technologies, and chief scientist at The Institute for Ethical AI and Machine Learning.

His key thesis is that the bias which creeps into AI, a problem far from hypothetical, cannot be solved with more tech but with the reintroduction of human expertise.

In recent years countless stories have detailed how AI decisioning has resulted in women being less likely to qualify for loans, minorities being unfairly profiled by police, and facial recognition technology performing more accurately when analysing white, male faces.

"You are affecting people's lives," he tells BusinessCloud, in reference to the magnitude of these automated decisions in the security and defence space, and even in the judicial process.

Saucedo explains that machine learning processes are, by definition, designed to be discriminatory, but not like this.

"The purpose of machine learning is to discriminate toward a right answer, he said.

"Humans are not born racist, and similarly machine learning algorithms are not by default going to be racist. Theyare a reflection ofthedata ingested."

If algorithms adopt human bias from our biased data, then removing that bias suggests the technology has great potential.

But the discussion often stops at this theoretical level, or acts as a cue for engineers to fine-tune the software in the hope of a more equitable outcome.

It's not that simple, Saucedo suggests.

"An ethical question of that magnitude shouldn't fall onto the shoulders of a single data scientist. They will not have the full picture in order to make a call that could have an impact on individuals across generations," he says.

Instead the approach with the most promise takes one step further back from the problem.

Going beyond the algorithm, as he puts it, involves bringing in human experts, increasing regulation, and a much lighter touch when introducing the technology at all.

"Instead of just dumping an entire encyclopaedia of an industry into a neural network to learn from scratch, you can bring in domain experts to understand how these machines learn," he explains.

This approach allows those making the technology to better explain why an algorithm makes the choices it does, something which is almost impossible with the black box of a neural network working on its own.

For instance, a lawyer could help with the building of a legal AI, to guide and review the machine learning's output for nuances, even small things like words which are capitalised.

In this way, he says, the resulting machine learning becomes easier to understand.

This approach means automating a percentage of the process and requiring a human for the remainder, or what he calls 'human augmentation' or 'human manual remediation'.
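A minimal sketch of what that gating can look like in code, assuming a simple confidence threshold; the threshold, label and queue name are illustrative.

```python
# Route only high-confidence predictions to automation; everything else goes
# to a domain expert for review.
def route_prediction(probability: float, label: str, threshold: float = 0.9) -> dict:
    """Return an automated decision, or defer to human review."""
    if probability >= threshold:
        return {"decision": label, "handled_by": "model"}
    return {"decision": None, "handled_by": "human_review_queue"}

# A legal-document classifier that is only 72% sure gets deferred:
print(route_prediction(0.72, "contract_breach"))
# {'decision': None, 'handled_by': 'human_review_queue'}
```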

This could slow down the development of potentially lucrative technology battling to win the AI arms race, but it is a choice he says would ultimately be good for business and people.

"You either take the slow and painful route which works, or you take the quick fix which doesn't, he says.

Saucedo is only calling for red tape which is proportionate to its potential impact. In short, a potential 'legal sentencing prediction system' needs more governance than a prototype being tested on a single user.

He says anyone building machine learning algorithms with societal impact should be asking how they can build a process which still requires review from human domain expertise.

"If there is no way to introduce a human in to review, the question is: should you even be automating that process? If you should, you need to make sure that you have the ethics structure and some form of ethics board to approve those use cases."

And while his premise is that bias is not a single engineer's problem, he said that this does not make engineers exempt.

"It is important as engineers, individuals and as people providing that data to be aware of the implications. Not only because of the bad usecases, butbeing aware that most of the incorrect applications of machine learning algorithms are not done through malice but lack of best practice."

This self-regulation might be tough for fast-paced AI firms hoping to make sales, but conscious awareness on the part of everyone building these systems is a professional responsibility, he says.

And even self-regulation is only the first step. Good ethics alone does not guarantee a lack of blind spots.

That's why Saucedo also suggests external regulation, and this doesn't have to slow down innovation.

"When you introduce regulations that are embedded with what is needed, things are done the right way. And when they're done the right way, they're more efficient and there is more room for innovation."

For businesses looking to incorporate machine learning rather than building it, he points to The Institute for Ethical AI & Machine Learning's AI-RFX Procurement Framework.

The idea is to abstract the initial high-level principles created at The Institute, such as the human augmentation mentioned earlier, and trust and privacy by design. It breaks these principles down into a security questionnaire.

"We've taken all of these principles, and we realised that understanding and agreeing on exact best-practice is very hard. What is universally agreed is what bad practice is."

This, along with access to the right stakeholders to evaluate the data and content, is enough to sort mature AI businesses from those "selling snake oil".

The institute is also contributing to some of the official industry standards that are being created for organisations like the police and the ISO, he explains.

And the work is far from done: even if a basic framework and regulation can be created with enough success to be adopted internationally, differing Western and Eastern ethics need to be accounted for.

"In the West you have good and bad, and in theEastit is more about balance," he says.

There are also the differing concepts of the self versus the community. The considerations quickly become philosophical and messy, a sign that they are a little bit more human.

"If we want to reach international standards and regulation, we need to be able to align on those foundational components, to know where everyone is coming from, he says.

Read the original:
Why AI bias can't be solved with more AI - BusinessCloud

Machine learning finds use in creating sharper maps of ‘ecosystem’ lines in the ocean – Firstpost

EOS | Jul 01, 2020 14:54:08 IST

On land, it's easy for us to see divisions between ecosystems: a rain forest's fan palms and vines stand in stark relief to the cacti of a high desert. Without detailed data or scientific measurements, we can tell a distinct difference in the ecosystems' flora and fauna.

But how do scientists draw those divisions in the ocean? A new paper proposes a tool to redraw the lines that define an ocean's ecosystems, lines originally penned by the seagoing oceanographer Alan Longhurst in the 1990s. The paper uses unsupervised learning, a machine learning method, to analyze the complex interplay between plankton species and nutrient fluxes. As a result, the tool could give researchers a more flexible definition of ecosystem regions.

Using the tool on global modeling output suggests that the ocean's surface has more than 100 different regions, or as few as 12 if aggregated, simplifying the 56 Longhurst regions. The research could complement ongoing efforts to improve fisheries management and satellite detection of shifting plankton under climate change. It could also direct researchers to more precise locations for field sampling.

A sea turtle in the aqua blue waters of Hawaii. Image: Rohit Tandon/Unsplash

Coccolithophores, diatoms, zooplankton, and other planktonic life-forms float on much of the ocean's sunlit surface. Scientists monitor plankton with long-term sampling stations and peer at their colors by satellite from above, but they don't have detailed maps of where plankton lives worldwide.

Models help fill the gaps in scientists' knowledge, and the latest research relies on an ocean model to simulate where 51 types of plankton amass on the surface ocean worldwide. The research then applies the new classification tool, called the systematic aggregated ecoprovince (SAGE) method, to discern where neighborhoods of like-minded plankton and nutrients appear.

SAGE relies, in part, on a type of machine learning algorithm called unsupervised learning. The algorithm's strength is that it searches for patterns unprompted by researchers.

To compare the tool to a simple example, if scientists told an algorithm to identify shapes in photographs like circles and squares, the researchers could supervise the process by telling the computer what a square and circle looked like before it began. But in unsupervised learning, the algorithm has no prior knowledge of shapes and will sift through many images to identify patterns of similar shapes itself.

Using an unsupervised approach gives SAGE the freedom to let patterns emerge that the scientists might not otherwise see.
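To give a flavour of this unsupervised step, here is a hedged scikit-learn sketch that clusters grid points by their plankton and nutrient profiles. The data is a random stand-in, and SAGE itself is more elaborate (the paper pairs dimensionality reduction with density-based clustering), so this only illustrates the idea of letting region labels emerge from the data.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Stand-in for model output: rows are ocean grid points, columns are the
# abundances of 51 plankton types plus a few nutrient fluxes.
profiles = rng.random((10_000, 55))

X = StandardScaler().fit_transform(profiles)   # put all variables on one scale
labels = KMeans(n_clusters=12, n_init=10).fit_predict(X)

# Each grid point now carries a region label; mapping the labels back onto
# latitude/longitude is what draws "ecoprovince" boundaries.
print(np.bincount(labels))
```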

"While my human eyes can't see these different regions that stand out, the machine can," first author and physical oceanographer Maike Sonnewald at Princeton University said. "And that's where the power of this method comes in." The method could be used more broadly by geoscientists in other fields to make sense of nonlinear data, said Sonnewald.

A machine-learning technique developed at MIT combs through global ocean data to find commonalities between marine locations, based on how phytoplankton species interact with each other. Using this approach, researchers have determined that the ocean can be split into over 100 types of provinces, and 12 megaprovinces, that are distinct in their ecological makeup.

Applying SAGE to model data, the tool noted 115 distinct ecological provinces, which can then be boiled down into 12 overarching regions.

One region appears in the center of nutrient-poor ocean gyres, whereas other regions show productive ecosystems along the coast and equator.

"You have regions that are kind of like the regions you'd see on land," Sonnewald said. One area in the heart of a desert-like region of the ocean is characterized by very small cells: "There's just not a lot of plankton biomass." The region that includes Peru's fertile coast, however, has a huge amount of stuff.

If scientists want more distinctions between communities, they can adjust the tool to see the full 115 regions. But having only 12 regions can be powerful too, said Sonnewald, because it demonstrates the similarities between the different ocean basins. The tool was published in a recent paper in the journal Science Advances.

Oceanographer Francois Ribalet at the University of Washington, who was not involved in the study, hopes to apply the tool to field data when he takes measurements on research cruises. He said identifying unique provinces gives scientists a hint of how ecosystems could react to changing ocean conditions.

"If we identify that an organism is very sensitive to temperature, then we can start to actually make some predictions," Ribalet said. Using the tool will help him tease out an ecosystem's key drivers and how it may react to future ocean warming.

Jenessa Duncombe. Text © 2020. AGU.

This story has been republished from Eos under the Creative Commons 3.0 license. Read the original story.


Read more:
Machine learning finds use in creating sharper maps of 'ecosystem' lines in the ocean - Firstpost

Machine Learning Market 2020 Professional Survey Report; Industry Growth, Shares, Opportunities And Forecast To 2026 – Surfacing Magazine

The Machine Learning Market research report is a valuable source of perceptive information for business strategists. This Machine Learning Market study provides comprehensive data which enhances the understanding, scope and application of the report.

Summary of Report @ Machine Learning Market

A thorough study of the competitive landscape of the global Machine Learning Market has been given, presenting insights into the company profiles, financial status, recent developments, mergers and acquisitions, and the SWOT analysis. This research report will give readers a clear idea of the overall market scenario to help them further decide on market projects.

The analysts have also analysed the drawbacks of ongoing Machine Learning trends and the opportunities contributing to the market's growth. The international Machine Learning market research report provides a perspective on the competitive landscape of worldwide markets. The report offers particulars that originate from an analysis of the focused market, and draws on innovations, trends, shares and costs assessed by Machine Learning industry experts to maintain a consistent investigation.

Market Segment by Regions, regional analysis covers

The Machine Learning analysis was made to include both qualitative and quantitative facets of the market with regard to leading global regions. The Machine Learning report also reinforces the information concerning aspects like major Machine Learning drivers and restraining factors that may shape the market, covering multiple sections, company profiles, and types, along with applications.

We do provide a sample of this report. Please go through the following information in order to request a sample copy.

This report sample includes:

Brief Introduction to the research report.

Table of Contents (Scope covered as a part of the study)

Top players in the market

Research framework (Structure Of The Report)

Research methodology adopted by Coherent Market Insights

Get Sample copy @ https://www.coherentmarketinsights.com/insight/request-sample/1098

Reasons why you should buy this report

Understand the current state and future of the Machine Learning Market in both developed and emerging markets.

The report assists in realigning the business strategies by highlighting the Machine Learning business priorities.

The report throws light on the segment expected to dominate the Machine Learning industry and market.

Forecasts the regions expected to witness the fastest growth.

The latest developments in the Machine Learning industry and details of the industry leaders along with their market share and strategies.

Saves time on entry-level research by identifying the growth, size, leading players and segments of the global Machine Learning Market.

The global report draws on primary and secondary research methodologies, collected from reliable sources, to generate a factual database. Data from market journals, publications, conferences, white papers and interviews with key market leaders are compiled to generate our segmentation and are mapped to a fair trajectory of the market during the forecast period.

The Request Discount option enables you to get a discount on the actual price of the report. Kindly fill in the form, and one of our consultants will get in touch with you to discuss your allocated budget and provide discounts.

Don't quarantine your research: keep your social distance and we will provide you a social DISCOUNT. Use the STAYHOME code with your precise requirement and get a FLAT 1000 USD OFF on all CMI reports.

Request for Discount @ https://www.coherentmarketinsights.com/insight/request-discount/1098

Market Drivers and Restraints:

Emergence of new technologies in Enterprise Mobility

Economies of Scale in the Operational Expenditure

Lack of Training Expertise and Skills

Data Security concerns

Key highlights of this report:

Overview of key market forces driving and restraining the market growth

Market and Forecast (2018-2026)

Analyses of market trends and technological improvements

Analyses of market competition dynamics to offer you a competitive edge

An analysis of strategies of major competitors

Workplace Transformation Services market Volume and Forecast (2018-2026)

Companies Market Share Analysis

Analysis of major industry segments

Detailed analyses of industry trends

Offers a clear understanding of the competitive landscape and key product segments

About Coherent Market Insights:

Coherent Market Insights is a prominent market research and consulting firm offering action-ready syndicated research reports, custom market analysis, consulting services, and competitive analysis through various recommendations related to emerging market trends, technologies, and potential absolute dollar opportunity.

Contact Us:

Mr. Shah
Coherent Market Insights
1001 4th Ave, #3200
Seattle, WA 98154
Tel: +1-206-701-6702
Email: sales@coherentmarketinsights.com
Visit here for more information: https://theemmasblog.blogspot.com/

Link:
Machine Learning Market 2020 Professional Survey Report; Industry Growth, Shares, Opportunities And Forecast To 2026 - Surfacing Magazine

Microsoft and Udacity partner in new $4 million machine-learning scholarship program for Microsoft Azure – TechRepublic

Applications are now open for the nanodegree program, which will help Udacity train developers on the Microsoft Azure cloud infrastructure.

Microsoft and Udacity are teaming up to invest $4 million in a machine learning (ML) training collaboration, beginning with the Machine Learning Scholarship Program for Microsoft Azure, which starts today.

The program focuses on artificial intelligence, which is continuing to grow at a fast pace. AI engineers are in high demand, particularly as enterprises build new cloud applications and move old ones to the cloud. The average AI salary in the US is $114,121 a year, based on data from Glassdoor.

"AI is driving transformation across organizations and there is increased demand for data science skills," said Julia White, corporate vice president, Azure Marketing, Microsoft, in a Microsoft blog post. "Through our collaboration with Udacity to offer low-code and advanced courses on Azure Machine Learning, we hope to expand data science expertise as experienced professionals will truly be invaluable resources to solving business problems."

SEE: Building the bionic brain (free PDF) (TechRepublic)

The interactive scholarship courses begin with a two-month-long course, "Introduction to machine learning on Azure with a low-code experience."

Students will work with live Azure environments directly within the Udacity classroom and build on these foundations with advanced techniques such as ensemble learning and deep learning.

To earn a spot in the foundations course, students will need to submit an application. According to the blog post, "Successful applicants will ideally have basic programming knowledge in any language, preferably Python, and be comfortable writing scripts and performing loop operations."

Udacity's nanodegrees have been growing in popularity. Monthly enrollment in Udacity's nanodegrees has increased by a factor of four since the beginning of the coronavirus lockdown. Among Udacity's consumer customers, in the three weeks starting March 9 the company saw a 56% jump in weekly active users and a 102% increase in new enrollments, and they've stayed at or just below those new levels since then, according to a Udacity spokesperson.

After students complete the foundations course, Udacity will select top performers to receive a scholarship to the new machine learning nanodegree program with Microsoft Azure.

This typically four-month nanodegree program will include:

Students who aren't selected for the scholarship will still be able to enroll in the nanodegree program when it is available to the general public.

Anyone interested in becoming an Azure Machine Learning engineer and learning from experts at the forefront of the field can apply for the scholarship here. Applications will be open from June 10 to June 30.


Image: NicolasMcComber / Getty Images

Read the rest here:
Microsoft and Udacity partner in new $4 million machine-learning scholarship program for Microsoft Azure - TechRepublic

Deploying Machine Learning Has Never Been This Easy – Analytics India Magazine

According to PwC, AI's potential global economic impact will reach USD 15.7 trillion by 2030. However, enterprises that look to deploy AI are often hampered by a lack of time, trust and talent. Especially in highly regulated sectors such as healthcare and finance, convincing customers to imbibe AI methodologies is an uphill task.

Of late, the AI community has seen a sporadic shift in AI adoption with the advent of AutoML tools and the introduction of customised hardware to cater to the needs of the algorithms. One of the most widely used AutoML tools in the industry is H2O Driverless AI. And when it comes to hardware, Intel has been consistently updating its tool stack to meet the high computational demands of AI workflows.

Now H2O.ai and Intel, two companies who have been spearheading the democratisation of the AI movement, join hands to develop solutions that leverage software and hardware capabilities respectively.

AI and machine-learning workflows are complex, and enterprises need more confidence in the validity of their AI models than a typical black-box environment can provide. The inexplicability and the complexity of feature engineering can be daunting to non-experts. So far, AutoML has proven to be the one-stop solution to these problems: such tools have alleviated the challenges by providing automated workflows, deployment-ready models and much more.

H2O.ai, especially, has pioneered the AutoML segment. The company has developed an open source, distributed in-memory machine learning platform with linear scalability that includes a module called H2OAutoML, which automates the machine learning workflow, including automatic training and tuning of many models within a user-specified time limit.
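A minimal sketch of what that H2OAutoML workflow looks like in Python; the file and column names are hypothetical.

```python
import h2o
from h2o.automl import H2OAutoML

h2o.init()

frame = h2o.import_file("train.csv")              # hypothetical training data
frame["target"] = frame["target"].asfactor()      # categorical target -> classification
features = [c for c in frame.columns if c != "target"]

# Trains and cross-validates many models (GBMs, GLMs, deep nets, stacked
# ensembles) within the time budget, then ranks them on a leaderboard.
aml = H2OAutoML(max_runtime_secs=300)
aml.train(x=features, y="target", training_frame=frame)

print(aml.leaderboard.head())
best_model = aml.leader                           # ready for prediction or export
```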

H2O.ai's flagship product, Driverless AI, meanwhile, can be used to fully automate some of the most challenging and productive tasks in applied data science, such as feature engineering, model tuning, model ensembling and model deployment.

But for these AI-based tools to work seamlessly, they need the backing of hardware that is dedicated to handling the computational intensity of machine learning operations.

Intel has been at the forefront of the digital revolution for over half a century. Today, Intel offers a wide range of technologies, including its Xeon Scalable processors, Optane Solid State Drives and optimised Intel software libraries, that bring in a much-needed mix of enhanced performance, AI inference, network functions, persistent memory bandwidth, and security.

Integrating H2O.ai's software portfolio with hardware and software technologies from Intel has resulted in solutions that can handle almost all the woes of an AI enterprise, from automated workflows to explainability to production-ready code that can be deployed anywhere.

For example, H2O Driverless AI, an automatic machine-learning platform, enables data science experts and beginners alike to complete within minutes AI tasks that usually take months. Today, more than 18,000 companies use open source H2O in mission-critical use cases for finance, insurance, healthcare, retail, telco, sales, and marketing.

The software capabilities of H2O.ai, combined with the hardware infrastructure of Intel, which includes 2nd Generation Xeon Scalable processors, Optane Solid State Drives and Ethernet Network Adapters, can empower enterprises to optimise performance and accelerate deployment.

Enterprises looking to increase productivity and business value, and to enjoy the competitive advantages of AI innovation, no longer have to wait, thanks to hardware-backed AutoML solutions.


Read the original here:
Deploying Machine Learning Has Never Been This Easy - Analytics India Magazine

Massey University’s Teo Susnjak on how Covid-19 broke machine learning, extreme data patterns, wealth and income inequality, bots and propaganda and…

This week's Top 5 comes from Teo Susnjak, a computer scientist specialising in machine learning. He is a Senior Lecturer in Information Technology at Massey University and is the developer behind GDPLive.

As always, we welcome your additions in the comments below or via email to david.chaston@interest.co.nz.

And if you're interested in contributing the occasional Top 5yourself, contact gareth.vaughan@interest.co.nz.

1. Covid-19 broke machine learning.

As the Covid-19 crisis started to unfold, we started to change our buying patterns. All of a sudden, some of the top purchasing items became: antibacterial soap, sanitiser, face masks, yeast and of course, toilet paper. As the demand for these unexpected items exploded, retail supply chains were disrupted. But they weren't the only ones affected.

Artificial intelligence systems began to break too. The MIT Technology Review reports:

Machine-learning models that run behind the scenes in inventory management, fraud detection, and marketing rely on a cycle of normal human behavior. But what counts as normal has changed, and now some are no longer working.

How bad the situation is depends on whom you talk to. According to Pactera Edge, a global AI consultancy, automation is in tailspin. Others say they are keeping a cautious eye on automated systems that are just about holding up, stepping in with a manual correction when needed.

What's clear is that the pandemic has revealed how intertwined our lives are with AI, exposing a delicate codependence in which changes to our behavior change how AI works, and changes to how AI works change our behavior. This is also a reminder that human involvement in automated systems remains key. "You can never sit and forget when you're in such extraordinary circumstances," says Cline.

Image source: MIT Technology Review

The extreme data capturing a previously unseen collapse in consumer spending, which feeds the real-time GDP predictor at GDPLive.net, also broke our machine learning algorithms.

2. Extreme data patterns.

The eminent economics and finance historian Niall Ferguson (not to be confused with Neil Ferguson, who also likes to create predictive models) recently remarked that the first month of the lockdown created conditions which took a full year to materialise during the Great Depression.

The chart below shows the consumption data falling off the cliff, generating inputs that broke econometrics and machine learning models.

What we want to see is a rapid V-shaped recovery in consumer spending. The chart below shows the most up-to-date consumer spending trends. Consumer spending has now largely recovered, but is still lower than that of the same period in 2019. One of the key questions will be whether or not this partial rebound will be temporary until the full economic impacts of the 'Great Lockdown' take effect.

Paymark tracks consumer spending on their new public dashboard. Check it out here.

3. Wealth and income inequality.

As the current economic crisis unfolds, GDP will take centre-stage again and all other measures which attempt to quantify wellbeing and social inequalities will likely be relegated until economic stability returns.

When the conversation does return to this topic, AI might have something to contribute.

Effectively addressing income inequality is a key challenge in economics, with taxation being the most useful tool. Although taxation can lead to greater equality, over-taxation discourages work and entrepreneurship, and motivates tax avoidance, ultimately leaving fewer resources to redistribute. Striking an optimal balance is not straightforward.

The MIT Technology Review reports that AI researchers at the US business technology company Salesforce implemented machine learning techniques that identify optimal tax policies for a simulated economy.

In one early result, the system found a policy that, in terms of maximising both productivity and income equality, was 16% fairer than a state-of-the-art progressive tax framework studied by academic economists. The improvement over current US policy was even greater.

Image source: MIT Technology Review

It is unlikely that AI will have anything meaningful to contribute towards tackling wealth inequality though. If Walter Scheidel, author of The Great Leveller and professor of ancient history at Stanford is correct, then the only historically effective levellers of inequality are: wars, revolutions, state collapses and...pandemics.

4. Bots and propaganda.

Over the coming months, arguments over what has caused this crisis, whether it was the pandemic or the over-reactive lockdown policies, will occupy much of social media. According to The MIT Technology Review, bots are already being weaponised to fight these battles.

Nearly half of Twitter accounts pushing to reopen America may be bots. Bot activity has become an expected part of Twitter discourse for any politicized event. Across US and foreign elections and natural disasters, their involvement is normally between 10 and 20%. But in a new study, researchers from Carnegie Mellon University have found that bots may account for between 45 and 60% of Twitter accounts discussing covid-19.

To perform their analysis, the researchers studied more than 200 million tweets discussing coronavirus or covid-19 since January. They used machine-learning and network analysis techniques to identify which accounts were spreading disinformation and which were most likely bots or cyborgs (accounts run jointly by bots and humans).

They discovered more than 100 types of inaccurate Covid-19 stories and found that not only were bots gaining traction and accumulating followers, but they accounted for 82% of the top 50 and 62% of the top 1,000 influential retweeters.

Image source: MIT Technology Review

How confident are you that you can tell the difference between a human and a bot? You can test yourself out here. BTW, I failed.
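
The CMU researchers' exact pipeline is not published in this article, but the feature-based half of bot detection is easy to illustrate. Below is a hedged, generic sketch with scikit-learn on synthetic account statistics; the features, distributions and labels are invented for illustration and are not the study's method or data.

```python
# Hedged sketch: a toy bot-vs-human classifier over simple account
# features. All numbers are synthetic assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 2000
# Feature columns: tweets_per_day, followers_to_following, account_age_days
humans = np.column_stack([rng.gamma(2.0, 5.0, n), rng.gamma(2.0, 1.5, n),
                          rng.uniform(200, 4000, n)])
bots = np.column_stack([rng.gamma(8.0, 20.0, n), rng.gamma(1.0, 0.3, n),
                        rng.uniform(1, 400, n)])
X = np.vstack([humans, bots])
y = np.array([0] * n + [1] * n)  # 0 = human, 1 = bot

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"Held-out accuracy: {clf.score(X_te, y_te):.2%}")
```

Real systems add network features such as retweet graphs and timing patterns, which is the "network analysis" half the study mentions.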

5. Primed to believe bad predictions.

This has been a particularly uncertain time. We humans don't like uncertainty, especially once it reaches a given threshold. We have an amazing brain that performs complex pattern recognition, enabling us to predict what's around the corner. When we do this, we resolve uncertainty and our brain releases dopamine, making us feel good. When we cannot make sense of the data and the uncertainty remains unresolved, stress kicks in.

Writing on this in Forbes, John Jennings points out:

Research shows we dislike uncertainty so much that if we have to choose between a scenario in which we know we will receive electric shocks versus a situation in which the shocks will occur randomly, we'll select the more painful option of certain shocks.

The article goes on to highlight how we tend to react in uncertain times. Aversion to uncertainty drives some of us to try to resolve it immediately through simple answers that align with our existing worldviews. For others, there will be a greater tendency to cluster around like-minded people with similar worldviews as this is comforting. There are some amongst us who are information junkies and their hunt for new data to fill in the knowledge gaps will go into overdrive - with each new nugget of information generating a dopamine hit. Lastly, a number of us will rely on experts who will use their crystal balls to find for us the elusive signal in all the noise, and ultimately tell us what will happen.

The last one is perhaps the most pertinent right now. Since we have a built-in drive to avoid ambiguity, in stressful times such as this our biology makes us susceptible to accepting bad predictions about the future as gospel, especially if they are generated by experts.

Experts do not have a strong track record at predicting the future, considering how much weight is given to their forecasts. Their predictive models failed to see the Global Financial Crisis coming, they overstated the economic fallout of Brexit, the climate change models and their forecasts are consistently off-track, and now we have the pandemic models.

Image source: drroyspencer.com

The author suggests that this time "presents the mother of all opportunities to practice learning to live with uncertainty". I would also add that a good dose of humility on the side of the experts, and a good dose of scepticism in their ability to accurately predict the future both from the public and decision makers, would also serve us well.

Read the original post:
Massey University's Teo Susnjak on how Covid-19 broke machine learning, extreme data patterns, wealth and income inequality, bots and propaganda and...

Bootstrapper Breakfast: ML & COVID-19 — Time is of the Essence – Patch.com

Friday, June 26, 9am-10:30am Pacific time

Special Topic: Machine Learning & COVID-19: Time is of the Essence

Danilo Tomanovic will cover events including the 2003 SARS epidemic and aspects of Machine Learning that can be applied to anticipate pandemic risk going forward. This presentation will offer a practical review of key dates and events during the COVID-19 pandemic and a fresh perspective on how we may collectively prevent something of this scale from happening again. For the audience, it is an opportunity to learn what they can do now, in preparation for and anticipation of future virus-related concerns, by incorporating Machine Learning into their product/service designs.

Danilo Tomanovic's career spans sales, marketing, product development, global transaction banking, investment banking and risk management. He is the President and Founder of Machine Learning Deep Dive, which focuses on creating ML projects as proofs of concept aimed at market forces and demands.

This briefing will be followed by Q&A and our regular roundtable discussion.

At a Bootstrappers Breakfast(R) we have serious conversations about growing a business based on internal cashflow and organic profit: this is for founders who are actively bootstrapping a startup. We meet in the back room at several Silicon Valley restaurants, so space is limited - please RSVP. Join us upstairs for a little caffeine and sharing among the startup community.

Join other entrepreneurs who eat problems for breakfast.

* Compare Notes

* Exchange Ideas

* Learn from Others' Mistakes

* Brainstorm with Peers

* Find Partners

* Small Group Atmosphere

* Serious Conversation

No Charge.

Presented by Bootstrappers Breakfast.

https://www.meetup.com/Bootstr...

events@bootstrappersbreakfast.com

408-252-9676

Read more:
Bootstrapper Breakfast: ML & COVID-19 -- Time is of the Essence - Patch.com

LeanTaaS Raises $130 Million to Strengthen Its Machine Learning Software Platform to Continue Helping Hospitals Achieve Operational Excellence -…

SANTA CLARA, Calif.--(BUSINESS WIRE)--LeanTaaS, Inc., a Silicon Valley software innovator that increases patient access and transforms operational performance for healthcare providers, announced a $130 million Series D funding round led by Insight Partners with participation from Goldman Sachs. The funds will be used to invest in building out the existing suite of products (iQueue for Operating Rooms, iQueue for Infusion Centers and iQueue for Inpatient Beds) as well as scaling the engineering, product and GoToMarket teams, and expanding the iQueue platform to include new products.

"LeanTaaS is uniquely positioned to help hospitals and health systems across the country face the mounting operational and financial pressures exacerbated by the coronavirus. This funding will allow us to continue to grow and expand our impact while helping healthcare organizations deliver better care at a lower cost," said Mohan Giridharadas, founder and CEO of LeanTaaS. "Our company momentum over the past several years - including greater than 50% revenue growth in 2020 and negative churn despite a difficult macro environment - reflects the increasing demand for scalable predictive analytics solutions that optimize how health systems increase operational utilization and efficiency. It also highlights how we've been able to develop and maintain deep partnerships with 100+ health systems and 300+ hospitals in order to keep them resilient and agile in the face of uncertain demand and supply conditions."

With this investment, LeanTaaS has raised more than $250 million in aggregate, including more than $150 million from Insight Partners. As part of the transaction, Insight Partners' Jeff Horing and Jon Rosenbaum and Goldman Sachs' Antoine Munfa will join LeanTaaS' Board of Directors.

"Healthcare operations in the U.S. are increasingly complex and under immense pressure to innovate; this has only been exacerbated by the prioritization of unique demands from the current pandemic," said Jeff Horing, co-founder and Managing Director at Insight Partners. "Even under these unprecedented circumstances, LeanTaaS has demonstrated the effectiveness of its ML-driven platform in optimizing how hospitals and health systems manage expensive, scarce resources like infusion center chairs, operating rooms, and inpatient beds. After leading the company's Series B and C rounds, we have formed a deep partnership with Mohan and team. We look forward to continuing to help LeanTaaS scale its market presence and customer impact."

Although health systems across the country have invested in cutting-edge medical equipment and infrastructure, they cannot maximize the use of such assets and increase operational efficiencies to improve their bottom lines with human-based scheduling or unsophisticated tools. LeanTaaS develops specialized software that increases patient access to medical care by optimizing how health systems schedule and allocate the use of expensive, constrained resources. By using LeanTaaS' product solutions, healthcare systems can harness the power of sophisticated, AI/ML-driven software to improve operational efficiencies, increase access, and reduce costs.

"We continue to be impressed by the LeanTaaS team. As hospitals and health systems begin to look toward a post-COVID-19 world, the agility and resilience LeanTaaS' solutions provide will be key to restoring and growing their operations," said Antoine Munfa, Managing Director of Goldman Sachs Growth.

LeanTaaS' solutions have now been deployed in more than 300 hospitals across the U.S., including five of the 10 largest health networks and 12 of the top 20 hospitals in the U.S. according to U.S. News & World Report. These hospitals use the iQueue platform to optimize capacity utilization in infusion centers, operating rooms, and inpatient beds. iQueue for Infusion Centers is used by 7,500+ chairs across 300+ infusion centers, including 70 percent of the National Comprehensive Cancer Network and more than 50 percent of National Cancer Institute hospitals. iQueue for Operating Rooms is used by more than 1,750 ORs across 34 health systems to perform more surgical cases during business hours, increase competitiveness in the marketplace, and improve the patient experience.

"I am excited about LeanTaaS' continued growth and market validation. As healthcare moves into the digital age, iQueue overcomes the inherent deficiencies in capacity planning and optimization found in EHRs. We are very excited to partner with LeanTaaS and implement iQueue for Operating Rooms," said Dr. Rob Ferguson, System Medical Director, Surgical Operations, Intermountain Healthcare.

Concurrent with the funding, LeanTaaS announced that Niloy Sanyal, the former CMO at Omnicell and GE Digital, would be joining as its new Chief Marketing Officer. Also, Sanjeev Agrawal has been designated as LeanTaaS' Chief Operating Officer in addition to his current role as the President. "We are excited to welcome Niloy to LeanTaaS. His breadth and depth of experience will help us accelerate our growth as the industry evolves to a more data-driven way of making decisions," said Agrawal.

About LeanTaaS
LeanTaaS provides software solutions that combine lean principles, predictive analytics, and machine learning to transform hospital and infusion center operations. The company's software is being used by over 100 health systems across the nation, all of which rely on the iQueue cloud-based solutions to increase patient access, decrease wait times, reduce healthcare delivery costs, and improve revenue. LeanTaaS is based in Santa Clara, California, and Charlotte, North Carolina. For more information about LeanTaaS, please visit https://leantaas.com/, and connect on Twitter, Facebook and LinkedIn.

About Insight Partners
Insight Partners is a leading global venture capital and private equity firm investing in high-growth technology and software ScaleUp companies that are driving transformative change in their industries. Founded in 1995, Insight Partners has invested in more than 400 companies worldwide and has raised, through a series of funds, more than $30 billion in capital commitments. Insight's mission is to find, fund, and work successfully with visionary executives, providing them with practical, hands-on software expertise to foster long-term success. Across its people and its portfolio, Insight encourages a culture around a belief that ScaleUp companies and growth create opportunity for all. For more information on Insight and all its investments, visit insightpartners.com or follow us on Twitter @insightpartners.

About Goldman Sachs Growth
Founded in 1869, The Goldman Sachs Group, Inc. is a leading global investment banking, securities and investment management firm. Goldman Sachs Merchant Banking Division (MBD) is the primary center for the firm's long-term principal investing activity. As part of MBD, Goldman Sachs Growth is the dedicated growth equity team within Goldman Sachs, with over 25 years of investing history, over $8 billion of assets under management, and 9 offices globally.

LeanTaaS and iQueue are trademarks of LeanTaaS. All other brand names and product names are trademarks or registered trademarks of their respective companies.

Read the original here:
LeanTaaS Raises $130 Million to Strengthen Its Machine Learning Software Platform to Continue Helping Hospitals Achieve Operational Excellence -...

Yale Researchers Use Single-Cell Analysis and Machine Learning to Identify Major COVID-19 Target – HospiMedica

Image: The Respiratory Epithelium (Photo courtesy of Wikimedia Commons)

In the study, the scientists identified ciliated cells as the major target of SARS-CoV-2 infection. The bronchial epithelium acts as a protective barrier against allergens and pathogens, and its cilia remove mucus and other particles from the respiratory tract. The findings offer insight into how the virus causes disease. The scientists infected human bronchial epithelial cells (HBECs) in an air-liquid interface with SARS-CoV-2. Over a period of three days, they used single-cell RNA sequencing to identify signatures of infection dynamics, such as the number of infected cells across cell types and whether SARS-CoV-2 activated an immune response in infected cells.

The scientists utilized advanced algorithms to develop working hypotheses and used electron microscopy to learn about the structural basis of the virus and its target cells. These observations provide insights about the host-virus interaction and measure SARS-CoV-2 cell tropism, the ability of the virus to infect different cell types, as identified by the algorithms. After three days, thousands of cultured cells had become infected. The scientists analyzed data from the infected cells along with neighboring bystander cells, and observed that ciliated cells made up 83% of the infected cells. These cells were the first and primary source of infection throughout the study. The virus also targeted other epithelial cell types, including basal and club cells. Goblet, neuroendocrine and tuft cells, and ionocytes were less likely to become infected.

The gene signatures revealed an innate immune response associated with the protein Interleukin 6 (IL-6). The analysis also showed a shift in polyadenylated viral transcripts. Lastly, the uninfected bystander cells also showed an immune response, likely due to signals from the infected cells. Pulling from tens of thousands of genes, the algorithms locate the genetic differences between infected and non-infected cells. In the next phase of the study, the scientists will examine the severity of SARS-CoV-2 compared to other coronaviruses and conduct tests in animal models.
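
The counting step described here can be sketched in a few lines. The example below, assuming pandas and a toy table of per-cell viral transcript counts, only illustrates the idea of calling cells infected and comparing rates across cell types; the study's actual scRNA-seq pipeline is far more involved, and the threshold is an invented placeholder.

```python
# Hedged sketch: label cells "infected" by viral transcript counts and
# tally infection rates per cell type. Counts and threshold are toy values.
import pandas as pd

cells = pd.DataFrame({
    "cell_type": ["ciliated", "ciliated", "basal", "club", "goblet", "ciliated"],
    "viral_umi": [142, 98, 3, 12, 0, 77],  # SARS-CoV-2 transcripts per cell
})

THRESHOLD = 10  # assumed minimum viral UMIs to call a cell infected
cells["infected"] = cells["viral_umi"] >= THRESHOLD

rates = cells.groupby("cell_type")["infected"].mean().sort_values(ascending=False)
print(rates)  # fraction of infected cells per annotated cell type
```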

"Machine learning allows us to generate hypotheses. It's a different way of doing science. We go in with as few hypotheses as possible, measure everything we can measure, and the algorithms present the hypothesis to us," said senior author David van Dijk, PhD, an assistant professor of medicine in the Section of Cardiovascular Medicine and Computer Science.

Related Links: Yale School of Medicine

Read the rest here:
Yale Researchers Use Single-Cell Analysis and Machine Learning to Identify Major COVID-19 Target - HospiMedica

How AI & Machine Learning is Infiltrating the Fintech Industry? – Customer Think


Fintech is a buzzword in the modern world, which essentially means financial technology. It uses technology to offer improved financial services and solutions.

How are AI and machine learning making inroads across industries, including fintech? It's an important question in the business world globally.

The use of artificial intelligence (AI) and machine learning (ML) is evolving in the finance market, owing to their exceptional benefits like more efficient processes, better financial analysis and customer engagement.

According to a prediction by Autonomous Research, AI technologies will allow financial institutions to reduce their operational costs by 22% by 2030. AI and ML are truly efficient tools in the financial sector. In this blog, we discuss how they actually help fintech and what benefits these technologies can bring to the industry.

The implementation of AI and ML in the financial landscape has been transforming the industry. As fintech is a developing market, it requires industry-specific solutions to meet its goals. AI tools and machine learning can offer something great here.

Are you eager to know the impact of AI and ML on fintech? These disruptive technologies not only improve accuracy but also speed up the entire financial process by applying various proven methodologies.

AI-based financial solutions are focused on the crucial needs of the modern financial sector, such as better customer experience, cost-effectiveness, real-time data integration, and enhanced security. Adoption of AI and its allied applications enables the industry to create a better, more engaging financial environment for its customers.

The use of AI and ML has streamlined financial and banking operations. With the help of such smart developments, fintech companies are delivering tailored products and services to meet the needs of the evolving market.

According to a study by research group Forrester, around 50% of financial services and insurance companies already use AI globally. And the number is expected to grow with newer technology advancements.

You may be wondering why AI and ML are becoming more important in fintech. In this section, we explain how these technologies are infiltrating the industry.

The need for better, safer, and customized solutions is rising with customers' expectations. Automation has helped the fintech industry provide better customer service and experience.

Customer-facing systems such as AI interfaces and Chatbots can offer useful advice while reducing the cost of staffing. Moreover, AI can automate the back office process and make it seamless.

Automation can greatly help Fintech firms to save time as well as money. Using AI and ML, the industry has ample opportunities for reducing human errors and improving customer support.

Finance, insurance and banking firms can leverage AI tools to make better decisions. Management decisions become data-driven, which creates a distinct advantage for management.

Machine learning effectively analyzes data and delivers the outcomes that help officials cut costs. It also empowers organizations to solve specific problems effectively.

Technologies are meant to deliver convenience and improved speed, but along with these benefits comes an increase in online fraud. Keeping this in mind, fintech companies and financial institutions are investing in AI and machine learning to defeat fraudulent transactions.

AI and machine learning solutions are strong enough to react in real time and can analyze more data quickly. Organizations can efficiently find patterns and recognize fraudulent processes using different machine learning models. A fintech software development company can help build secure financial software and apps using these technologies.
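
One common pattern behind such systems, sketched here with scikit-learn under illustrative assumptions (the feature columns and contamination rate are invented), is unsupervised anomaly detection over transaction features:

```python
# Hedged sketch: flag unusual transactions with an Isolation Forest.
# Feature columns and values are synthetic assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(7)
# Columns: amount, seconds_since_last_txn, distance_from_home_km
normal = np.column_stack([rng.gamma(2.0, 30.0, 5000),
                          rng.gamma(3.0, 600.0, 5000),
                          rng.gamma(1.0, 5.0, 5000)])
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

new_txns = np.array([
    [45.0, 1800.0, 3.2],     # routine purchase
    [4999.0, 12.0, 8500.0],  # large, rapid, far from home
])
print(model.predict(new_txns))  # 1 = looks normal, -1 = flag for review
```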

With AI and ML, huge amounts of data can be analyzed and optimized for better applications. Hence, fintech is precisely the industry where AI and machine learning innovations have a great future.

Owing to their potential benefits, automation and machine learning are increasingly used in the fintech industry. Smart wallets, for example, learn and monitor users' behaviour and activities so that appropriate information can be provided about their expenses.

Fintech firms are working with development and technology leaders to bring new concepts that are effective and personalized. Artificial intelligence, machine learning, and allied technologies are playing a vital role in helping financial organizations improve skills and customer satisfaction while reducing costs.

In the developing world of fintech, it is crucial for companies to categorize clients by analyzing data and allied patterns. AI tools show excellent capabilities here, automating the process of profiling clients based on their risk profiles. This profiling work helps experts give product recommendations to customers in an appropriate and automated way.

Predictive analytics is another competitive advantage of using AI tools in the financial sector. It is helpful to improve sales, optimize resource use, and enhance operational efficiency.

With machine learning algorithms, businesses can effectively gather and analyze huge data sets to make faster and more accurate predictions of future trends in the financial market. Accordingly, they can offer specific solutions for customers.
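
In its simplest form, such trend prediction reduces to forecasting the next value of a series from its recent history. A hedged sketch, assuming scikit-learn and a synthetic spending index, is shown below:

```python
# Hedged sketch: next-step forecasting from lagged values with linear
# regression. The series is synthetic; real pipelines add richer
# features, cross-validation, and backtesting.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
series = np.cumsum(rng.normal(0.1, 1.0, 400))  # synthetic spending index

LAGS = 5  # predict s[t] from the previous five observations
X = np.column_stack([series[i:len(series) - LAGS + i] for i in range(LAGS)])
y = series[LAGS:]

model = LinearRegression().fit(X[:-50], y[:-50])  # hold out the last 50 steps
print(f"Held-out R^2: {model.score(X[-50:], y[-50:]):.3f}")
print(f"Next-step forecast: {model.predict(series[-LAGS:].reshape(1, -1))[0]:.2f}")
```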

As the market continues to demand easier and faster transactions, emerging technologies, such as artificial intelligence and machine learning, will remain crucial for the Fintech sector.

Innovations based on AI and ML are empowering the Fintech industry significantly. As a result, financial institutions are now offering better financial services to customers with excellence.

Leading financial and banking firms globally are using the convenient features of artificial intelligence to make business more stable and streamlined.

View post:
How AI & Machine Learning is Infiltrating the Fintech Industry? - Customer Think

AI and Machine Learning Technologies Are On the Rise Globally, with Governments Launching Initiatives to Support Adoption: Report – Crowdfund Insider

Kate MacDonald, New Zealand Government Fellow at the World Economic Forum, and Lofred Madzou, Project Lead, AI and Machine Learning at the World Economic Forum have published a report that explains how AI can benefit everyone.

According to MacDonald and Madzou, artificial intelligence can improve the daily lives of just about everyone; however, we still need to address issues such as the accuracy of AI applications, the degree of human control, transparency, bias, and privacy. The use of AI also needs to be carefully and ethically managed, they recommend.

As mentioned in a blog post by MacDonald and Madzou:

One way to [ensure ethical practice in AI] is to set up a national Centre for Excellence to champion the ethical use of AI and help roll out training and awareness raising. A number of countries already have centres of excellence; those which don't, should.

The blog further notes:

AI can be used to enhance the accuracy and efficiency of decision-making and to improve lives through new apps and services. It can be used to solve some of the thorny policy problems of climate change, infrastructure and healthcare. It is no surprise that governments are therefore looking at ways to build AI expertise and understanding, both within the public sector but also within the wider community.

As noted by MacDonald and Madzou, the UK has established many Office for AI centers, which aim to support the responsible adoption of AI technologies for the benefit of everyone. These UK based centers ensure that AI is safe through proper governance, strong ethical foundations and understanding of key issues such as the future of work.

The work environment is changing rapidly, especially since the COVID-19 outbreak. Many people are now working remotely, and Fintech companies have managed to raise a lot of capital to launch special services for professionals who may reside in a different jurisdiction than their employer. This can make it challenging for HR departments to take care of taxes, compliance, and other routine work procedures. That's why companies have developed remote working solutions to support companies during these challenging times.

Many firms might now require advanced cybersecurity solutions that also depend on various AI and machine learning algorithms.

The blog post notes:

AI Singapore is bringing together all Singapore-based research institutions and the AI ecosystem start-ups and companies to catalyze, synergize and boost Singapores capability to power its digital economy. Its objective is to use AI to address major challenges currently affecting society and industry.

As covered recently, AI and machine learning (ML) algorithms are increasingly being used to identify fraudulent transactions.

As reported in August 2020, the Hong Kong Institute for Monetary and Financial Research (HKIMR), the research segment of the Hong Kong Academy of Finance (AoF), had published a report on AI and banking. Entitled Artificial Intelligence in Banking: The Changing Landscape in Compliance and Supervision, the report seeks to provide insights on the long-term development strategy and direction of Hong Kong's financial industry.

In Hong Kong, the use of AI in the banking industry is said to be expanding including front-line businesses, risk management, and back-office operations. The tech is poised to tackle tasks like credit assessments and fraud detection. As well, banks are using AI to better serve their customers.

Policymakers are also exploring the use of AI in improving compliance (Regtech) and supervisory operations (Suptech), something that is anticipated to be mutually beneficial to banks and regulators as it can lower the burden on the financial institution while streamlining the regulator process.

The blog by MacDonald and Madzou also mentions that India has established a Centre of Excellence in AI to enhance the delivery of AI government e-services. The blog noted that the Centre will serve as a platform for innovation and act as a gateway to test and develop solutions and build capacity across government departments.

The blog post added that Canada is notably the world's first country to introduce a National AI Strategy, and to also establish various centers of excellence in AI research and innovation at local universities. The blog further states that this investment in academics and researchers has built on Canada's reputation as a leading AI research hub.

MacDonald and Madzou also mentioned that Malta has launched the Malta Digital Innovation Authority, which serves as a regulatory body that handles governmental policies that focus on positioning Malta as a centre of excellence and innovation in digital technologies. The island country's Innovation Authority is responsible for establishing and enforcing relevant standards while taking appropriate measures to ensure consumer protection.

Visit link:
AI and Machine Learning Technologies Are On the Rise Globally, with Governments Launching Initiatives to Support Adoption: Report - Crowdfund Insider

CORRECTING and REPLACING Anyscale Hosts Inaugural Ray Summit on Scalable Python and Scalable Machine Learning – Yahoo Finance

Creators of Ray Open Source Project Gather Industry Experts for Two-Day Event on Building Distributed Applications at Scale

Please replace the release with the following corrected version due to multiple revisions.

The updated release reads:

ANYSCALE HOSTS INAUGURAL RAY SUMMIT ON SCALABLE PYTHON AND SCALABLE MACHINE LEARNING

Creators of Ray Open Source Project Gather Industry Experts for Two-Day Event on Building Distributed Applications at Scale

Anyscale, the distributed programming platform company, is proud to announce Ray Summit, an industry conference dedicated to the use of the Ray open source framework for overcoming challenges in distributed computing at scale. The two-day virtual event is scheduled for Sept. 30 to Oct. 1, 2020.

With the power of Ray, developers can build applications and easily scale them from a laptop to a cluster, eliminating the need for in-house distributed computing expertise. Ray Summit brings together a leading community of architects, machine learning engineers, researchers, and developers building the next generation of scalable, distributed, high-performance Python and machine learning applications. Experts from organizations including Google, Amazon, Microsoft, Morgan Stanley, and more will showcase Ray best practices, real-world case studies, and the latest research in AI and other scalable systems built on Ray.
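
Ray's core abstraction is small enough to show in full. The sketch below uses the standard public API (ray.init, @ray.remote, ray.get); the cluster-address comment reflects the usual pattern rather than anything specific to this announcement.

```python
# Minimal Ray example: a plain function becomes a parallel task.
import ray

ray.init()  # local mode; on a cluster, typically ray.init(address="auto")

@ray.remote
def square(x):
    return x * x

# Dispatch 8 tasks in parallel and gather the results.
futures = [square.remote(i) for i in range(8)]
print(ray.get(futures))  # [0, 1, 4, 9, 16, 25, 36, 49]
```

The same script runs unchanged from a laptop to a cluster, which is the "laptop to cluster" claim above.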

"Ray Summit gives individuals and organizations the opportunity to share expertise and learn from the brightest minds in the industry about leveraging Ray to simplify distributed computing," said Robert Nishihara, Ray co-creator and Anyscale co-founder and CEO. "Its also the perfect opportunity to build on Rays established popularity in the open source community and celebrate achievements in innovation with Ray."

Anyscale will announce the v1.0 release of the Ray open source framework at the Summit and unveil new additions to a growing list of popular third-party machine learning libraries and frameworks on top of Ray.

The Summit will feature keynote presentations, general sessions, and tutorials suited to attendees with various experience and skill levels using Ray. Attendees will learn the basics of using Ray to scale Python applications and machine learning applications from machine learning visionaries and experts including:

"It is essential to provide our customers with an enterprise grade platform as they build out intelligent autonomous systems applications," said Mark Hammond, GM Autonomous Systems, Microsoft. "Microsoft Project Bonsai leverages Ray and Azure to provide transparent scaling for both reinforcement learning training and professional simulation workloads, so our customers can focus on the machine teaching needed to build their sophisticated, real world applications. Im happy we will be able to share more on this at the inaugural Anyscale Ray Summit."

To view the full event schedule, please visit: https://events.linuxfoundation.org/ray-summit/program/schedule/

For complimentary registration to Ray Summit, please visit: https://events.linuxfoundation.org/ray-summit/register/

About Anyscale

Anyscale is the future of distributed computing. Founded by the creators of Ray, an open source project from the UC Berkeley RISELab, Anyscale enables developers of all skill levels to easily build applications that run at any scale, from a laptop to a data center. Anyscale empowers organizations to bring AI applications to production faster, reduce development costs, and eliminate the need for in-house expertise to build, deploy and manage these applications. Backed by Andreessen Horowitz, Anyscale is based in Berkeley, CA. http://www.anyscale.com.

View source version on businesswire.com: https://www.businesswire.com/news/home/20200812005122/en/

Contacts

Media Contact:
Allison Stokes
fama PR for Anyscale
anyscale@famapr.com
617-986-5010

Link:
CORRECTING and REPLACING Anyscale Hosts Inaugural Ray Summit on Scalable Python and Scalable Machine Learning - Yahoo Finance