Pursue a future in big data and machine learning with these classes – Mashable

Products featured here are selected by our partners at StackCommerce. If you buy something through links on our site, Mashable may earn an affiliate commission. All instructors come from solid technical backgrounds.

Image: pexels

By StackCommerce, Mashable Shopping, 2020-06-05 19:43:23 UTC

TL;DR: Get involved with the world's most valuable resource - data, of course - with The Complete 2020 Big Data and Machine Learning Bundle for $39.90, a 96% savings as of June 5.

Big data has gotten so big that the adjective doesn't even do it justice any longer. If anything, it should be described as gargantuan data, given how the entire digital universe is expected to generate 44 zettabytes of data by the end of this year. WTF is a zettabyte? It's equal to one sextillion (10^21), or 2^70, bytes. It's a lot.

It's never been clearer that data is the world's most valuable resource, making now an opportune time to get to grips with all things data. The Complete 2020 Big Data and Machine Learning Bundle can be your springboard to exploring a career in data science and data analysis.

Big Data and Machine Learning are intimidating concepts, which is why this bundle of courses demystifies them in a way that beginners will understand. After you've familiarized yourself with foundational concepts, you will move on to the nitty-gritty and get the chance to arm yourself with skills including analyzing and visualizing data with tools like Elasticsearch, creating neural networks and deep learning structures with Keras, processing a torrential downpour of data in real time using Spark Streaming, translating complex analysis problems into digestible chunks with MapReduce, and taming data using Hadoop.

Look, we know all this sounds daunting, but trust that you'll be able to learn and synthesize everything, all thanks to the help of expert instructors who know their stuff.

For a limited time, you can gain access to the bundle on sale for only $39.90.

Here is the original post:
Pursue a future in big data and machine learning with these classes - Mashable

Machine Learning Market 2020 Professional Survey Report; Industry Growth, Shares, Opportunities And Forecast To 2026 – Surfacing Magazine

The Machine Learning Market research report is a valuable source of insight for business strategists. This Machine Learning Market study provides comprehensive data that enhances the understanding, scope and application of the report.

Summary of Report @ Machine Learning Market

A thorough study of the competitive landscape of the global Machine Learning Market has been given, presenting insights into company profiles, financial status, recent developments, mergers and acquisitions, and SWOT analysis. This research report gives readers a clear idea of the overall market scenario to help them decide on projects in this market.

The analysts have also examined drawbacks of ongoing Machine Learning trends and the opportunities that are contributing to the increased growth of the market. The international Machine Learning market research report provides a perspective on the competitive landscape of worldwide markets. The report offers particulars that originate from analysis of the focused market, drawing on innovations, trends, shares and costs identified by Machine Learning industry experts to maintain a consistent investigation.

Market Segment by Regions, regional analysis covers

The Machine Learning analysis was designed to include both qualitative and quantitative facets of the market with regard to the leading global regions. The Machine Learning report also reinforces information concerning aspects such as the major drivers and controlling factors that will shape the market. It also covers multiple sections, including company profiles, types, and applications.

We provide a sample of this report. Please go through the following information in order to request a sample copy.

This report sample includes:

Brief Introduction to the research report.

Table of Contents (Scope covered as a part of the study)

Top players in the market

Research framework (Structure Of The Report)

Research methodology adopted by Coherent Market Insights

Get Sample copy @ https://www.coherentmarketinsights.com/insight/request-sample/1098

Reasons why you should buy this report

Understand the current and future state of the Machine Learning Market in both developed and emerging markets.

The report assists in realigning the business strategies by highlighting the Machine Learning business priorities.

The report throws light on the segment expected to dominate the Machine Learning industry and market.

Forecasts the regions expected to witness the fastest growth.

The latest developments in the Machine Learning industry and details of the industry leaders along with their market share and strategies.

Saves time on entry-level research by identifying the growth, size, leading players and segments of the global Machine Learning Market.

The global report is compiled using primary and secondary research methodologies, drawing on reliable sources intended to generate a factual database. Data from market journals, publications, conferences, white papers and interviews with key market leaders are compiled to generate our segmentation and mapped to a fair trajectory of the market during the forecast period.

The Request Discount option enables you to get a discount on the actual price of the report. Kindly fill in the form, and one of our consultants will get in touch with you to discuss your allocated budget and provide a discount.

Don't Quarantine Your Research: keep your social distance and we will provide you a social DISCOUNT. Use the STAYHOME code in your requirement and get FLAT 1000 USD OFF on all CMI reports.

Request for Discount @ https://www.coherentmarketinsights.com/insight/request-discount/1098

Market Drivers and Restraints:

Emergence of new technologies in Enterprise Mobility

Economies of Scale in the Operational Expenditure

Lack of Training Expertise and Skills

Data Security concerns

Key highlights of this report:

Overview of key market forces driving and restraining the market growth

Market and Forecast (2018-2026)

Analyses of market trends and technological improvements

Analyses of market competition dynamics to offer you a competitive edge

An analysis of strategies of major competitors

Workplace Transformation Services market Volume and Forecast (2018-2026)

Companies Market Share Analysis

Analysis of major industry segments

Detailed analyses of industry trends

Offers a clear understanding of the competitive landscape and key product segments

About Coherent Market Insights:

Coherent Market Insights is a prominent market research and consulting firm offering action-ready syndicated research reports, custom market analysis, consulting services, and competitive analysis through various recommendations related to emerging market trends, technologies, and potential absolute dollar opportunity.

Contact Us:

Mr. Shah
Coherent Market Insights
1001 4th Ave, #3200
Seattle, WA 98154
Tel: +1-206-701-6702
Email: sales@coherentmarketinsights.com
Visit here for more information: https://theemmasblog.blogspot.com/

Read more:
Machine Learning Market 2020 Professional Survey Report; Industry Growth, Shares, Opportunities And Forecast To 2026 - Surfacing Magazine

Breaking Down COVID-19 Models' Limitations and the Promise of Machine Learning – EnterpriseAI

Every major news outlet offers updates on infections, deaths, testing, and other metrics related to COVID-19. They also link to various models, such as those on HealthData.org, from The Institute for Health Metrics and Evaluation (IHME), an independent global health research center at the University of Washington. Politicians, corporate executives, and other leaders rely on these models (and many others) to make important decisions about reopening local economies, restarting businesses, and adjusting social distancing guidelines. Many of these models possess a shortcoming: they are not built with machine learning and AI.

Predictions and Coincidence

Given the sheer number of scientists and data experts working on predictions about the COVID-19 pandemic, the odds favor someone being right. As with the housing crisis and other calamitous events in the U.S., someone took credit for predicting that exact event. However, it's important to note the number of predictors. It creates a multiple hypothesis testing situation where the higher number of trials increases the chance of a result via coincidence.

This is playing out now with COVID-19, and we will see in the coming months many experts claiming they had special knowledge after their predictions proved true. There is a lot of time, effort, and money invested in projections, and the non-scientists involved are not as eager as the scientists to see validation and proof. AI and machine learning technologies need to step into this space to improve the odds that the right predictions were very educated projections based on data instead of coincidence.

Modeling Meets its Limits

The models predicting infection rates, total mortality, and intensive care capacity are simpler constructs. They are adjusted when the conditions on the ground materially change, such as when states reopen; otherwise, they remain static. The problem with such an approach lies partly in the complexity of COVID-19's different variables. These variables mean the results of typical COVID-19 projections do not have linear relationships with the inputs used to create them. AI comes into play here, due to its ability to ignore assumptions about the ways the predictors building the models might assist or ultimately influence the prediction.

Improving Models with Machine Learning

Machine learning, which is one way of building AI systems, can better leverage more data sets and their interrelated connections. For example, socioeconomic status, gender, age, and health status can all inform these platforms to determine how the virus relates to current and future mortality and infections. It's enabling a granular approach to review the impacts of the virus for smaller groups who might be in age group A and geographic area Z while also having a preexisting condition X that puts them in a higher COVID-19 risk group. Pandemic planners can use AI in a similar way as financial services and retail firms leverage personalized predictions to suggest things for people to buy as well as risk and credit predictions.
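To make that idea concrete, here is a minimal sketch of the kind of granular risk model described above, using scikit-learn. The feature names, toy data, and labels are illustrative assumptions, not the actual models or datasets referenced in this article.

```python
# Hypothetical sketch of a granular COVID-19 risk model of the kind described
# above. Feature names, data, and labels are invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Each row: [age, socioeconomic_index, has_preexisting_condition, regional_case_rate]
X = np.array([
    [72, 0.3, 1, 0.08],
    [34, 0.7, 0, 0.02],
    [58, 0.5, 1, 0.05],
    [25, 0.9, 0, 0.01],
    [81, 0.2, 1, 0.09],
    [46, 0.6, 0, 0.03],
])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = severe outcome observed (toy labels)

model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X, y)

# Predicted risk for a 65-year-old with a preexisting condition in a high-case area
print(model.predict_proba([[65, 0.4, 1, 0.07]])[0, 1])
```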

Community leaders need this detail to make more informed decisions about opening regional economies and implementing plans to better protect high-risk groups. On the testing front, AI is vital for producing quality data that are specific to a city or state and take into account more than just basic demographics, including more complex individual-level features.

Variations in testing rules across the states require adjusting models to account for different data types and structures. Machine learning is well suited to manage these variations. The complexity of modeling testing procedures means true randomization is essential for determining the most accurate estimates of infection rates for a given area.

The Automation Advantage

The pandemic hit with crushing speed, and the scientific community has tried to quickly react. Enabling faster movement with modeling, vaccine development, and drug trials is possible with automated AI and machine learning platforms. Automation removes manual processes from the scientists' day, giving them time to focus on the core of their work, instead of mundane tasks.

According to a study titled "Perceptions of scientific research literature and strategies for reading papers depend on academic career stage," scientists spend a considerable amount of time reading. It states: "Engaging with the scientific literature is a key skill for researchers and students on scientific degree programmes; it has been estimated that scientists spend 23% of total work time reading." Various AI-driven platforms such as COVIDScholar use web scrapers to pull all new virus-related papers, and then machine learning is used to tag subject categories. The results are enhanced research capabilities that can then inform various models for vaccine development and other vital areas. AI is also pulling insights from research papers that are hidden from human eyes, such as the potential for existing medications as possible treatments for COVID-19 conditions.
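As a rough illustration of that tagging step (not COVIDScholar's actual pipeline), a scraped abstract could be assigned a subject category with a simple text classifier; the abstracts, labels, and categories below are invented for the example.

```python
# Illustrative sketch of tagging paper abstracts by subject category,
# in the spirit of the pipeline described above (not COVIDScholar's real code).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

abstracts = [
    "Spike protein binding suggests candidate antiviral compounds.",
    "ICU admission rates modelled across age cohorts.",
    "Phase I trial of an mRNA vaccine shows an immune response.",
    "Transmission dynamics estimated from regional case counts.",
]
labels = ["treatment", "epidemiology", "vaccine", "epidemiology"]

# TF-IDF features feeding a linear classifier: a common, minimal tagging setup
tagger = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
tagger.fit(abstracts, labels)

print(tagger.predict(["Repurposed drug screening against viral proteases."]))
```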

Machine learning and AI can improve COVID-19 modeling as well as vaccine and medication development. The challenges facing scientists, doctors, and policy makers provide an opportunity for AI to accelerate various tasks and eliminate time-consuming practices. For example, researchers at the University of Chicago and Argonne National Laboratory collaborated to use AI to collect and analyze radiology images in order to better diagnose and differentiate the current infection stages for COVID-19 patients. The initiative provides physicians with a much faster way to assess patient conditions and then propose the right treatments for better outcomes. It's a simple example of AI's power to collect readily available information and turn it into usable insights.

Throughout the pandemic, AI is poised to provide scientists with improved models and predictions, which can then guide policymakers and healthcare professionals to make informed decisions. Better data quality through AI also creates strategies for managing a second wave or a future pandemic in the coming decades.

About the Author

Pedro Alves is the founder and CEO of Ople.AI, a software startup that provides an Automated Machine Learning platform to empower business users with predictive analytics.

While pursuing his Ph.D. in Computational Biology from Yale University, Alves started his career as a data scientist and gained experience in predicting, analyzing, and visualizing data in the fields of social graphs, genomics, gene networks, cancer metastasis, insurance fraud, soccer strategies, joint injuries, human attraction, spam detection and topic modeling among others. Realizing that he was learning by observing how algorithms learn from processing different models, Alves discovered that data scientists could benefit from AI that mimics this behavior of learning to learn to learn. Therefore, he founded Ople to advance the field of data science and make AI easy, cheap, and ubiquitous.

Alves enjoys tackling new problems and actively participates in the AI community through projects, lectures, panels, mentorship, and advisory boards. He is extremely passionate about all aspects of AI and dreams of seeing it deliver on its promises; driven by Ople.

View original post here:
Breaking Down COVID-19 Models' Limitations and the Promise of Machine Learning - EnterpriseAI

Machine learning can give healthcare workers a ‘superpower’ – Healthcare IT News

With healthcare organizations around the world leveraging cloud technologies for key clinical and operational systems, the industry is building toward digitally enhanced, data-driven healthcare.

And unstructured healthcare data, within clinical documents and summaries, continues to remain an important source of insights to support clinical and operational excellence.

But there are countless nuggets of important unstructured data, and that data does not lend itself to manual search and manipulation by clinicians. This is where automation comes in.

Arun Ravi, senior product leader at Amazon Web Services, is co-presenting a HIMSS20 Digital presentation on unstructured healthcare data and machine learning, "Accelerating Insights from Unstructured Data, Cloud Capabilities to Support Healthcare."

"There is a huge shift from volume- to value-based care: 54% of hospital CEOs see the transition from volume to value as their biggest financial challenge, and two-thirds of the IT budget goes toward keeping the lights on," Ravi explained.

"Machine learning has this really interesting role to play where we're not necessarily looking to replace the workflows, but give essentially a superpower to people in healthcare and allow them to do their jobs a lot more efficiently."

In terms of how this affects health IT leaders, with value-based care there is a lot of data that is being created. When a patient goes through the various stages of care, there is a lot of documentation, a lot of data that is created.

"But how do you apply the resources that are available to make it much more streamlined, to create that perfect longitudinal view of the patient?" Ravi asked. "A lot of the current IT models lack that agility to keep pace with technology. And again, it's about giving the people in this space a superpower to help them bring the right data forward and use that in order to make really good clinical decisions."

This requires responding to a very new model that has come into play. And this model requires focus on differentiating a healthcare organization's ability to do this work in real time and do it at scale.

"How you incorporate these new technologies into care delivery in a way that not only is scalable but actually reaches your patients and also makes sure your internal stakeholders are happy with it," Ravi said. "And again, you want to reduce the risk, but overall, how do you manage this data well in a way that is easy for you to scale and easy for you to deploy into new areas as the care model continues to shift?"

So why is machine learning important in healthcare?

"If you look at the amount of unstructured data that is created, it is increasing exponentially," said Ravi. "And a lot of that remains untapped. There are 1.2 billion unstructured clinical documents that are actually created every year. How do you extract the insights that are valuable for your application without applying manual approaches to it?"

"Automating all of this really helps a healthcare organization reduce the expense and the time that is spent trying to extract these insights," he said. "And this creates a unique opportunity, not just to innovate but to build new products," he added.

Ravi and his co-presenter, Paul Zhao, senior product leader at AWS, offer an in-depth look into gathering insights from all of this unstructured healthcare data via machine learning and cloud capabilities in their HIMSS20 Digital session. To attend the session, click here.

Twitter: @SiwickiHealthIT
Email the writer: bill.siwicki@himss.org
Healthcare IT News is a HIMSS Media publication.

Read more here:
Machine learning can give healthcare workers a 'superpower' - Healthcare IT News

What is machine learning, and how does it work? – Pew Research Center

At Pew Research Center, we collect and analyze data in a variety of ways. Besides asking people what they think through surveys, we also regularly study things like images, videos and even the text of religious sermons.

In a digital world full of ever-expanding datasets like these, its not always possible for humans to analyze such vast troves of information themselves. Thats why our researchers have increasingly made use of a method called machine learning. Broadly speaking, machine learning uses computer programs to identify patterns across thousands or even millions of data points. In many ways, these techniques automate tasks that researchers have done by hand for years.

Our latest video explainer, part of our Methods 101 series, explains the basics of machine learning and how it allows researchers at the Center to analyze data on a large scale. To learn more about how we've used machine learning and other computational methods in our research, including the analysis mentioned in this video, you can explore recent reports from our Data Labs team.

Excerpt from:
What is machine learning, and how does it work? - Pew Research Center

Big data and machine learning are growing at massive rates. This training explains why – The Next Web

TLDR: The Complete 2020 Big Data and Machine Learning Bundle breaks down understanding and getting started in two of the tech era's biggest new growth sectors.

It's instructive to know just how big Big Data really is. And the reality is that it's now so big that the word big doesn't even effectively do it justice anymore. Right now, humankind is creating 2.5 quintillion bytes of data every day. And it's growing exponentially, with 90 percent of all data created in just the past two years. By 2023, the big data industry will be worth about $77 billion, and that's despite the fact that unstructured data is identified as a problem by 95 percent of all businesses.

Meanwhile, data analysis is also the background of other emerging fields, like the explosion of machine learning projects that have companies like Apple scooping up machine learning upstarts.

The bottom line is that if you understand Big Data, you can effectively write your own ticket salary-wise. You can jump into this fascinating field the right way with the training in The Complete 2020 Big Data and Machine Learning Bundle, on sale now for $39.90, over 90 percent off, from TNW Deals.

This collection includes 10 courses featuring 68 hours of instruction covering the basics of big data, the tools data analysts need to know, how machines are being taught to think for themselves, and the career applications for learning all this cutting-edge technology.

Everything starts with getting a handle on how data scientists corral mountains of raw information. Six of these courses focus on big data training, including close exploration of the essential industry-leading tools that make it possible. If you don't know what Hadoop, Scala or Elasticsearch do, or that Spark Streaming is a quickly developing technology for processing mass data sets in real time, you will after these courses.

Meanwhile, the remaining four courses center on machine learning, starting with a Machine Learning for Absolute Beginners Level 1 course that helps first-timers get a grasp on the foundations of machine learning, artificial intelligence and deep learning. Students also learn about the Python coding language's role in machine learning, as well as how tools like TensorFlow and Keras impact that learning.

It's a training package valued at almost $1,300, but you can start turning Big Data and machine learning into a career with this instruction for just $39.90.

Prices are subject to change.

Excerpt from:
Big data and machine learning are growing at massive rates. This training explains why - The Next Web

Massey University’s Teo Susnjak on how Covid-19 broke machine learning, extreme data patterns, wealth and income inequality, bots and propaganda and…

This week's Top 5 comes from Teo Susnjak, a computer scientist specialising in machine learning. He is a Senior Lecturer in Information Technology at Massey University and is the developer behind GDPLive.

As always, we welcome your additions in the comments below or via email to david.chaston@interest.co.nz.

And if you're interested in contributing the occasional Top 5yourself, contact gareth.vaughan@interest.co.nz.

1. Covid-19 broke machine learning.

As the Covid-19 crisis started to unfold, we started to change our buying patterns. All of a sudden, some of the top purchasing items became: antibacterial soap, sanitiser, face masks, yeast and of course, toilet paper. As the demand for these unexpected items exploded, retail supply chains were disrupted. But they weren't the only ones affected.

Artificial intelligence systems began to break too. The MIT Technology Review reports:

Machine-learning models that run behind the scenes in inventory management, fraud detection, and marketing rely on a cycle of normal human behavior. But what counts as normal has changed, and now some are no longer working.

How bad the situation is depends on whom you talk to. According to Pactera Edge, a global AI consultancy, automation is in a tailspin. Others say they are keeping a cautious eye on automated systems that are just about holding up, stepping in with a manual correction when needed.

What's clear is that the pandemic has revealed how intertwined our lives are with AI, exposing a delicate codependence in which changes to our behavior change how AI works, and changes to how AI works change our behavior. This is also a reminder that human involvement in automated systems remains key. "You can never sit and forget when you're in such extraordinary circumstances," says Cline.

Image source: MIT Technology Review

The extreme data capturing a previously unseen collapse in consumer spending, which feeds the real-time GDP predictor at GDPLive.net, also broke our machine learning algorithms.

2. Extreme data patterns.

The eminent economics and finance historian Niall Ferguson (not to be confused with Neil Ferguson, who also likes to create predictive models) recently remarked that the first month of the lockdown created conditions which took a full year to materialise during the Great Depression.

The chart below shows the consumption data falling off the cliff, generating inputs that broke econometrics and machine learning models.

What we want to see is a rapid V-shaped recovery in consumer spending. The chart below shows the most up-to-date consumer spending trends. Consumer spending has now largely recovered, but is still lower than that of the same period in 2019. One of the key questions will be whether or not this partial rebound will be temporary until the full economic impacts of the 'Great Lockdown' take effect.

Paymark tracks consumer spending on their new public dashboard. Check it out here.

3. Wealth and income inequality.

As the current economic crisis unfolds, GDP will take centre-stage again and all other measures which attempt to quantify wellbeing and social inequalities will likely be relegated until economic stability returns.

When the conversation does return to this topic, AI might have something to contribute.

Effectively addressing income inequality is a key challenge in economics, with taxation being the most useful tool. Although taxation can lead to greater equality, over-taxation discourages working and entrepreneurship, and motivates tax avoidance. Ultimately this leaves fewer resources to redistribute. Striking an optimal balance is not straightforward.

The MIT Technology Review reports that AI researchers at the US business technology company Salesforce implemented machine learning techniques that identify optimal tax policies for a simulated economy.

In one early result, the system found a policy that, in terms of maximising both productivity and income equality, was 16% fairer than a state-of-the-art progressive tax framework studied by academic economists. The improvement over current US policy was even greater.

Image source: MIT Technology Review

It is unlikely that AI will have anything meaningful to contribute towards tackling wealth inequality though. If Walter Scheidel, author of The Great Leveller and professor of ancient history at Stanford is correct, then the only historically effective levellers of inequality are: wars, revolutions, state collapses and...pandemics.

4. Bots and propaganda.

Over the coming months, arguments over what has caused this crisis, whether it was the pandemic or the over-reactive lockdown policies, will occupy much of social media. According to The MIT Technology Review, bots are already being weaponised to fight these battles.

Nearly half of Twitter accounts pushing to reopen America may be bots. Bot activity has become an expected part of Twitter discourse for any politicized event. Across US and foreign elections and natural disasters, their involvement is normally between 10 and 20%. But in a new study, researchers from Carnegie Mellon University have found that bots may account for between 45 and 60% of Twitter accounts discussing covid-19.

To perform their analysis, the researchers studied more than 200 million tweets discussing coronavirus or covid-19 since January. They used machine-learning and network analysis techniques to identify which accounts were spreading disinformation and which were most likely bots or cyborgs (accounts run jointly by bots and humans).

They discovered more than 100 types of inaccurate Covid-19 stories and found that not only were bots gaining traction and accumulating followers, but they accounted for 82% of the top 50 and 62% of the top 1,000 influential retweeters.

Image source: MIT Technology Review

How confident are you that you can tell the difference between a human and a bot? You can test yourself out here. BTW, I failed.

5. Primed to believe bad predictions.

This has been a particularly uncertain time. We humans don't like uncertainty, especially once it reaches a given threshold. We have an amazing brain that is able to perform complex pattern recognition that enables us to predict what's around the corner. When we do this, we resolve uncertainty and our brain releases dopamine, making us feel good. When we cannot make sense of the data and the uncertainty remains unresolved, then stress kicks in.

Writing on this in Forbes, John Jennings points out:

Research shows we dislike uncertainty so much that if we have to choose between a scenario in which we know we will receive electric shocks versus a situation in which the shocks will occur randomly, we'll select the more painful option of certain shocks.

The article goes on to highlight how we tend to react in uncertain times. Aversion to uncertainty drives some of us to try to resolve it immediately through simple answers that align with our existing worldviews. For others, there will be a greater tendency to cluster around like-minded people with similar worldviews as this is comforting. There are some amongst us who are information junkies and their hunt for new data to fill in the knowledge gaps will go into overdrive - with each new nugget of information generating a dopamine hit. Lastly, a number of us will rely on experts who will use their crystal balls to find for us the elusive signal in all the noise, and ultimately tell us what will happen.

The last one is perhaps the most pertinent right now. Since we have a built-in drive that seeks to avoid ambiguity, in stressful times such as this our biology makes us susceptible to accepting bad predictions about the future as gospel, especially if they are generated by experts.

Experts at predicting the future do not have a strong track record considering how much weight is given to them. Their predictive models failed to see the Global Financial Crisis coming, they overstated the economic fallout of Brexit, the climate change models and their forecasts are consistently off-track, and now we have the pandemic models.

Image source: drroyspencer.com

The author suggests that this time "presents the mother of all opportunities to practice learning to live with uncertainty". I would also add that a good dose of humility on the side of the experts, and a good dose of scepticism in their ability to accurately predict the future both from the public and decision makers, would also serve us well.

Excerpt from:
Massey University's Teo Susnjak on how Covid-19 broke machine learning, extreme data patterns, wealth and income inequality, bots and propaganda and...

Machine Learning Takes The Embarrassment Out Of Videoconference Wardrobe Malfunctions – Hackaday

Telecommuters: tired of the constant embarrassment of showing up to video conferences wearing nothing but your underwear? Save the humiliation and all those pesky trips down to HR with Safe Meeting, the new system that uses the power of artificial intelligence to turn off your camera if you forget that casual Friday isn't supposed to be that casual.

The following infomercial is brought to you by [Nick Bild], who says the whole thing is tongue-in-cheek, but we sense a certain degree of necessity is the mother of invention here. It's true that the sudden throng of remote-work newbies certainly increases the chance of videoconference mishaps and the resulting mortification, so whatever the impetus, Safe Meeting seems like a great idea. It uses a Pi cam connected to a Jetson Nano to capture images of you during videoconferences, which are conducted over another camera. The stream is classified by a convolutional neural net (CNN) that determines whether it can see your underwear. If it can, it makes a REST API call to the conferencing app to turn off the camera. The video below shows it in action, and that it douses the camera quickly enough to spare your modesty.
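For a sense of how such a loop might hang together, here is a hypothetical sketch in Python: grab a frame, classify it with a CNN, and call a REST endpoint to disable the camera. The model file, classification threshold, and API URL are assumptions for illustration, not [Nick Bild]'s actual code.

```python
# Hypothetical sketch of a Safe Meeting-style decision loop: grab a frame,
# classify it with a CNN, and hit a (made-up) REST endpoint to disable the camera.
import cv2
import numpy as np
import requests
from tensorflow.keras.models import load_model

model = load_model("underwear_classifier.h5")    # hypothetical trained CNN
CONFERENCE_API = "http://localhost:8080/camera"  # hypothetical conferencing endpoint

cap = cv2.VideoCapture(0)  # the watcher camera (a Pi cam in the original build)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # resize and scale the frame to match the CNN's expected input
    x = cv2.resize(frame, (224, 224)).astype("float32") / 255.0
    prob = float(model.predict(x[np.newaxis, ...], verbose=0)[0][0])
    if prob > 0.5:  # the classifier thinks it can see underwear
        requests.post(CONFERENCE_API, json={"enabled": False})
```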

We shudder to think about how [Nick] developed an underwear-specific training set, but we applaud him for doing so and coming up with a neat application for machine learning. He's been doing some fun work in this space lately, from monitoring where surfaces have been touched to a 6502-based gesture recognition system.

Go here to see the original:
Machine Learning Takes The Embarrassment Out Of Videoconference Wardrobe Malfunctions - Hackaday

Senior Product Manager Payments – Machine Learning job with Zalando | 141053 – The Business of Fashion

As a Senior Product Manager for Revenue Management at Zalando Payments, you and your team of experienced researchers & engineers will work on cutting-edge Machine Learning products to support our end-to-end Payments platform.

WHERE YOUR EXPERTISE IS NEEDED

We celebrate diversity and are committed to building teams that represent a variety of backgrounds, perspectives and skills. All employment is decided on the basis of qualifications, merit and business need.

ABOUT ZALANDO

Zalando is Europe's leading online platform for fashion and lifestyle, connecting customers, brands and partners across 17 markets. We drive digital solutions for fashion, logistics, advertising and research, bringing head-to-toe fashion to more than 23 million active customers through diverse skill-sets, interests and languages our teams choose to use.

Our Payana Team consists of 12 highly motivated and skilled data scientists and research engineers. Our mission is to provide accurate and scalable prediction services for managing the payment risk of every checkout session and each order on the Zalando platform. We work in groups, autonomously developing end-to-end solutions while following an agile process.

Please note that all applications must be completed using the online form - we do not accept applications via email.

View original post here:
Senior Product Manager Payments - Machine Learning job with Zalando | 141053 - The Business of Fashion

Artificial Intelligence, Machine Learning and the Future of Graphs – BBN Times

I am a skeptic of machine learning. There, I've said it. I say this not because I think that machine learning is a poor technology - it's actually quite powerful for what it does - but because machine learning by itself is only half a solution.

To explain this (and the relationship that graphs have to machine learning and AI), it's worth spending a bit of time exploring what exactly machine learning does and how it works. Machine learning isn't actually one particular algorithm or piece of software, but rather the use of statistical algorithms to analyze large amounts of data and from that construct a model that can, at a minimum, classify the data consistently. If it's done right, the reasoning goes, it should then be possible to use that model to classify new information so that it's consistent with what's already known.

Many such systems make use of clustering algorithms - they take a look at data as vectors that can be described in an n-dimensional space. That is to say, there are n different facets that describe a particular thing, such as a thing's color, shape (morphology), size, texture, and so forth. Some of these attributes can be identified by a single binary (does the thing have a tail or not), but in most cases the attributes usually range along a spectrum, such as "does the thing have an exclusively protein-based diet (an obligate carnivore), or does its diet consist of a certain percentage of grains or other plants?". In either case, this means that it is possible to use the attribute as a means to create a number between zero and one (what mathematicians would refer to as a normal orthogonal vector).
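As a minimal sketch of that normalization step (with made-up numbers), min-max scaling maps each raw attribute onto the zero-to-one range described above:

```python
# Minimal sketch: turning raw animal attributes into values between 0 and 1.
# The toy numbers are invented for illustration.
import numpy as np

# columns: body mass (kg), fraction of diet that is meat
raw = np.array([
    [4.5,   0.95],  # cat
    [30.0,  0.80],  # dog
    [250.0, 0.70],  # bear
    [2.0,   0.00],  # rabbit
])

# min-max normalization: rescale each attribute (column) to the [0, 1] range
normalized = (raw - raw.min(axis=0)) / (raw.max(axis=0) - raw.min(axis=0))
print(normalized)
```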

Orthogonality is an interesting concept. In mathematics, two vectors are considered orthogonal if there exists some coordinate system in which you cannot express any information about one vector using the other. For instance, if two vectors are at right angles to one another, then there is one coordinate system where one vector aligns with the x-axis and the other with the y-axis. I cannot express any part of the length of a vector along the y axis by multiplying the length of the vector on the x-axis. In this case they are independent of one another.

This independence is important. Mathematically, there is no correlation between the two vectors - they represent different things, and changing one vector tells me nothing about any other vector. When vectors are not orthogonal, one bleeds a bit (or more than a bit) into another. When two vectors are parallel to one another, they are fully correlated - one vector can be expressed as a multiple of the other. A vector in two dimensions can always be expressed as the "sum" of two orthogonal vectors, a vector in three dimensions can always be expressed as the "sum" of three orthogonal vectors, and so forth.
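A quick numeric illustration of these ideas, assuming nothing beyond NumPy and two toy vectors:

```python
# Numeric illustration of orthogonality, decomposition, and correlation.
import numpy as np

x = np.array([1.0, 0.0])  # aligned with the x-axis
y = np.array([0.0, 1.0])  # aligned with the y-axis
print(np.dot(x, y))       # 0.0 -> orthogonal: no component of one lies along the other

v = np.array([3.0, 4.0])                     # any 2-D vector...
print(np.dot(v, x) * x + np.dot(v, y) * y)   # ...is the sum of its orthogonal projections

p = 2.5 * v  # a parallel vector is fully correlated with v
print(np.dot(p, v) / (np.linalg.norm(p) * np.linalg.norm(v)))  # cosine similarity = 1.0
```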

If you can express a thing as a vector consisting of weighted values, this creates a space where related things will generally be near one another in an n-dimensional space. Cats, dogs, and bears are all carnivores, so in a model describing animals, they will tend to be clustered in a different group than rabbits, voles, and squirrels based upon their dietary habits. At the same time, cats, dogs and bears will each tend to cluster in different groups based upon size, as even a small adult bear will always be larger than the largest cat and almost all dogs. In a two-dimensional space, it becomes possible to carve out a region where you have large carnivores, medium-sized carnivores, small carnivores, large herbivores and so forth.

Machine learning (at its simplest) would recognize that when you have a large carnivore, given a minimal dataset, you're likely to classify that as a bear, because, based upon the two vectors size and diet, every time you are at the upper end of the vectors for those two values, everything you've already seen (your training set) is a bear, while no vectors outside of this range are classified in this way.
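A toy version of such a classifier, assuming the two normalized features above and an invented training set, might look like this:

```python
# Toy classifier over the two features discussed above (size, carnivory),
# using scikit-learn's nearest-centroid model; the training data is made up.
import numpy as np
from sklearn.neighbors import NearestCentroid

# features: [normalized size, fraction of diet that is meat]
X = np.array([
    [0.95, 0.70],  # bear
    [0.90, 0.70],  # bear
    [0.30, 0.80],  # dog
    [0.10, 0.95],  # cat
    [0.05, 0.00],  # rabbit
])
y = np.array(["bear", "bear", "dog", "cat", "rabbit"])

clf = NearestCentroid().fit(X, y)
print(clf.predict([[0.92, 0.75]]))  # a large carnivore comes out as 'bear'
```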

A predictive model with only two independent vectors is going to be pretty useless as a classifier for more than a small set of items. A fox and a dog will be indistinguishable in this model, and for that matter, a small dog such as a Shih Tzu vs. a Maine Coon cat will confuse the heck out of such a classifier. On the flip side, the more variables that you add, the harder it is to ensure orthogonality, and the more difficult it then becomes to determine what exactly is the determining factor(s) for classification, consequently increasing the chances of misclassification. A panda bear is, anatomically and genetically, a bear. Yet because of a chance genetic mutation it is only able to reasonably digest bamboo, making it a herbivore.

You'd need to go to a very fine-grained classifier, one capable of identifying genomic structures, to identify a panda as a bear. The problem here is not in the mathematics but in the categorization itself. Categorizations are ultimately linguistic structures. Normalization functions are themselves arbitrary, and how you normalize will ultimately impact the kind of clustering that forms. When the number of dimensions in the model (even assuming that they are independent, which gets harder to determine with more variables) gets too large, then the size of the hulls for clustering becomes too small, and interpreting what those hulls actually signify becomes too complex.

This is one reason that I'm always dubious when I hear about machine learning models that have thousands or even millions of dimensions. As with attempting to do linear regressions on curves, there are typically only a handful of parameters that drive most of the significant curve fitting, which is ultimately just looking for adequate clustering to identify meaningful patterns - and typically once these patterns are identified, then they are encoded and indexed.

Facial recognition, for instance, is considered a branch of machine learning, but for the most part it works because human faces exist within a skeletal structure that limits the variations of light and dark patterns of the face. This makes it easy to identify the ratios involved between eyes, nose, and mouth, chin and cheekbones, hairlines and other clues, and from that reduce this information to a graph in which the edges reflect relative distances between those parts. This can, in turn, be hashed as a unique number, in essence encoding a face as a graph in a database. Note this pattern. Because the geometry is consistent, rotating a set of vectors to present a consistent pattern is relatively simple (especially for modern GPUs).

Facial recognition then works primarily due to the ability to hash (and consequently compare) graphs in databases. This is the same way that most biometric scans work, taking a large enough sample of datapoints from unique images to encode ratios, then using the corresponding key to retrieve previously encoded graphs. Significantly, there's usually very little actual classification going on here, save perhaps in using coarser meshes to reduce the overall dataset being queried. Indeed, the real speed ultimately is a function of indexing.
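A rough sketch of that encode-and-hash idea, with invented landmark coordinates standing in for detected keypoints:

```python
# Rough sketch: reduce facial landmarks to relative distances, then hash the
# resulting ratio "graph" so it can be indexed and compared in a database.
# The landmark coordinates are invented; real systems use detected keypoints.
import hashlib
import itertools
import numpy as np

landmarks = {
    "left_eye": (30.0, 40.0),
    "right_eye": (70.0, 40.0),
    "nose": (50.0, 60.0),
    "mouth": (50.0, 80.0),
    "chin": (50.0, 100.0),
}

def face_signature(points, precision=2):
    names = sorted(points)
    # edges of the graph: pairwise distances between landmarks
    dists = {(a, b): np.linalg.norm(np.subtract(points[a], points[b]))
             for a, b in itertools.combinations(names, 2)}
    scale = dists[("left_eye", "right_eye")]  # normalize by inter-eye distance
    ratios = [round(d / scale, precision) for _, d in sorted(dists.items())]
    return hashlib.sha256(str(ratios).encode()).hexdigest()

print(face_signature(landmarks))  # a key that could index this face's graph
```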

This is where the world of machine learning collides with that of graphs. I'm going to make an assertion here, one that might get me into trouble with some readers. Right now there's a lot of argument about the benefits and drawbacks of property graphs vs. knowledge graphs. I contend that this argument is moot - it's a discussion about optimization strategies, and the sooner that we get past that argument, the sooner that graphs will make their way into the mainstream.

Ultimately, we need to recognize that the principal value of a graph is to index information so that it does not need to be recalculated. One way to do this is to use machine learning to classify, and semantics to bind that classification to the corresponding resource (as well as to the classifier as an additional resource). If I have a phrase that describes a drink as being nutty or fruity, then these should be identified as classifications that apply to drinks (specifically to coffees, teas or wines). If I come across flavors such as hazelnut, cashew or almond, then these should be correlated with nuttiness, and again stored in a semantic graph.

The reason for this is simple - machine learning without memory is pointless and expensive. Machine learning is fast facing a crisis in that it requires a lot of cycles to train, classify and report. Tie machine learning into a knowledge graph, and you don't have to relearn all the time, and you can also reduce the overall computational costs dramatically. Furthermore, you can make use of inferencing, which are rules that can make use of generalization and faceting in ways that are difficult to pull off in a relational data system. Something is bear-like if it is large, has thick fur, does not have opposable thumbs, has a muzzle, is capable of extended bipedal movement and is omnivorous.

What's more, the heuristic itself is a graph, and as such is a resource that can be referenced. This is something that most people fail to understand about both SPARQL and SHACL. They are each essentially syntactic sugar on top of graph templates. They can be analyzed, encoded and referenced. When a new resource is added into a graph, the ingestion process can and should run against such templates to see if they match, then insert or delete corresponding additional metadata as the data is folded in.
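As a sketch of that idea, assuming an invented ex: vocabulary and the rdflib library, the "bear-like" heuristic from a few paragraphs back can be expressed as a reusable SPARQL ASK template and run against a small graph during ingestion:

```python
# Sketch of treating a heuristic as a reusable graph template, using rdflib.
# The ex: vocabulary and the facts below are invented for illustration.
from rdflib import Graph

g = Graph()
g.parse(data="""
@prefix ex: <http://example.org/> .
ex:grizzly ex:size "large" ;
           ex:fur "thick" ;
           ex:diet "omnivore" ;
           ex:locomotion "extended-bipedal" .
""", format="turtle")

# The 'bear-like' heuristic expressed as a SPARQL ASK template.
BEAR_LIKE = """
PREFIX ex: <http://example.org/>
ASK {
    ?thing ex:size "large" ;
           ex:fur "thick" ;
           ex:diet "omnivore" ;
           ex:locomotion "extended-bipedal" .
}
"""

# True: the template matches, so an ingestion step could tag ex:grizzly as bear-like
print(g.query(BEAR_LIKE).askAnswer)
```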

Additionally, one of those pieces of metadata may very well end up being an identifier for the heuristic itself, creating what's often termed a reverse query. Reverse queries are significant because they make it possible to determine which family of classifiers was used to make decisions about how an entity is classified, and from that ascertain the reasons why a given entity was classified a certain way in the first place.

This gets back to one of the biggest challenges seen in both AI and machine learning - understanding why a given resource was classified. When you have potentially thousands of facets that may have potentially been responsible for a given classification, the ability to see causal chains can go a long way towards making such a classification system repeatable and determining whether the reason for a given classification was legitimate or an artifact of the data collection process. This is not something that AI by itself is very good at, because it's a contextual problem. In effect, semantic graphs (and graphs in general) provide a way of making recommendations self-documenting, and hence making it easier to trust the results of AI algorithms.

One of the next major innovations that I see in graph technology is actually a mathematical change. Most graphs that exist right now can be thought of as collections of fixed vectors, entities connected by properties with fixed values. However, it is possible (especially when using property graphs) to create properties that are essentially parameterized over time (or other variables) or that may be passed as functional results from inbound edges. This is, in fact, an alternative approach to describing neural networks (both physical and artificial), and it has the effect of being able to make inferences based upon changing conditions over time.

This approach can be seen as one form of modeling everything from the likelihood of events happening given other events (Bayesian trees) or modeling complex cost-benefit relationships. This can be facilitated even today with some work, but the real value will come with standardization, as such graphs (especially when they are closed network circuits) can in fact act as trainable neuron circuits.

It is also likely that graphs will play a central role in Smart Contracts, "documents" that not only specify partners and conditions but also can update themselves transactionally, can trigger events and can spawn other contracts and actions. These do not specifically fall within the mandate of "artificial intelligence" per se, but the impact that smart contracts will have on business and society in general will be transformative at the very least.

It's unlikely that this is the last chapter on graphs, either (though it is the last in the series about the State of the Graph). Graphs, ultimately, are about connections and context. How do things relate to one another? How are they connected? What do people know, and how do they know it? They underlie contracts and news, research and entertainment, history and how the future is shaped. Graphs promise a means of generating knowledge, creating new models, and even learning. They remind us that, even as forces try to push us apart, we are all ultimately only a few hops from one another in many, many ways.

I'm working on a book called Context, hopefully out by Summer 2020. Until then, stay connected.

Read more from the original source:
Artificial Intelligence, Machine Learning and the Future of Graphs - BBN Times