Why 2020 will be the Year of Automated Machine Learning – Gigabit Magazine

Data is the fuel that powers ongoing digital transformation efforts, and businesses everywhere are looking for ways to derive as much insight as possible from it. The accompanying increased demand for advanced predictive and prescriptive analytics has, in turn, led to a call for more data scientists proficient with the latest artificial intelligence (AI) and machine learning (ML) tools.

But such highly skilled data scientists are expensive and in short supply. In fact, they're such a precious resource that the phenomenon of the "citizen data scientist" has recently arisen to help close the skills gap. A complementary role rather than a direct replacement, citizen data scientists lack specific advanced data science expertise. However, they are capable of generating models using state-of-the-art diagnostic and predictive analytics. And this capability is partly due to the advent of accessible new technologies such as automated machine learning (AutoML) that now automate many of the tasks once performed by data scientists.

Algorithms and automation

According to a recent Harvard Business Review article, "Organisations have shifted towards amplifying predictive power by coupling big data with complex automated machine learning. AutoML, which uses machine learning to generate better machine learning, is advertised as affording opportunities to democratise machine learning by allowing firms with limited data science expertise to develop analytical pipelines capable of solving sophisticated business problems."

Comprising a set of algorithms that automate the writing of other ML algorithms, AutoML automates the end-to-end process of applying ML to real-world problems. By way of illustration, a standard ML pipeline is made up of the following: data pre-processing, feature extraction, feature selection, feature engineering, algorithm selection, and hyper-parameter tuning. But the considerable expertise and time it takes to implement these steps means there's a high barrier to entry.
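
To make those steps concrete, here is a minimal sketch of a hand-built pipeline in scikit-learn; the dataset and hyper-parameter grid are illustrative assumptions, and every explicit choice below is one that AutoML aims to make for you:

```python
# A hand-built ML pipeline: each step below is something AutoML automates.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

pipeline = Pipeline([
    ("scale", StandardScaler()),          # data pre-processing
    ("select", SelectKBest(f_classif)),   # feature selection
    ("model", RandomForestClassifier()),  # algorithm selection
])

# Hyper-parameter tuning: grid values chosen by hand, by trial and error.
search = GridSearchCV(pipeline, {
    "select__k": [5, 10, 20],
    "model__n_estimators": [100, 300],
}, cv=5)
search.fit(X_train, y_train)
print(search.best_params_, search.score(X_test, y_test))
```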

AutoML removes some of these constraints. Not only does it significantly reduce the time it would typically take to implement an ML process under human supervision, it can also often improve the accuracy of the model in comparison to hand-crafted models, trained and deployed by humans. In doing so, it offers organisations a gateway into ML, as well as freeing up the time of ML engineers and data practitioners, allowing them to focus on higher-order challenges.

Overcoming scalability problems

The trend for combining ML with Big Data for advanced data analytics began back in 2012, when deep learning became the dominant approach to solving ML problems. This approach heralded a wealth of new software, tooling, and techniques that altered both the workload and the workflow associated with ML on a large scale. Entirely new ML toolsets, such as TensorFlow and PyTorch, were created, and practitioners increasingly turned to graphics processing units (GPUs) to accelerate their work.

Until this point, companies' efforts had been hindered by the scalability problems associated with running ML algorithms on huge datasets. Now, though, they were able to overcome these issues. By quickly developing sophisticated internal tooling capable of building world-class AI applications, the BigTech powerhouses soon overtook their Fortune 500 peers when it came to realising the benefits of smarter data-driven decision-making and applications.

Insight, innovation and data-driven decisions

AutoML represents the next stage in ML's evolution, promising to help non-tech companies access the capabilities they need to quickly and cheaply build ML applications.

In 2018, for example, Google launched its Cloud AutoML. Based on Neural Architecture Search (NAS) and transfer learning, it was described by Google executives as having the potential to "make AI experts even more productive, advance new fields in AI, and help less-skilled engineers build powerful AI systems they previously only dreamed of."

The one downside to Google's AutoML is that it's a proprietary algorithm. There are, however, a number of alternative open-source AutoML libraries, such as AutoKeras, developed by researchers at Texas A&M University and built around its own NAS algorithm.
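
As a rough illustration of how such libraries hide the search machinery behind a few calls, here is a minimal AutoKeras sketch (API as of the AutoKeras 1.0 releases current in early 2020; the trial count and epochs are placeholder values):

```python
# Minimal AutoKeras usage: the library searches over candidate
# architectures (NAS) instead of requiring a hand-designed network.
import autokeras as ak
from tensorflow.keras.datasets import mnist

(x_train, y_train), (x_test, y_test) = mnist.load_data()

# max_trials bounds how many candidate architectures the search explores.
clf = ak.ImageClassifier(max_trials=3, overwrite=True)
clf.fit(x_train, y_train, epochs=5)
print(clf.evaluate(x_test, y_test))
```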

Technological breakthroughs such as these have given companies the capability to easily build production-ready models without the need for expensive human resources. By leveraging AI, ML, and deep learning capabilities, AutoML gives businesses across all industries the opportunity to benefit from data-driven applications powered by statistical models - even when advanced data science expertise is scarce.

With organisations increasingly reliant on citizen data scientists, 2020 is likely to be the year that enterprise adoption of AutoML starts to become mainstream. Its ease of access will compel business leaders to finally open the black box of ML, thereby elevating their knowledge of its processes and capabilities. AI and ML tools and practices will become ever more ingrained in businesses' everyday thinking and operations as they become more empowered to identify the projects whose insight will drive better decision-making and innovation.

By Senthil Ravindran, EVP and global head of cloud transformation and digital innovation, Virtusa

View original post here:
Why 2020 will be the Year of Automated Machine Learning - Gigabit Magazine - Technology News, Magazine and Website

How is AI and machine learning benefiting the healthcare industry? – Health Europa

In order to help build increasingly effective care pathways in healthcare, modern artificial intelligence technologies must be adopted and embraced. Events such as the AI & Machine Learning Convention are essential in providing medical experts around the UK access to the latest technologies, products and services that are revolutionising the future of care pathways in the healthcare industry.

AI has the potential to save the lives of current and future patients and is something that is starting to be seen across healthcare services across the UK. Looking at diagnostics alone, there have been large scale developments in rapid image recognition, symptom checking and risk stratification.

AI can also be used to personalise health screening and treatments for cancer, not only benefiting the patient but clinicians too, enabling them to make the best use of their skills, informing decisions and saving time.

The potential impact of AI on the NHS is clear, so much so that NHS England is setting up a national artificial intelligence laboratory to enhance patient care and research.

The Health Secretary, Matt Hancock, commented that AI had enormous power to improve care, save lives and ensure that doctors had more time to spend with patients, so he pledged £250m to boost the role of AI within the health service.

The AI and Machine Learning Convention is part of MediWeek, the largest healthcare event in the UK. As a new feature of the Medical Imaging Convention and the Oncology Convention, the AI and Machine Learning expo offers an effective CPD-accredited education programme.

Hosting over 50 professionally led seminars, the lineup includes leading artificial intelligence and machine learning experts such as NHS England's Dr Minai Bakhai, Faculty of Clinical Informatics Professor Jeremy Wyatt, and Professor Claudia Pagliari from the University of Edinburgh.

Other speakers in the seminar programme come from leading organisations such as the University of Oxford, King's College London, and the School of Medicine at the University of Nottingham.

The event takes place at the National Exhibition Centre, Birmingham, on 17 and 18 March 2020. Tickets to the AI and Machine Learning Convention are free and gain you access to the other seven shows within MediWeek.

Health Europa is proud to partner with the AI and Machine Learning Convention. Click here to get your tickets.

Read the original:
How is AI and machine learning benefiting the healthcare industry? - Health Europa

Inspur Re-Elected as Member of SPEC OSSC and Chair of SPEC Machine Learning – Yahoo Finance

The international evaluation agency Standard Performance Evaluation Corporation (SPEC) recently finalized the election of new Open System Steering Committee (OSSC) executive members, which include Inspur, Intel, AMD, IBM, Oracle and three other companies.

It is worth noting that Inspur, a re-elected OSSC member, was also re-elected as the chair of the SPEC Machine Learning (SPEC ML) working group. The ML benchmark development plan proposed by Inspur has been approved by members; it aims to provide users with a standard for evaluating machine learning computing performance.

SPEC is a global, authoritative third-party application performance testing organization established in 1988. It aims to establish and maintain a series of performance, function, and energy consumption benchmarks, and provides important reference standards for users evaluating the performance and energy efficiency of computing systems. The organization consists of 138 well-known technology companies, universities and research institutions, such as Intel, Oracle, NVIDIA, Apple, Microsoft, Inspur, Berkeley and Lawrence Berkeley National Laboratory, and its test standards have become important indicators for many users evaluating overall computing performance.

The OSSC executive committee is the permanent body of the SPEC OSG (short for Open System Group, the earliest and largest committee established by SPEC). It is responsible for supervising and reviewing the daily work of OSG's major technical groups, major issues, additions and deletions of members, the direction of research and decisions on testing standards. The OSSC executive committee also manages the development and maintenance of the SPEC CPU, SPEC Power, SPEC Java, SPEC Virt and other benchmarks.

Machine learning is an important direction in AI development. Different computing accelerator technologies such as GPUs, FPGAs and ASICs, and different AI frameworks such as TensorFlow and PyTorch, provide customers with a rich marketplace of options. However, the next important thing for the customer to consider is how to evaluate the computing efficiency of various AI computing platforms. Both enterprises and research institutions require a set of benchmarks and methods to effectively measure performance and find the right solution for their needs.

In the past year, Inspur has done much to advance development of the SPEC ML standard's specific components, contributing test models, architectures, use cases, methods and more, which have been duly acknowledged by the SPEC organization and its members.

Joe Qiao, General Manager of Inspur's Solution and Evaluation Department, believes that SPEC ML can provide an objective comparison standard for AI/ML applications, which will help users choose a computing system that best meets their application needs. It also provides a unified measurement standard for manufacturers to improve their technologies and solution capabilities, advancing the development of the AI industry.

About Inspur

Inspur is a leading provider of data center infrastructure, cloud computing, and AI solutions, ranking among the world's top 3 server manufacturers. Through engineering and innovation, Inspur delivers cutting-edge computing hardware design and extensive product offerings to address important technology arenas like open computing, cloud data center, AI and deep learning. Performance-optimized and purpose-built, our world-class solutions empower customers to tackle specific workloads and real-world challenges. To learn more, please go to http://www.inspursystems.com.

View source version on businesswire.com: https://www.businesswire.com/news/home/20200221005123/en/

Contacts

Media: Fiona Liu, Liuxuan01@inspur.com

Read more:
Inspur Re-Elected as Member of SPEC OSSC and Chair of SPEC Machine Learning - Yahoo Finance

Overview of causal inference in machine learning – Ericsson

In a major operator's network control center, complaints are flooding in. The network is down across a large US city; calls are getting dropped and critical infrastructure is slow to respond. Pulling up the system's event history, the manager sees that new 5G towers were installed in the affected area today.

Did installing those towers cause the outage, or was it merely a coincidence? In circumstances such as these, being able to answer this question accurately is crucial for Ericsson.

Most machine learning-based data science focuses on predicting outcomes, not understanding causality. However, some of the biggest names in the field agree it's important to start incorporating causality into our AI and machine learning systems.

Yoshua Bengio, one of the world's most highly recognized AI experts, explained in a recent Wired interview: "It's a big thing to integrate [causality] into AI. Current approaches to machine learning assume that the trained AI system will be applied on the same kind of data as the training data. In real life it is often not the case."

Yann LeCun, a recent Turing Award winner, shares the same view, tweeting: "Lots of people in ML/DL [deep learning] know that causal inference is an important way to improve generalization."

Causal inference and machine learning together can address one of the biggest problems facing machine learning today: that a lot of real-world data is not generated in the same way as the data we use to train AI models. This means that machine learning models often aren't robust enough to handle changes in the input data type, and can't always generalize well. By contrast, causal inference explicitly overcomes this problem by considering what might have happened when faced with a lack of information. Ultimately, this means we can utilize causal inference to make our ML models more robust and generalizable.

When humans rationalize the world, we often think in terms of cause and effect: if we understand why something happened, we can change our behavior to improve future outcomes. Causal inference is a statistical tool that enables our AI and machine learning algorithms to reason in similar ways.

Let's say we're looking at data from a network of servers. We're interested in understanding how changes in our network settings affect latency, so we use causal inference to proactively choose our settings based on this knowledge.

The gold standard for inferring causal effects is the randomized controlled trial (RCT), or A/B test. In an RCT, we split a population of individuals into two groups, treatment and control, administering treatment to one group and nothing (or a placebo) to the other and measuring the outcome of both groups. Assuming that the treatment and control groups aren't too dissimilar, we can infer whether the treatment was effective based on the difference in outcome between the two groups.

However, we can't always run such experiments. Flooding half of our servers with lots of requests might be a great way to find out how response time is affected, but if they're mission-critical servers, we can't go around performing DDoS attacks on them. Instead, we rely on observational data: studying the differences between servers that naturally get a lot of requests and those with very few requests.

There are many ways of answering this question. One of the most popular approaches is Judea Pearl's technique for using statistics to make causal inferences. In this approach, we'd take a model or graph that includes measurable variables that can affect one another.

To use this graph, we must assume the Causal Markov Condition. Formally, it says that, conditional on the set of all its direct causes, a node is independent of all the variables which are not direct causes or direct effects of that node. Simply put, it is the assumption that this graph captures all the real relationships between the variables.

Another popular method for inferring causes from observational data is Donald Rubin's potential outcomes framework. This method does not explicitly rely on a causal graph, but still assumes a lot about the data, for example, that there are no additional causes besides the ones we are considering.

For simplicity, our data contains three variables: a treatment x, an outcome y, and a covariate z. We want to know if having a high number of server requests affects the response time of a server.

In our example, the number of server requests is determined by the memory value: higher memory usage means the server is less likely to get fed requests. More precisely, the probability of having a high number of requests is equal to 1 minus the memory value (i.e., P(x=1) = 1 - z, where P(x=1) is the probability that x is equal to 1). The response time of our system is determined by the following equation (or hypothetical model):

y = 1x + 5z + ε     (1)

where ε is the error, that is, the deviation from the expected value of y given the values of x and z, which depends on other factors not included in the model. Our goal is to understand the effect of x on y via observations of the memory value, number of requests, and response times of a number of servers, with no access to this equation.
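
To make the setup concrete, here is a small simulation of this data-generating process (a sketch; the sample size and error distribution are assumptions the article does not specify):

```python
# Simulate the hypothetical server fleet: z = memory value, x = whether the
# server gets a high number of requests, y = response time.
# Follows P(x=1) = 1 - z and y = 1*x + 5*z + error, i.e. equation (1).
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

z = rng.uniform(0.0, 1.0, n)                          # covariate: memory value
x = (rng.uniform(0.0, 1.0, n) < 1.0 - z).astype(int)  # treatment assignment
y = 1.0 * x + 5.0 * z + rng.normal(0.0, 1.0, n)       # outcome: response time
```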

There are two possible assignments (treatment and control) and an outcome. Given a random group of subjects and a treatment, each subject i has a pair of potential outcomes: Y_i(0) and Y_i(1), the outcomes under control and treatment respectively. However, only one outcome is observed for each subject, the outcome under the actual treatment received: Y_i = x·Y_i(1) + (1 - x)·Y_i(0). The opposite potential outcome is unobserved for each subject and is therefore referred to as a counterfactual.

For each subject, the effect of treatment is defined to be Y_i(1) - Y_i(0). The average treatment effect (ATE) is defined as the average difference in outcomes between the treatment and control groups:

E[Y_i(1) - Y_i(0)]

Here, E denotes an expectation over values of Y_i(1) - Y_i(0) for each subject i, i.e., the average value across all subjects. In our network example, a correct estimate of the average treatment effect would lead us to the coefficient in front of x in equation (1).

If we try to estimate this by directly subtracting the average response time of servers with x=0 from the average response time of servers with x=1, we get an estimate of the ATE of 0.177. This happens because our treatment and control groups are not inherently directly comparable. In an RCT, we know that the two groups are similar because we chose them ourselves. When we have only observational data, the other variables (such as the memory value in our case) may affect whether or not one unit is placed in the treatment or control group. We need to account for this difference in the memory value between the treatment and control groups before estimating the ATE.
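
Continuing the simulation sketch above, the naive estimate is a one-liner; its exact value depends on the assumed distributions (the article's data gives 0.177), but either way it lands well away from the true effect of 1:

```python
# Naive ATE estimate: difference in mean outcomes between the groups.
# Biased, because z drives both group membership and the outcome.
naive_ate = y[x == 1].mean() - y[x == 0].mean()
print(naive_ate)  # far from the true treatment effect of 1
```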

One way to correct this bias is to compare individual units in the treatment and control groups with similar covariates. In other words, we want to match subjects that are equally likely to receive treatment.

The propensity score e_i for subject i is defined as:

e_i = P(x = 1 | z = z_i), z_i ∈ [0, 1]

or the probability that x is equal to 1 (the unit receives treatment), given that we know its covariate is equal to the value z_i. Creating matches based on the probability that a subject will receive treatment is called propensity score matching. To find the propensity score of a subject, we need to predict how likely the subject is to receive treatment based on their covariates.

The most common way to calculate propensity scores is through logistic regression:
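
The original post renders this step as a formula; here is a minimal sketch of the same step on the simulated data above, using scikit-learn (the library choice is an assumption of this sketch):

```python
# Estimate propensity scores e_i = P(x=1 | z=z_i) with logistic regression.
from sklearn.linear_model import LogisticRegression

logit = LogisticRegression().fit(z.reshape(-1, 1), x)
propensity = logit.predict_proba(z.reshape(-1, 1))[:, 1]
```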

Now that we have calculated propensity scores for each subject, we can do basic matching on the propensity score and calculate the ATE exactly as before. Running propensity score matching on the example network data gets us an estimate of 1.008!
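
One way to sketch that matching step, continuing from the code above with simple one-to-one nearest-neighbor matching (no calipers or replacement restrictions, so cruder than a production implementation; because the treatment effect in this model is constant, the effect on the treated coincides with the ATE):

```python
# Match each treated unit to the control unit with the closest propensity
# score, then average the outcome differences.
from sklearn.neighbors import NearestNeighbors

treated, control = x == 1, x == 0
nn = NearestNeighbors(n_neighbors=1).fit(propensity[control].reshape(-1, 1))
_, idx = nn.kneighbors(propensity[treated].reshape(-1, 1))

ate = (y[treated] - y[control][idx.ravel()]).mean()
print(ate)  # should land near the true effect of 1
```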

We were interested in understanding the causal effect of the binary treatment variable x on the outcome y. If we find that the ATE is positive, an increase in x results in an increase in y. Similarly, a negative ATE says that an increase in x will result in a decrease in y.

This could help us understand the root cause of an issue or build more robust machine learning models. Causal inference gives us tools to understand what it means for some variables to affect others. In the future, we could use causal inference models to address a wider scope of problems both in and out of telecommunications so that our models of the world become more intelligent.

Special thanks to the other team members of GAIA working on causality analysis: Wenting Sun, Nikita Butakov, Paul Mclachlan, Fuyu Zou, Chenhua Shi, Lule Yu and Sheyda Kiani Mehr.

If you're interested in advancing this field with us, join our worldwide team of data scientists and AI specialists at GAIA.

In this Wired article, Turing Award winner Yoshua Bengio shares why deep learning must begin to understand the "why" before it can replicate true human intelligence.

In this technical overview of causal inference in statistics, find out what's needed to evolve AI from traditional statistical analysis to causal analysis of multivariate data.

This journal essay from 1999 offers an introduction to the Causal Markov Condition.

Originally posted here:
Overview of causal inference in machine learning - Ericsson

ReversingLabs Releases First Threat Intelligence Platform with Explainable Machine Learning to Automate Incident Response Processes with Verified…

Advances to ReversingLabs Titanium Platform Deliver Transparent and Trusted Malware Insights that Address Security Skills Gap

CAMBRIDGE, Mass., Feb. 18, 2020 (GLOBE NEWSWIRE) -- ReversingLabs, a leading provider of explainable threat intelligence solutions, today announced new and enhanced capabilities for its Titanium Platform, including new machine learning algorithm models, explainable classification, and out-of-the-box security information and event management (SIEM) plug-ins, security orchestration, automation and response (SOAR) playbooks, and MITRE ATT&CK Framework support. Introducing a new level of threat intelligence, the Titanium Platform now delivers explainable insights and verification that better support humans in the incident response decision-making process. ReversingLabs has been named an ML-Based Machine Learning Binary Analysis Sample Provider within Gartner's 2019 Emerging Technologies and Trends Impact Radar: Security1. ReversingLabs will showcase its new Titanium Platform at RSA 2020, February 24-28 in San Francisco, Moscone Center, Booth #3311 in the South Expo.

"As digital initiatives continue to gain momentum, companies are exposed to an increasing number of threat vectors fueled by a staggering volume of data that contains countless malware-infected files and objects, demanding new requirements from the IT teams that support them," said Mario Vuksan, CEO and Co-founder, ReversingLabs. "It's no wonder security operations teams struggle to manage incident response. Combine the complexity of threats with blind black box detection engine verdicts, and a lack of analyst experience, skill and time, and teams are crippled by their inability to effectively understand and take action against these increased risks. The current and future threat landscape requires a different approach to threat intelligence and detection that automates time-intensive threat research efforts with the level of detail analysts need to better understand events, improve productivity and refine their skills."

According to Gartner's Emerging Technologies and Trends Impact Radar: Security, Gartner estimates that ML-based file analysis has grown 35 percent over the past year in security technology products, with endpoint products being first movers to adopt this new technology.2

Black Box to Glass Box Verdicts

Because signature, AI and machine learning-based threat classifications from black box detection engines come with little to no context, security analysts are left in the dark as to why a verdict was determined, negatively impacting their ability to verify threats, take informed action and extend critical job skills. That lack of context and transparency propelled ReversingLabs to develop a new glass box approach to threat intelligence and detection designed to better inform human understanding first. Security operations teams using the ReversingLabs Titanium Platform with patent-pending Explainable Machine Learning can automatically inspect, unpack, and classify threats as before, but with the added capability of verifying these threats in context with transparent, easy-to-understand results. By applying new machine learning algorithms to identify threat indicators, ReversingLabs enables security teams to more quickly and accurately identify and classify unknown threats.

Key Features

Available now with Explainable Machine Learning, ReversingLabs' platform inspires confidence in threat detection verdicts amongst security operations teams through a transparent and context-aware diagnosis, automating manual threat research with results humans can interpret to take informed action on zero-day threats, while simultaneously fueling continuous education and the upskilling of analysts. ReversingLabs Explainable Machine Learning is based on machine learning-based binary file analysis, providing high-speed analysis, feature extraction and classification that can be used to enhance telemetry provided to incident response analysts.

"Effective machine learning results depend on having the right volume, structure, and quality of data to convert information into a relevant finding," said Vijay Doradla, Chief Business Officer at SparkCognition. "With access to ReversingLabs' extensive cloud repository, we have the breadth, depth, and scale of data necessary to train our machine learning models. Accurate classification and detection of threats fuels the machine learning-driven predictive security model leveraged in our DeepArmor next-generation endpoint protection platform."

1, 2 Gartner, Emerging Technologies and Trends Impact Radar: Security, Lawrence Pingree, et al, 13 November 2019

About ReversingLabs

ReversingLabs helps Security Operations Center (SOC) teams identify, detect and respond to the latest attacks, advanced persistent threats and polymorphic malware by providing explainable threat intelligence into destructive files and objects. ReversingLabs' technology is used by the world's most advanced security vendors and deployed across all industries searching for a better way to get at the root of the web, mobile, email, cloud, app development and supply chain threat problem, of which files and objects have become major risk contributors.

ReversingLabs Titanium Platform provides broad integration support with more than 4,000 unique file and object formats, speeds detection of malicious objects through automated static analysis, prioritizing the highest risks with actionable detail in only .005 seconds. With unmatched breadth and privacy, the platform accurately detects threats through explainable machine learning models, leveraging the largest repository of malware in the industry, containing more than 10 billion files and objects. Delivering transparency and trust, thousands of human readable indicators explain why a classification and threat verdict was determined, while integrating at scale across the enterprise with connectors that support existing SIEM, SOAR, threat intelligence platform and sandbox investments, reducing incident response time for SOC analysts, while providing high priority and detailed threat information for hunters to take quick action. Learn more at https://www.reversinglabs.com, or connect on LinkedIn or Twitter.

Media Contact: Jennifer Balinski, Guyer Group, jennifer.balinski@guyergroup.com

Go here to read the rest:
ReversingLabs Releases First Threat Intelligence Platform with Explainable Machine Learning to Automate Incident Response Processes with Verified...

Euro machine learning startup plans NYC rental platform, the punch list goes digital & other proptech news – The Real Deal

New York City rentals (Credit: iStock)

Digital marketplace gets a boost

CRE digital marketplace CREXi nabbed $30 million in a Series B round led by Mitsubishi Estate Company, Industry Ventures, and Prudence Holdings. The new funds will help them build out a subscription service aimed at brokers and an analytics service that highlights trends in the industry. The company wants to become the go-to platform for every step in the CRE process, from marketing to sale.

Dude, where's my tech-fueled hotel chain?

Ashton Kutcher's Sound Ventures and travel-focused VC firm Thayer Ventures have gotten behind hospitality startup Life House, leading a $30 million Series B round. The company runs a boutique hotel chain as well as a management platform, which gives hotel owners access to AI-based pricing and automated financial accounting. Life House has over 800 rooms across cities such as Miami and Denver, with plans to expand to 25 hotels by next year.

Working from home

As the deadly coronavirus outbreak becomes more serious with every hour, WeWork said it is temporarily closing 55 locations in China. The struggling co-working company encouraged employees at these sites to work from home or in private rooms to keep from catching the virus. Also this week, the startup closed a three-year deal to provide office space for 250 employees of gym membership company Gympass, per Reuters. WeWork's owner SoftBank is a minority investor in Gympass, so it looks like Masa Son's using some parts of his portfolio to prop up others.

300,000

That's how many listings rental platform/flatmate matcher Badi has across London, Berlin, Madrid, and Barcelona. Barcelona-based Badi claims to use machine-learning technology to match tenants and rooms, and plans on hopping across the pond to New York City within the year. It's an interesting market for the company to enter. Though most people use a platform like StreetEasy to find an apartment with a traditional landlord, few established companies have cracked the sublet game without running afoul of New York City's rental laws. In effect, Badi would likely be competing with Facebook groups such as Gypsy Housing plus wanna-be-my-roommate startups like Roomi and SpareRoom. Badi is backed by Goodwater Capital, Target Global, Spark Capital and Mangrove Capital. The firm has raised over $45 million in VC funding since its founding in 2015.

Pink slips at Compass

Uh oh, yet another SoftBank-funded startup is laying off employees. Up to 40 employees of tech brokerage Compass in the IT, marketing and M&A departments will be getting pink slips this week. Sources told E.B. Solomont that the nationwide cuts are part of a reorganization to introduce a new Agent Experience Team that will take over onboarding and training new agents from former employees. It's a small number of cuts compared to the 18,000 employees Compass has across the U.S., but it isn't a great look in today's business climate.

Getting ready to move

As SoftBank-backed hospitality startup Oyo continues to cut back, their arch-nemesis RedDoorz just launched a new co-living program in Indonesia. They're targeting young professionals and college students with the KoolKost service, dishing out shared units with flexible leases and free WiFi. Their main business, like Oyo's, is running a network of budget hotels across Southeast Asia. We'll see if co-living will help them avoid some of Oyo's profitability problems.

Homes on Olympus

It's no secret that it can be a pain to figure out a place to live when work needs you to move to a new city for a bit. You can take your pick between bland corporate housing and Airbnbs designed for quick vacations. That's where Zeus comes in (not with a thunderbolt but with a corporate housing platform).

Zeus signs two-year minimum leases with landlords, furnishes the apartments with couches meant to look chic, and rents them out to employees for 30 days or more. They currently manage around 2,000 furnished homes, with the goal of filling a newly added apartment within 10 days.

Corporate housing is a competitive space, with startups like Domio and Sonder also trying to lure in business travelers. You'd think that Zeus would have to go one-on-one with Airbnb, but the two companies actually have a partnership. The short-term rental giant lists Zeus properties on its platform and invested in the company as part of a $55 million Series B round last month. They're trying to keep competition close.

Punch lists go digital

Home renovations platform Punch List just scored $4 million in a seed round led by early-stage VC funds Bling Capital and Bedrock Capital, per Crunchbase. The platform lets homeowners track project progress and gives contractors a place to send digital invoices, all on a newly launched app. The company wants to make the frustrating process of remodeling as digital as possible.

Go here to read the rest:
Euro machine learning startup plans NYC rental platform, the punch list goes digital & other proptech news - The Real Deal

Itiviti Partners With AI Innovator Imandra to Integrate Machine Learning Into Client Onboarding and Testing Tools – PRNewswire

NEW YORK, Jan. 30, 2020 /PRNewswire/ -- Itiviti, a leading technology and service provider to financial institutions worldwide, has signed an exclusive partnership agreement with Imandra Inc., the AI pioneer behind the Imandra automated reasoning engine.

Imandra's technology will initially be applied to improving the onboarding process for our clients to Itiviti's Managed FIX global connectivity platform, with further plans to swiftly expand the AI capabilities across a number of our software solutions and services.

Imandra is the world leader in cloud-scale automated reasoning and has pioneered scalable symbolic AI for financial algorithms. Imandra's technology brings deep advances relied upon in safety-critical industries such as avionics and autonomous vehicles to the financial markets. Imandra is relied upon by top investment banks for the design, testing and governance of highly regulated trading systems. In 2019, the company expanded outside financial services and is currently under contract with the US Department of Defense for applications of Imandra to safety-critical algorithms.

"Partnerships are integral to Itiviti's overall strategy, by partnering with cutting edge companies like Imandra we can remain at the forefront of technology innovation and continue to develop quality solutions to support our clients. Generally, client onboarding has been a neglected area within the industry for many years, but we believe working with Imandra we can raise the level of automation for testing and QA, while significantly reducing onboarding bottlenecks for our clients. Other areas we are actively exploring to benefit from AI are within the Compliance and Analytics space. We are very excited to be working with Imandra." said Linda Middleditch, EVP, Head of Product Strategy, Itiviti Group.

"This partnership will capture the tremendous opportunities within financial markets for removing manual work and applying much-needed rigorous scientific techniques toward testing of safety critical infrastructure," said Denis Ignatovich, co-founder and co-CEO of Imandra. "We look forward to helping Itiviti empower clients to take full advantage of their solutions, while adding key capabilities." Dr Grant Passmore, co-founder and co-CEO of Imandra, further added, "This partnership is the culmination of many years of deep R&D and we're thrilled to partner with Itiviti to bring our technology to global financial markets on a massive scale."

About Itiviti

Itiviti enables financial institutions worldwide to transform their trading and capture tomorrow. With innovative technology, deep expertise and a dedication to service, we help customers seize market opportunities and guide them through regulatory change.

Top-tier banks, brokers, trading firms and institutional investors rely on Itiviti's solutions to service their clients, connect to markets, trade smarter in all asset classes by consolidating trading platforms and leverage automation to move faster.

A global technology and service provider, we offer the most innovative, consistent, and reliable connectivity and trading solutions available.

With presence in all major financial centres and serving around 2,000 clients in over 50 countries, Itiviti delivers on a global scale.

For more information, please visit www.itiviti.com.

Itiviti is owned by Nordic Capital.

About Imandra

Imandra Inc. (www.imandra.ai) is the world leader in cloud-scale automated reasoning, democratizing deep advances in algorithm analysis and symbolic AI for making algorithms safe, explainable and fair. Imandra has been deep in R&D and industrial pilots over the past five years and has recently closed its $5mm seed round led by several top deep-tech investors in the US and UK. Imandra is headquartered in Austin, TX, and has offices in the UK and continental Europe.

For further information, please contact:

Itiviti

Linda Middleditch, EVP, Head of Product Strategy, Tel: +44 796 82 126 24, Email: linda.middleditch@itiviti.com

George Rosenberger, Head of Product Strategy, Client Connectivity Service, Tel: +, Email: george.rosenberger@itiviti.com

Christine Blinke, EVP, Head of Marketing & Communications, Tel: +46 739 01 02 01, Email: christine.blinke@itiviti.com

Imandra

Denis Ignatovich, co-CEO, Tel: +44 20 3773 6225, Email: denis@imandra.ai

Grant Passmore, co-CEO, Tel: +1 512 629 4038, Email: grant@imandra.ai

SOURCE Itiviti Group AB

View original post here:
Itiviti Partners With AI Innovator Imandra to Integrate Machine Learning Into Client Onboarding and Testing Tools - PRNewswire

Learning that Targets Millennial and Generation Z – HR Exchange Network

Both Millennials and Generation Z can be categorized as digital natives, and the way in which they learn reflects that reality. A company's learning programs must reflect it too.

Utilizing technologies such as microlearning, which is usually delivered via mobile technology, or machine learning can engage these individuals in the way they are accustomed to consuming information.

Microlearning is delivering learning in bite-sized pieces. It can take many different forms, such as an animation or a video. In either case, the information is delivered in a short amount of time, in as little as two to three minutes. In most cases, microlearning happens on a mobile device or tablet.

When should micro-learning be used?

Think of it as a way to engage employees already on the job. It can be used to deliver quick bits of information that are immediately relevant to their daily responsibilities. To be more pointed, microlearning is the bridge between formal training and application. At least one study shows that six weeks after a formal training, 85% of the content consumed will have been lost. Microlearning can deliver that information in the interim and can be used at the moment of application.

Microlearning shouldn't be used to replace formal training, but rather as a complement, which makes it perfect for developing and retaining high-quality talent.

Amnesty International piloted a microlearning strategy to launch its global campaign on Human Rights Defenders. The program used the learning approach to build a culture of human rights. It allowed Amnesty to discuss human rights issues in a quick, relevant, and creative manner. As such, learners were taught how to talk to people in everyday life about human rights and human rights defenders.

Dell has also used the strategy to implement a digital campaign to encourage 14,000 sales representatives around the world to implement elements of its Net Promoter Score methodology. Using mobile technology and personal computers, the company was able to achieve 11% to 19% uptake in desire among sales reps globally.

Machine learning can also be used as a strategy. Machine learning, a branch of artificial intelligence, is an application that provides systems the ability to automatically learn and improve from experience without being explicitly programmed to do so.

For the purpose of explanation, the example of an AI-controlled multiple-choice test is relevant. If a person taking the test marked an incorrect answer, the AI would then give them a slightly easier question. If that question was answered wrong again, the AI would follow with a question still lower in difficulty. When the student began to answer questions correctly, the difficulty of the questions would increase. Similarly, a person answering questions correctly would continue to get more difficult questions. This allows the AI to determine which topics the student understands least. In doing so, learning becomes personalized and specific to the student.
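
A toy sketch of that adaptive loop is below; the difficulty levels and answer model are entirely hypothetical, and real systems use far richer student models:

```python
# Toy adaptive test: difficulty steps up after a correct answer and down
# after an incorrect one, so the system homes in on the student's level.
import random

def run_adaptive_quiz(num_questions=10, levels=5):
    difficulty = levels // 2  # start in the middle of the range
    history = []
    for _ in range(num_questions):
        # Placeholder for a real answer check: here the student answers
        # correctly with a probability that falls as difficulty rises.
        correct = random.random() < 1.0 - difficulty / (levels + 1)
        history.append((difficulty, correct))
        if correct:
            difficulty = min(levels - 1, difficulty + 1)
        else:
            difficulty = max(0, difficulty - 1)
    return history  # persistently low difficulties flag the weakest topics

print(run_adaptive_quiz())
```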

But technology isn't the sole basis for disseminating information. Learning programs should also focus on creating more experiential opportunities that offer development in either leadership or talent. Those programs should also prioritize retention. Programs such as mentoring and coaching are great examples.

Dipankar Bandyopadhyay led this charge when he was Vice President of HR Global R&D and Integration Planning Lead, Culture & Change Management, for the Monsanto Company. Monsanto achieved this through its Global Leadership Program for Experienced Hires.

"A couple of years ago, we realized we had a need to supplement our talent pipeline, essentially in our commercial organization and businesses globally, really building talent for key leadership roles within the business, which play really critical influence roles and help drive organizational strategy in these areas. With this intention, we created the Global Commercial Emerging Leaders Program," Bandyopadhyay said. "Essentially, what it does is focus on getting external talent into Monsanto through different industry segments. This allows us to broaden our talent pipeline, bringing in diverse points of view from very different industry segments (i.e., consumer goods, investment banking, the technology space, etc.). The program selects, onboards, assimilates and develops external talent to come into Monsanto."

Microlearning and machine learning are valuable in developing the workforce, but they are not the only options available. Additionally, it's important to note an organization can't simply provide development and walk away. There has to be data and analysis that tracks employee learning success. There also need to be strategies in place to make sure workers are retaining that knowledge. Otherwise, it is a waste of money.

NEXT: How L&D Can Help Itself

See original here:
Learning that Targets Millennial and Generation Z - HR Exchange Network

Technologies of the future, but where are AI and ML headed to? – YourStory

Today, when we look around, the technological advances in recent years have been immense. We can see driverless cars, hands-free devices that can turn on the lights, and robots working in factories, which prove that intelligent machines are possible.

In the last four years in the Indian startup ecosystem, the terms that were used (overused rather) more than funding, valuation, and exit were artificial intelligence (AI) and machine learning (ML). We also saw investors readily putting in their money in startups that remotely used or claimed to use these emerging technologies.

From deeptech, ecommerce, fintech, and conversational chatbots to mobility, foodtech, and healthcare, AI and ML have transformed most industry sectors today.

The industry has swiftly moved from asking programmers to feed tonnes of code to the machine to acquiring terabytes of data and crunching it to build relevant logic.

Sameer Dhanrajani, Co-Founder and Chief Executive Officer of AI advisory and consulting firm AIQRATE, says:

"A subset of artificial intelligence, machine learning allows systems to make predictions and crucial business decisions, driven by data and pattern-based experiences. Without humans having to intervene, the algorithms that are fed to the systems are helping them develop and improve their own models and understanding of a certain use-case."

According to a study carried out by Analytics India and AnalytixLabs, the Indian data analytics market is expected to double its size by 2020, with about 24 percent being attributed to Big Data. It said that almost 60 percent of the analytics revenue across India comes from exports of analytics to the USA. Domestic revenue accounts for only four percent of the total analytics revenue across the country.

The BFSI industry accounts for almost 37 percent of the total analytics market while generating almost $756 million. While marketing and advertising comes second at 26 percent, ecommerce contributes to about 15 percent.

At present, the average paycheck sizes of AI and ML engineers in India start from Rs 10 lakh per annum and the maximum cap often crosses Rs 50 lakh per annum.

According to a report by Great Learning, an edtech startup for professional education, India is expected to see 1.5 lakh new openings in Data Science in 2020, an increase of about 62 percent as compared to that of 2019. Currently, 70 percent of job postings in this sector are for Data Scientists with less than five years of work experience.

Shantanu Bhattacharya, a data scientist at Locus, told YourStory earlier that it is wrong to look at machine learning as a tool or a career path; rather, it is a convenient means to develop training models to solve problems in general.

The fluid nature of data science allows people from multiple fields of expertise to come in and crack it. Shantanu believes that if J.R.R. Tolkien, brilliant linguist that he was, had pursued data science to develop NLP models, he would have been the greatest NLP expert ever, and that is the kind of liberty and scope data science offers.

Needless to say, AI and ML have the scope to exponentially amplify the profitability and efficiency of a business by automating many tasks. And naturally, the trend has spread its wings to the jobs market, where the dire need for experts and engineers in these technologies is only going up and shows no sign of slowing down.

Thanks to the hefty paychecks and faster career growth, the role of machine learning engineers has claimed the top spot in job portals.

Hari Krishnan Nair, the co-founder of Great Learning, says:

"For a country like India, acquiring new skills is not something of a luxury but a necessary requirement, and the trends of upskilling and reskilling are also currently on the rise to complement the same. But data science, machine learning, and artificial intelligence are fields where mere book-reading and formulaic interpretation and execution just do not cut it."

If one aspires to a competitive career in futuristic technologies, machine learning and data science demand a broad understanding of probability, statistics, and mathematics at a fundamental level.

To dispel the myths around programmers and software developers entering this market: machine learning involves an understanding of basic programming languages (Python, SQL, R), linear algebra and calculus, as well as inferential and descriptive statistics.

Siddharth Das, Founder of Univ.ai, an early-stage edtech startup that focuses on teaching these tools, says:

"For a business world that thrives on data and its leverage, the science around it is where the employment economy is moving. While the youth of the country is anxious about how rapid their upskilling ought to be, it is no easy mountain to climb to rightfully master what is often referred to as the art of data science."

Most professionals say it takes a consistent routine of learning for almost six to eight months to become an expert in this field. At a time when the industry is almost on the verge of fully migrating to NLP and neural networks, which are a significant part of future deep tech, there has never been a better time to start learning machine learning.

With rapidly changing technological paradigms, predicting how the world is going to run is something close to impossible. And being prepared for anything is the best one can manage with, at the moment.

(Edited by Megha Reddy)

Read more:
Technologies of the future, but where are AI and ML headed to? - YourStory

The 4 Hottest Trends in Data Science for 2020 – Machine Learning Times

Originally published in Towards Data Science, January 8, 2020

2019 was a big year for all of Data Science.

Companies all over the world across a wide variety of industries have been going through what people are calling a digital transformation. That is, businesses are taking traditional business processes such as hiring, marketing, pricing, and strategy, and using digital technologies to make them 10 times better.

Data Science has become an integral part of those transformations. With Data Science, organizations no longer have to make their important decisions based on hunches, best guesses, or small surveys. Instead, they're analyzing large amounts of real data to base their decisions on real, data-driven facts. That's really what Data Science is all about: creating value through data.

This trend of integrating data into the core business processes has grown significantly, with interest increasing more than four times over the past 5 years according to Google Search Trends. Data is giving companies a sharp advantage over their competitors. With more data and better Data Scientists to use it, companies can acquire information about the market that their competitors might not even know existed. It's become a game of data or perish.

Google search popularity of Data Science over the past 5 years. Generated by Google Trends.

In today's ever-evolving digital world, staying ahead of the competition requires constant innovation. Patents have gone out of style, while Agile methodology and catching new trends quickly are very much in.

Organizations can no longer rely on their rock-solid methods of old. If a new trend like Data Science, Artificial Intelligence, or Blockchain comes along, it needs to be anticipated beforehand and adapted quickly.

The following are the 4 hottest Data Science trends for the year 2020. These are trends which have gathered increasing interest this year and will continue to grow in 2020.

(1) Automated Data Science

Even in today's digital age, Data Science still requires a lot of manual work: storing data, cleaning data, visualizing and exploring data, and finally, modeling data to get some actual results. That manual work is just begging for automation, and thus has come the rise of automated Data Science and Machine Learning.

Nearly every step of the Data Science pipeline has been or is in the process of becoming automated.

Auto-Data Cleaning has been heavily researched over the past few years. Cleaning big data often takes up most of a Data Scientist's expensive time. Both startups and large companies such as IBM offer automation and tooling for data cleaning.

Another large part of Data Science known as feature engineering has undergone significant disruption. Featuretools offers a solution for automatic feature engineering. On top of that, modern Deep Learning techniques such as Convolutional and Recurrent Neural Networks learn their own features without the need for manual feature design.
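
For a sense of what that looks like in practice, here is a minimal Featuretools run on the library's bundled demo data (a sketch using the pre-1.0 API that was current when this piece was written):

```python
# Automated feature engineering with Featuretools: Deep Feature Synthesis
# derives aggregation and transform features across related tables.
import featuretools as ft

es = ft.demo.load_mock_customer(return_entityset=True)
feature_matrix, feature_defs = ft.dfs(
    entityset=es,
    target_entity="customers",  # build features describing each customer
    max_depth=2,
)
print(feature_matrix.head())
```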

Perhaps the most significant automation is occurring in the Machine Learning space. Both DataRobot and H2O have established themselves in the industry by offering end-to-end Machine Learning platforms, giving Data Scientists a very easy handle on data management and model building. AutoML, a method for automatic model design and training, also boomed over 2019 as these automated models surpassed the state of the art. Google, in particular, is investing heavily in Cloud AutoML.

In general, companies are investing heavily in building and buying tools and services for automated Data Science. Anything to make the process cheaper and easier. At the same time, this automation also caters to smaller and less technical organizations, which can leverage these tools and services to access Data Science without building out their own team.

(2) Data Privacy and Security

Privacy and security are always sensitive topics in technology. All companies want to move fast and innovate, but losing the trust of their customers over privacy or security issues can be fatal. So they're forced to make it a priority, at least to the bare minimum of not leaking private data.

Data privacy and security have become incredibly hot topics over the past year as the issues are magnified by enormous public hacks. Just recently, on November 22, 2019, an exposed server with no security was discovered on Google Cloud. The server contained the personal information of 1.2 billion unique people, including names, email addresses, phone numbers, and LinkedIn and Facebook profile information. Even the FBI came in to investigate. It's one of the largest data exposures of all time.


Educate Yourself on Machine Learning at this Las Vegas Event – Small Business Trends

One of the biggest machine learning events, Machine Learning Week 2020, is taking place in Las Vegas just before summer.

This five-day event will have 5 conferences, 8 tracks, 10 workshops, 160 speakers, more than 150 sessions, and 800 attendees.

If there is anything you want to know about machine learning for your small business, this is the event. Keynote speakers will come from Google, Facebook, Lyft, GM, Comcast, WhatsApp, FedEx, and LinkedIn, to name just some of the companies that will be represented at the event.

The conferences will cover predictive analytics for business, financial services, healthcare, and industry, as well as Deep Learning World.

Training workshops will cover topics including big data and how it is changing business, a hands-on introduction to machine learning, hands-on deep learning, and much more.

Machine Learning Week will take place from May 31 to June 4, 2020, at Caesars Palace in Las Vegas.


This weekly listing of small business events, contests and awards is provided as a community service by Small Business Trends.

You can see a full list of events, contests and award listings or post your own events by visiting the Small Business Events Calendar.



Machine Learning Market Size Worth $96.7 Billion by 2025 …

SAN FRANCISCO, Jan. 13, 2020 /PRNewswire/ -- The global machine learning market size is expected to reach USD 96.7 billion by 2025, according to a new report by Grand View Research, Inc. The market is anticipated to expand at a CAGR of 43.8% from 2019 to 2025. Production of massive amounts of data has increased the adoption of technologies that can provide a smart analysis of that data.


Read the 100-page research report with ToC on "Machine Learning Market Size, Share & Trends Analysis Report By Component, By Enterprise Size, By End Use (Healthcare, BFSI, Law, Retail, Advertising & Media), And Segment Forecasts, 2019 - 2025" at: https://www.grandviewresearch.com/industry-analysis/machine-learning-market

Technologies such as Machine Learning (ML) are being rapidly adopted across various applications in order to automatically detect meaningful patterns within a data set. Software based on ML algorithms, such as search engines, anti-spam software, and fraud detection software, is being increasingly used, thereby contributing to market growth.
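For a sense of how software in that category works under the hood, here is a toy anti-spam classifier in scikit-learn; the corpus is invented, and a production filter would be trained on millions of labeled messages:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny hypothetical labeled corpus: 1 = spam, 0 = legitimate mail.
texts = ["win a free prize now", "claim your free money",
         "meeting moved to 3pm", "lunch tomorrow?"]
labels = [1, 1, 0, 0]

# TF-IDF turns text into features; Naive Bayes learns spam-word patterns.
spam_filter = make_pipeline(TfidfVectorizer(), MultinomialNB())
spam_filter.fit(texts, labels)

print(spam_filter.predict(["you won a free prize"]))  # expected: [1]
```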

The rapid emergence of ML technology has increased its adoption across various application areas. It provides cloud computing optimization along with intelligent voice assistance. In healthcare, it is used for diagnosis. In the case of businesses, the use of ML models that are open source and have a standards-based structure has increased in recent years. These models can be easily deployed in various business programs and can help companies bridge the skills gap between IT programmers and information scientists.

Developments such as fine-tuned personalization, hyper-targeting, search engine optimization, no-code environments, self-learning bots, and others are projected to change the machine learning landscape. Capsule networks are being developed to replace conventional neural networks in order to provide more accurate pattern detection with fewer errors. These advanced developments are anticipated to propel market growth in the foreseeable future.

Grand View Research has segmented the global machine learning market based on component, enterprise size, end use, and region:

Find more research reports on Next Generation Technologies Industry, by Grand View Research:

Gain access to Grand View Compass, our BI-enabled, intuitive market research database of 10,000+ reports

About Grand View Research

Grand View Research, a U.S.-based market research and consulting company, provides syndicated as well as customized research reports and consulting services. Registered in California and headquartered in San Francisco, the company comprises over 425 analysts and consultants, adding more than 1,200 market research reports to its vast database each year. These reports offer in-depth analysis on 46 industries across 25 major countries worldwide. With the help of an interactive market intelligence platform, Grand View Research helps Fortune 500 companies and renowned academic institutes understand the global and regional business environment and gauge the opportunities that lie ahead.

Contact:

Sherry James, Corporate Sales Specialist, USA
Grand View Research, Inc.
Phone: +1-415-349-0058
Toll Free: 1-888-202-9519
Email: sales@grandviewresearch.com
Web: https://www.grandviewresearch.com
Follow Us: LinkedIn | Twitter

SOURCE Grand View Research, Inc.


Technology Trends to Keep an Eye on in 2020 – Built In Chicago

Artificial intelligence and machine learning, with an eye toward task automation.

For Senior Data Scientist James Buban at iHerb, those are just a couple of the tech trends he'll be watching in 2020.

As companies enter a new decade, it's important for their leaders to anticipate how the latest tech trends will evolve in order to determine how they can benefit their businesses and their customers. 20spokes CEO Ryan Fischer said his company uses machine learning "to provide a better user experience for our clients' customers by leveraging data on individual user behavior."

We asked Buban, Fischer and other local tech execs which trends they're watching this year and how they'll be utilizing them to enhance their businesses. From natural language processing to computer vision, these are the trends that will be shaping tech in 2020.

As a development agency, 20spokes specializes in helping startups plan, build and scale innovative products. CEO Ryan Fischer said he is looking to AI and machine learning to design better chatbots and wrangle large data sets.

What are the top tech trends you're watching in 2020? What impact do you think these trends will have on your industry in particular?

In 2020, we expect AI to play an even bigger role for our clients. When we talk about AI, we are really discussing machine learning: using data to train a model to recognize patterns and make inferences.

Working with machine learning continues to get easier with many large providers working on simpler implementations, and we expect the barrier to entry to continue to lower in 2020. We also have more user data, which allows us to use machine learning to design more tailored and intelligent experiences for users.

We are using machine learning to improve chatbots to create more dynamic dialogue.

How are you applying these trends in your work in the year ahead?

At 20spokes, we use machine learning to provide a better user experience for our clients' customers by leveraging data on individual user behavior to make more accurate recommendations and suggestions. We're continuing to look at how we can apply it to different sets of data, from providing better insights of reports for large data sets to sending us real-time updates based on trained patterns. We are also using machine learning to improve chatbots to create more dynamic dialogue.

In order to deliver trusted insights on consumer packaged goods, Label Insight's Senior Data Scientist James Buban said they first have to process large amounts of data. Using machine learning and automation, data collection processes can be completed faster and more accurately for customers.

What are the top tech trends you're watching in 2020?

The top tech trends that we'll be watching in 2020 are artificial intelligence and machine learning, with an eye toward task automation. In particular, we are interested in advancements in computer vision, such as object detection and recognition. We are also interested in natural language processing, such as entity tagging and text classification. In general, we believe that machine learning automation will play a big role in both the data collection industry and in e-commerce, particularly in the relatively new addition of the food industry in the retail space.
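As a purely illustrative example of the entity tagging Buban mentions, here is a minimal sketch with the open-source spaCy library; the sentence, company, and product names are invented, and Label Insight's actual pipeline is not public:

```python
import spacy  # assumes: pip install spacy && python -m spacy download en_core_web_sm

nlp = spacy.load("en_core_web_sm")

# Hypothetical package text; entity tagging pulls out organizations,
# quantities, and places without hand-written rules.
doc = nlp("Acme Foods ships 12 oz boxes of GrainCrunch cereal to Chicago.")

for ent in doc.ents:
    print(ent.text, ent.label_)  # e.g. 'Acme Foods' ORG, '12 oz' QUANTITY, 'Chicago' GPE
```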

We plan to use computer vision and natural language processing to automate tasks throughout 2020.

How are you applying these trends in your work in the year ahead?

At Label Insight, we are building up a large database of attributes for consumables based on package information. To do so, we first need to collect all package data, which has traditionally been accomplished through a team of dedicated data entry clerks. Due to the huge volume of products that need to be added to our database, this data entry process is expensive, tedious and time-consuming.

Therefore, we plan to use computer vision and natural language processing to begin automating these tasks throughout 2020. We are also planning to use this technology to make our e-commerce solutions more scalable.

Visit link:
Technology Trends to Keep an Eye on in 2020 - Built In Chicago

Forget Machine Learning, Constraint Solvers are What the Enterprise Needs – RTInsights


When a business looks to implement an artificial intelligence strategy, even proper expertise can be too narrow. That's what has led many businesses to deploy machine learning or neural networks on problems that require other forms of AI, like constraint solvers.

Constraint solvers take a set of hard and soft constraints in an organization and formulate the most effective plan, taking into account real-time problems. It is the best solution for businesses that have timetabling, assignment or efficiency issues.

In a Red Hat webinar, principal software engineer Geoffrey De Smet ran through three use cases for constraint solvers.

Vehicle Routing

Efficient delivery management is something Amazon has seemingly perfected, so much so that it's now an annoyance to have to wait 3-5 days for an item to be delivered. Using Red Hat's OptaPlanner, businesses can improve vehicle routing by 9 to 18 percent by optimizing routes and ensuring drivers are able to deliver an optimal amount of goods.

To start, OptaPlanner takes in all the necessary constraints, like truck capacity and driver specialization. It also takes into account regional laws, like the amount of time a driver is legally allowed to drive per day, and creates a route for all drivers in the organization.

SEE ALSO: Machine Learning Algorithms Help Couples Conceive

In a practical case, De Smet said Red Hat saved a technical vehicle routing company over $100 million per year with the constraint solver. Driving time was reduced by 25 percent, and the business was able to reduce its headcount by 10,000.

"The benefits [of OptaPlanner] are to reduce cost, improve customer satisfaction and employee well-being, and save the planet," said De Smet. "The nice thing about some of these is they're complementary; for example, reducing travel time also reduces fuel consumption."

Employee timetabling

Knowing who is covering which shift can be an infuriating task for managers, with all the requests for time off, illness, and mandatory days off. In a workplace where 9 to 5 isn't the norm, it can be even harder to keep track of it all.

Red Hat's OptaPlanner is able to take all of the hard constraints (two days off per week, no more than eight-hour shifts) and soft constraints (should have up to 10 hours of rest between shifts) and formulate a timetable that takes all of that into account. When someone asks for a day off, OptaPlanner is able to reassign workers in real time.
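OptaPlanner itself is a Java library; as a language-neutral illustration of how hard constraints differ from soft ones inside a solver, here is a minimal shift-assignment sketch using Google OR-Tools' CP-SAT in Python. The staffing numbers and the particular soft constraint are invented for the example, not taken from the webinar:

```python
from ortools.sat.python import cp_model  # assumes: pip install ortools

WORKERS, DAYS = 4, 7
model = cp_model.CpModel()

# x[w, d] == 1 when worker w covers the single daily shift on day d.
x = {(w, d): model.NewBoolVar(f"x_{w}_{d}")
     for w in range(WORKERS) for d in range(DAYS)}

# Hard constraint: every day's shift is covered by exactly one worker.
for d in range(DAYS):
    model.Add(sum(x[w, d] for w in range(WORKERS)) == 1)

# Hard constraint: at most five shifts per worker (two days off per week).
for w in range(WORKERS):
    model.Add(sum(x[w, d] for d in range(DAYS)) <= 5)

# Soft constraint: back-to-back days are allowed, but each one costs a penalty.
penalties = []
for w in range(WORKERS):
    for d in range(DAYS - 1):
        p = model.NewBoolVar(f"pen_{w}_{d}")
        # p is forced true whenever both consecutive shifts are worked.
        model.AddBoolOr([x[w, d].Not(), x[w, d + 1].Not(), p])
        penalties.append(p)
model.Minimize(sum(penalties))  # minimizing penalties enforces the soft rule

solver = cp_model.CpSolver()
if solver.Solve(model) in (cp_model.OPTIMAL, cp_model.FEASIBLE):
    for w in range(WORKERS):
        days = [d for d in range(DAYS) if solver.Value(x[w, d])]
        print(f"worker {w} works days: {days}")
```

The split is the whole point: hard constraints can never be violated, while soft constraints are traded off against each other through the objective, which is how a solver can absorb a last-minute day-off request and still return a legal timetable.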

De Smet said this is useful for operations that need to run 24/7, like hospitals, police forces, security firms, and international call centers. According to Red Hat's simulation, it should improve employee well-being by 19 to 85 percent, alongside improvements in retention and customer satisfaction.

Task assignment

Even within a single business department, there are skills only a few employees have. For instance, in a call center, only a few will be able to speak fluently in both English and French. To avoid customer annoyance, it is imperative for employees with the right skill-set to be assigned correctly.

With OptaPlanner, managers are able to add employee skills and have the AI assign employees correctly. Using the call center example again, a bilingual advisor may take all calls in French on one day when there's high demand for it, but on other days handle a mix of French and English.

For customer support, the constraint solver would be able to assign a problem to the correct advisor, or to the next best thing, before the customer is connected, thus avoiding giving out the wrong advice or having to pass the customer on to another advisor.

In the webinar, De Smet said that while the constraint solver is a valuable asset for businesses looking to reduce costs, this shouldn't be their only aim.

Without all stakeholders involved in the implementation, the AI could end up harming other areas of the business, like customer satisfaction or employee retention. This is a warning commonly given by analysts on AI implementation: it needs to come from a genuine desire to improve the business to get the best outcome.


Too many AI researchers think real-world problems are not relevant – MIT Technology Review

Any researcher who's focused on applying machine learning to real-world problems has likely received a response like this one: "The authors present a solution for an original and highly motivating problem, but it is an application and the significance seems limited for the machine-learning community."

These words are straight from a review I received for a paper I submitted to the NeurIPS (Neural Information Processing Systems) conference, a top venue for machine-learning research. I've seen the refrain time and again in reviews of papers where my coauthors and I presented a method motivated by an application, and I've heard similar stories from countless others.

This makes me wonder: If the community feels that aiming to solve high-impact real-world problems with machine learning is of limited significance, then what are we trying to achieve?

The goal of artificial intelligence (pdf) is to push forward the frontier of machine intelligence. In the field of machine learning, a novel development usually means a new algorithm or procedure, or, in the case of deep learning, a new network architecture. As others have pointed out, this hyperfocus on novel methods leads to a scourge of papers that report marginal or incremental improvements on benchmark data sets and exhibit flawed scholarship (pdf) as researchers race to top the leaderboard.

Meanwhile, many papers that describe new applications present both novel concepts and high-impact results. But even a hint of the word "application" seems to spoil the paper for reviewers. As a result, such research is marginalized at major conferences. Their authors' only real hope is to have their papers accepted in workshops, which rarely get the same attention from the community.

This is a problem because machine learning holds great promise for advancing health, agriculture, scientific discovery, and more. The first image of a black hole was produced using machine learning. The most accurate predictions of protein structures, an important step for drug discovery, are made using machine learning. If others in the field had prioritized real-world applications, what other groundbreaking discoveries would we have made by now?

This is not a new revelation. To quote a classic paper titled "Machine Learning that Matters" (pdf), by NASA computer scientist Kiri Wagstaff: "Much of current machine learning research has lost its connection to problems of import to the larger world of science and society." The same year that Wagstaff published her paper, a convolutional neural network called AlexNet won a high-profile competition for image recognition centered on the popular ImageNet data set, leading to an explosion of interest in deep learning. Unfortunately, the disconnect she described appears to have grown even worse since then.

Marginalizing applications research has real consequences. Benchmark data sets, such as ImageNet or COCO, have been key to advancing machine learning. They enable algorithms to train and be compared on the same data. However, these data sets contain biases that can get built into the resulting models.

More than half of the images in ImageNet (pdf) come from the US and Great Britain, for example. That imbalance leads systems to inaccurately classify images in categories that differ by geography (pdf). Popular face data sets, such as the AT&T Database of Faces, contain primarily light-skinned male subjects, which leads to systems that struggle to recognize dark-skinned and female faces.

While researchers try to outdo one another on contrived benchmarks, one in every nine people in the world is starving.

When studies on real-world applications of machine learning are excluded from the mainstream, it's difficult for researchers to see the impact of their biased models, making it far less likely that they will work to solve these problems.

One reason applications research is minimized might be that others in machine learning think this work consists of simply applying methods that already exist. In reality, though, adapting machine-learning tools to specific real-world problems takes significant algorithmic and engineering work. Machine-learning researchers who fail to realize this and expect tools to work off the shelf often wind up creating ineffective models. Either they evaluate a model's performance using metrics that don't translate to real-world impact, or they choose the wrong target altogether.

For example, most studies applying deep learning to echocardiogram analysis try to surpass a physician's ability to predict disease. But predicting normal heart function (pdf) would actually save cardiologists more time by identifying patients who do not need their expertise. Many studies applying machine learning to viticulture aim to optimize grape yields (pdf), but winemakers "want the right levels of sugar and acid, not just lots of big watery berries," says Drake Whitcraft of Whitcraft Winery in California.

Another reason applications research should matter to mainstream machine learning is that the field's benchmark data sets are woefully out of touch with reality.

New machine-learning models are measured against large, curated data sets that lack noise and have well-defined, explicitly labeled categories (cat, dog, bird). Deep learning does well for these problems because it assumes a largely stable world (pdf).

But in the real world, these categories are constantly changing over time or according to geographic and cultural context. Unfortunately, the response has not been to develop new methods that address the difficulties of real-world data; rather, there's been a push for applications researchers to create their own benchmark data sets.

The goal of these efforts is essentially to squeeze real-world problems into the paradigm that other machine-learning researchers use to measure performance. But the domain-specific data sets are likely to be no better than existing versions at representing real-world scenarios. The results could do more harm than good. People who might have been helped by these researchers' work will become disillusioned by technologies that perform poorly when it matters most.

Because of the field's misguided priorities, people who are trying to solve the world's biggest challenges are not benefiting as much as they could from AI's very real promise. While researchers try to outdo one another on contrived benchmarks, one in every nine people in the world is starving. Earth is warming and sea level is rising at an alarming rate.

As neuroscientist and AI thought leader Gary Marcus once wrote (pdf): "AI's greatest contributions to society could and should ultimately come in domains like automated scientific discovery, leading among other things towards vastly more sophisticated versions of medicine than are currently possible. But to get there we need to make sure that the field as a whole doesn't first get stuck in a local minimum."

For the world to benefit from machine learning, the community must again ask itself, as Wagstaff once put it: "What is the field's objective function?" If the answer is to have a positive impact in the world, we must change the way we think about applications.

Hannah Kerner is an assistant research professor at the University of Maryland in College Park. She researches machine learning methods for remote sensing applications in agricultural monitoring and food security as part of the NASA Harvest program.


Pear Therapeutics Expands Pipeline with Machine Learning, Digital Therapeutic and Digital Biomarker Technologies – Business Wire

BOSTON & SAN FRANCISCO--(BUSINESS WIRE)--Pear Therapeutics, Inc., the leader in Prescription Digital Therapeutics (PDTs), announced today that it has entered into agreements with multiple technology innovators, including Firsthand Technology, Inc., leading researchers from the Karolinska Institute in Sweden, Cincinnati Children's Hospital Medical Center, Winterlight Labs, Inc., and NeuroLex Laboratories, Inc. These new agreements continue to bolster Pear's PDT platform by adding to its library of digital biomarkers, machine learning algorithms, and digital therapeutics.

Pear's investment in these cutting-edge technologies further supports its strategy to create the broadest and deepest toolset for the development of PDTs that redefine the standard of care in a range of therapeutic areas. With access to these new technologies, Pear is positioned to develop PDTs in new disease areas, while leveraging machine learning to personalize and improve its existing PDTs.

"We are excited to announce these agreements, which expand the leading PDT platform," said Corey McCann, M.D., Ph.D., President and CEO of Pear. "Accessing external technologies allows us to continue to broaden the scope and efficacy of PDTs."

"The field of digital health is evolving rapidly, and PDTs are going to increasingly play a big part because they are designed to allow doctors to treat disease in combination with drug products more effectively than with drugs alone," said Alex Pentland, Ph.D., a leading expert in voice analytics and MIT Professor. "For PDTs to make their mark in healthcare, they will need to continually evolve. Machine learning and voice biomarker algorithms are key to guide that evolution and personalization."

About Pear Therapeutics

Pear Therapeutics, Inc. is the leader in prescription digital therapeutics. We aim to redefine medicine by discovering, developing, and delivering clinically validated software-based therapeutics to provide better outcomes for patients, smarter engagement and tracking tools for clinicians, and cost-effective solutions for payers. Pear has a pipeline of products and product candidates across therapeutic areas, including severe psychiatric and neurological conditions. Our lead product, reSET, for the treatment of Substance Use Disorder, was the first prescription digital therapeutic to receive marketing authorization from the FDA to treat disease. Pear's second product, reSET-O, for the treatment of Opioid Use Disorder, received marketing authorization from the FDA in December 2018. For more information, visit us at http://www.peartherapeutics.com.



Machine Learning Operationalization Software Market (2020-2026) | Where Should Participant Focus To Gain Maximum ROI | Exclusive Report By DataIntelo…

The Global Machine Learning Operationalization Software Market analysis report published on Dataintelo.com is a detailed study of market size, share and dynamics covered in XX pages and is an illustrative sample demonstrating market trends. This is the latest report, covering the current COVID-19 impact on the market. The pandemic of Coronavirus (COVID-19) has affected every aspect of life globally and has brought along several changes in market conditions. The rapidly changing market scenario and initial and future assessments of the impact are covered in the report. It covers the entire market with an in-depth study on revenue growth and profitability. The report also covers key players along with a strategic standpoint pertaining to price and promotion.

Get FREE Exclusive PDF Sample Copy of This Report: https://dataintelo.com/request-sample/?reportId=60428

The Global Machine Learning Operationalization Software Market report entails a comprehensive database on future market estimation based on historical data analysis. It provides clients with quantified data for current market perusal. It is a professional and detailed report focusing on primary and secondary drivers, market share, leading segments and regional analysis. Also listed are key players, major collaborations, and mergers & acquisitions, along with upcoming and trending innovations. Business policies are reviewed from a techno-commercial perspective. The report contains granular information and analysis pertaining to the Global Machine Learning Operationalization Software Market size, share, growth, trends, segments and forecasts from 2020-2026.

With an all-round approach to data accumulation, the market scenarios comprise major players, cost and pricing in the specific geographies. Statistical surveying methods used include SWOT analysis, PESTLE analysis, predictive analysis, and real-time analytics. Graphs are used to support the data for a clear understanding of facts and figures.

Customize Report and Inquiry for The Machine Learning Operationalization Software Market Report: https://dataintelo.com/enquiry-before-buying/?reportId=60428

Get in touch with our sales team, who will make sure you get a report that suits your needs.

Primary research, including interviews and news sources, makes the report precise and rich in data. Secondary research techniques add further clarity and context to the placement of data in the report.

The report segments the Global Machine Learning Operationalization Software Market as:

Global Machine Learning Operationalization Software Market Size & Share, by Regions

Global Machine Learning Operationalization Software Market Size & Share, by Products: Cloud Based, On Premises

Global Machine Learning Operationalization Software Market Size & Share, by Applications: BFSI, Energy and Natural Resources, Consumer Industries, Mechanical Industries, Service Industries, Public Sectors, Other

Key Players: MathWorks, SAS, Microsoft, ParallelM, Algorithmia, H2O.ai, TIBCO Software, SAP, IBM, Domino, Seldon, Datmo, Actico, RapidMiner, KNIME

Avail the Discount on this Report @ https://dataintelo.com/ask-for-discount/?reportId=60428

Dataintelo offers attractive discounts on customization of reports as per your need; this report can be personalized to meet your requirements.

About DataIntelo: DATAINTELO has set its benchmark in the market research industry by providing syndicated and customized research reports to clients. The company's database is updated on a daily basis to provide clients with the latest trends and in-depth analysis of the industry. Our pool of databases covers various industry verticals, including IT & Telecom, Food & Beverage, Automotive, Healthcare, Chemicals and Energy, Consumer Foods, and many more. Each report goes through the proper research methodology and is validated by professionals and analysts to ensure high-quality reports.

Contact Info:
Name: Alex Mathews
Address: 500 East E Street, Ontario, CA 91764, United States.
Phone No: USA: +1 909 545 6473 | IND: +91-7000061386
Email: [emailprotected]
Website: https://dataintelo.com


On the soccer field, or the classroom, William Tobin is a winner – Times-West Virginian

Through countless all-nighters studying for tests, soccer games and science fairs, William Tobin made the most of his four years in high school.

While Tobin earned recognition for his work on an individual basis, possibly his biggest achievement yet ties all of his work together as one. On May 27, it was announced that Tobin had been named one of two U.S. Presidential Scholars for the entire state of West Virginia.

"What it means to me to get it is really just, it's a culmination of everything I've done in high school," said Tobin, who just graduated from Fairmont Senior High. "The different national science fairs, countless hours of studying, the all-nighters I've pulled, different competitions I've went to, it really just brings all those together to just one award that recognizes it."

The Presidential Scholar program aims to recognize and reward high school seniors for achievements in test scores and extracurricular activities. Tobin's extracurricular résumé is impressive: he served as president of the West Virginia Association of Student Councils, vice president of the National Honor Society, vice president of Math Honors and captain of his school's math team.

Though this honor typically includes a trip to Washington, D.C., this year, because of the coronavirus, the scholars will be awarded the Presidential Scholars Medallion, sponsored by the White House, and honored for their accomplishments during an online recognition event to ensure the health and safety of the award recipients.

Tobin plans to continue learning at Washington and Lee University, in Lexington City, Virginia, aided by the full-ride scholarship he earned as a U.S. Presidential Scholar.

"I hope to study computer science and business at Washington & Lee University," Tobin said. "After college, I hope to work a couple years in the industry, maybe with machine learning, then I hope to finally start a company with machine learning that combats issues."

Tobin thanks everyone he came in contact with at Fairmont Senior for their role in his high school career. He said he has good relationships with the faculty and administration at the school, having made his mark through his achievements.

"He is a good student and an all-around good kid," said James Greene, assistant principal at Fairmont Senior. "I have dealt with Billy a number of times, and he is definitely worthy of the award, and I also think it goes to support the idea that Fairmont Senior is a top academic institution, and I think the teachers and students that we have here winning these kinds of awards reinforces that."

Greene said that Tobin is the first student at the school to get this award, at least in a while.

"This is my sixth year here and I don't recall anyone else winning," Greene said. "We have definitely had some high-end academic students, but I think part of what separated Billy is his test scores. Getting a perfect ACT is very rare."

Along with tremendous academic success, Tobin played four years on the Polar Bears soccer team and contributed to the team's 2019 state championship, in which they defeated Robert C. Byrd High 2-1 in sudden-death overtime. But head coach Darrin Paul said Tobin's contributions to the team were not only on the field, but in the classroom with his teammates.

"Billy was a great student and a great player," Paul said. "He was always willing to help his teammates, whether it was to become better players or help them with their homework after school."

Tobin was part of the team that took home the championship last year, which he also said was one of his biggest accomplishments.

Tobin said that even through the coronavirus pandemic, his motivation to pursue machine learning was not hindered. He said the isolation actually drove him to further expand his knowledge in the field.

"I was pretty motivated to go into machine learning before this," Tobin said. "Especially during this pandemic, I've had a lot of free time, and I really tried to hone my interests into different types of machine learning."

Tobin also said he hopes to make a difference in situations like this pandemic, because machine learning can be used to study data to predict future events.

"I think there will be a lot of different PhDs and dissertations done on this, especially in machine learning," Tobin said. "Right now we're collecting a bunch of data, but we don't really know what it means... That's exactly what machine learning does, it looks at a bunch of data and tries to analyze trends."



PayMyTuition Develops AI and Machine Learning Technology to Settle Real-Time Cross-Border Tuition Payments for Educational Institutions – PRNewswire

TORONTO and JERSEY CITY, N.J., March 10, 2020 /PRNewswire/ -- While educational institutions are trying to evolve and become more adapted to the digital age, colleges and universities have still lagged when it comes to improved processes for cross-border tuition payments. Fortunately, PayMyTuition, a leading provider of technology-driven global payment processing solutions for international tuition payments, announced today its solution to this problem. By way of its newly developed artificial intelligence (AI) and machine learning technology, the PayMyTuition platform can now enable colleges and universities to settle international tuition payments in real time.

"Today, we have the ability to make digital payments instantly from our smart-phones, but until now, to make international tuition payments, both students and educational institutions experience a high level of friction within the customer experience, manual reconciliation processes, and delays in the availability of funds to the institution, hindering students from immediate enrollment access," said Arif Harji, Chief Market Strategist at MTFX Group. "PayMyTuition AI and machine learning technology was developed specifically for educational institutions, providing them an alternative solution that can remove all the friction and restrictions that exist within current offerings, while enabling real-time settlement for the first time."

In the always-on digital environment that we live in, customers expect optimal convenience and digital solutions across the entire payment ecosystem, and the element of real-time settlement has, until now, been lacking.

PayMyTuition enables educational institution student information systems to optimize payment processing methods, giving students payment methods and timing flexibility. This technology will help institutions reduce costs, prevent errors and improve overall speed with the ability of real-time settlement. The utilization of AI and machine learning technology within the platform will also provide institutions with large and complete amounts of rich data, including student information and payment statuses, which they didn't have visibility into before, making end-to-end payment transactions simple and transparent.

PayMyTuition's real-time cross border tuition payment solution is an industry first and can be seamlessly integrated, by way of their real-time API, into most student information systems including: Banner, Colleague, PeopleSoft, Workday and Jenzabar.

The company is expanding rapidly, with plans to enable 30 educational institutions across North America with real-time tuition settlement in the next 60 days. PayMyTuition will continue working with customers across the globe to provide an unparalleled customer experience to all students while delivering significant efficiencies to institutions, all in real time.

For more information, visit http://www.paymytuition.com.

About PayMyTuition by MTFX

PayMyTuition is part of the MTFX Group of Companies, a foreign exchange and global payments solution provider with a track record of 23+ years, facilitating payments for over 8,000 corporate and institutional clients across North America.

Media Contact: Crystal Reize, PayMyTuition, [emailprotected]


SOURCE PayMyTuition

http://www.paymytuition.com
