JP Morgan expands dive into machine learning with new London research centre – The TRADE News

JP Morgan is expanding its foray into machine learning and artificial intelligence (AI) with the launch of a new London-based research centre, as it explores how it can use the technology for new trading solutions.

The US investment bank has recently launched a Machine Learning Centre of Excellence (ML CoE) in London and has hired Chak Wong, who will be responsible for overseeing a new team of machine learning engineers, technologists, data engineers and product managers.

Wong was most recently a professor at the Hong Kong University of Science and Technology, where he taught Master's- and PhD-level courses on AI and derivatives. He was also a senior quant trader at Morgan Stanley and Goldman Sachs in London.

According to JP Morgan's website, the ML CoE teams "partner across the firm to create and share Machine Learning Solutions for our most challenging business problems." The bank hopes the expansion of the machine learning centre to Europe will accelerate the deployment of the technology in regions outside of the US.

JP Morgan will look to build on the success of a similar New York-based centre it launched in 2018 under the leadership of Samik Chandarana, head of corporate and investment banking applied AI and machine learning.

These ventures include applying the technology to an optimised execution tool in FX algo trading, and developing Robotrader, a tool that uses machine learning to automate the pricing and hedging of vanilla equity options.

In November last year, JP Morgan also made a strategic investment in FinTech firm Limeglass, which deploys AI, machine learning and natural language processing (NLP) to analyse institutional research.

AI and machine learning technology has been touted to revolutionise quantitative and algorithmic trading techniques. Many believe its ability to quantify and analyse huge amounts of data will enable investors to make more informed decisions. In addition, as data sets become more complex, trading strategies are increasingly being built around new machine and deep learning tools.

Speaking at the Gaining the Edge Hedge Fund Leadership conference in New York last year, representatives from the hedge fund and allocator industries discussed the significant impact the technology will have on investment strategies and processes.

"AI and machine learning is going to raise the bar across everything. Those that are not paying attention to it now will fall behind," said one panellist from a $6 billion alternative investment manager, speaking under the Chatham House Rule.

Original post:
JP Morgan expands dive into machine learning with new London research centre - The TRADE News

UoB uses machine learning and drone technology in wildlife conservation – Education Technology

The university's new innovations could transform wildlife conservation projects around the globe

The University of Bristol (UoB) has partnered with Bristol Zoological Society (BZS) to develop a trailblazing approach to wildlife conservation, harnessing the power of machine learning and drone technology to transform conservation projects around the world.

Backed by the Cabot Institute for the Environment, BZS and EPSRC's CASCADE grant, a team of researchers travelled to Cameroon in December last year to test a number of drones, sensor technologies and deployment techniques to monitor the critically endangered Kordofan giraffe populations in Bénoué National Park.

"There has been significant and drastic decline recently of larger mammals in the park and it is vital that accurate measurements of populations can be established to guide our conservation actions," said Dr Gráinne McCabe, head of field conservation and science at BZS.

"Bénoué National Park is very difficult to patrol on foot and large parts are virtually inaccessible, presenting a huge challenge for wildlife monitoring. What's more, the giraffe are very well camouflaged and often found in small, transient groups," said Dr Caspian Johnson, conservation science lecturer at BZS.

Striving to uncover the best method for airborne wildlife monitoring, BZS reached out to Dr Matt Watson from UoB's School of Earth Sciences and Dr Tom Richardson from the University's Aerospace Department, as well as a member of the Bristol Robotics Laboratory (BRL). The team drew on its successful collaborations using drones to monitor and measure volcanic emissions to create a system for wildlife monitoring.

"A machine learning based system that we develop for the Kordofan giraffe will be applicable to a range of large mammals. Combine that with low-cost aircraft systems capable of automated deployment without the need for large open spaces to launch and land, and we will be able to make a real difference to conservation projects worldwide," said Dr Watson.

Read the original:
UoB uses machine learning and drone technology in wildlife conservation - Education Technology

AI Is Coming to a Grocery Store Near You – Built In

For the consumer packaged goods industry (CPG for short), the Super Bowl presents both an opportunity and a challenge. The National Retail Federation estimates that almost 194 million Americans will watch Super Bowl LIV. The report claims that each one will spend an average of $88.65 on food, drinks, merchandise and party supplies. Really.

To secure valuable shopping cart space, big food and beverage brands like PepsiCo, Anheuser-Busch InBev and Tyson Foods pull out all the stops, offering promotions on soda, beer and hot dogs designed to be so tempting that they stop consumers in their tracks. Once a promo's set, brands need to ensure they have the right amount of product in the right places. It's a process known as demand forecasting, where historical sales data helps estimate consumer demand. Getting that forecast right can make or break the success of a campaign.
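
The core of such a forecast is simple to sketch. The following Python example shows one plausible shape of event-aware demand forecasting: a gradient-boosted regressor trained on lagged sales plus promo and event flags. The file name, column names and "super_bowl_week" flag are invented for illustration; this is not any brand's actual pipeline.

```python
# Minimal sketch of event-aware demand forecasting (illustrative only;
# the file and column names, including "super_bowl_week", are hypothetical).
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

sales = pd.read_csv("weekly_sales.csv")  # hypothetical historical sales extract

# Feature engineering: lagged demand plus promo/event indicators.
sales["units_lag_1"] = sales["units_sold"].shift(1)
sales["units_lag_52"] = sales["units_sold"].shift(52)  # same week last year
features = ["units_lag_1", "units_lag_52", "on_promo", "super_bowl_week"]

train = sales.dropna(subset=features + ["units_sold"])
model = GradientBoostingRegressor().fit(train[features], train["units_sold"])

# Forecast next week's demand for a promoted product during the event.
next_week = train[features].tail(1).assign(on_promo=1, super_bowl_week=1)
print(f"Forecast units: {model.predict(next_week)[0]:,.0f}")
```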

Demand forecasts play an important role in a CPG brand's day-to-day operations, but they take on a special significance during events like the Super Bowl, where billions of dollars are at stake. If a forecast underestimates demand, brands cede sales to competitors with readily available products. Companies that overestimate demand run the risk of overstocking the wrong store shelves or watching inventory expire in distribution centers.

Increasingly, the brands that come out on top are victorious because of technology. At Kraft Heinz, for example, machine learning models do much of the heavy lifting to generate accurate demand forecasts for major events like the Super Bowl.

"What you got probably five, seven years ago were a lot of the consulting firms pitching you on what AI can do," said Brian Pivar, senior director of data and analytics at Kraft Heinz. "Now, you're seeing companies build these things out internally; they see the value."

For the world's biggest food and beverage brands, growth means mergers and acquisitions, with big brands often buying smaller competitors that have cornered the market on emerging trends. Acquiring a startup food brand isn't easy, but it's much less complex than merging two multinationals that manage critical sales, supply chain and manufacturing processes using customized software platforms.

That's the world Pivar stepped into when he arrived at Kraft Heinz in late 2018, three years after the merger of Kraft Foods and H.J. Heinz created the world's fifth-largest CPG brand. In the years since, the company has doubled down on artificial intelligence technologies, including machine learning. But Kraft Heinz, like many other companies in its space, is still playing catch-up.

There's a lot of opportunity to leverage AI to help us make better and smarter decisions.

Even when companies like Kraft Heinz want to move full steam ahead and incorporate the latest tech into their operations, they still face challenges. Chief among them is the ability to implement technical builds successfully.

"CPG companies don't always have strong data foundations," said Pivar. "So what you see sometimes is a data scientist spending most of their time getting and properly structuring data. Let's say 80 percent of their time is spent doing that and 20 percent is spent building ML or AI tools, instead of the reverse, which is what you want to see."

When Pivar came to Kraft Heinz, his first order of business was to develop a five-year strategy that gave leadership visibility into both his team's goals and their roadmap. Instead of hiring a crew of data scientists right off the bat, Pivar brought on data engineers to ensure that his team had the necessary foundation to build advanced analytics. The company also spent four months evaluating and testing cloud partners to find the perfect fit.

In the two years since Pivar joined Kraft Heinz, his team has built machine learning models to generate more accurate demand forecasts around major events with distinctive promotions, like the Super Bowl. Prior to Pivar's arrival, these forecasts were generated manually in Excel spreadsheets.

His team also built a tool that relies on recent sales data and inventory numbers at stores and distribution centers to predict when supermarkets will need to be resupplied and with what products, along with insights about the cause of low stock.
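
The resupply logic such a tool encodes can be illustrated with a toy calculation: estimate days of shelf cover from current inventory and recent sales velocity, and flag the store when cover drops below the resupply lead time. The numbers and threshold below are invented, not Kraft Heinz's logic.

```python
# Toy sketch of the resupply logic described above: estimate days of cover
# from recent sales velocity and current inventory. Thresholds are invented.
def days_of_cover(on_hand_units: float, recent_daily_sales: list[float]) -> float:
    """Days until the shelf runs empty at the recent average sales rate."""
    velocity = sum(recent_daily_sales) / len(recent_daily_sales)
    return float("inf") if velocity == 0 else on_hand_units / velocity

if days_of_cover(120, [35, 42, 50]) < 5:  # assumed resupply lead time in days
    print("Flag store for resupply")
```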

"We're looking across the business, from sales to our operations and supply chain teams," said Pivar. "Within all of those spaces, there's a lot of opportunity to leverage AI to help us make better and smarter decisions."

Kraft Heinz isn't the only big player in the CPG space that's incorporating AI across its business. Frito-Lay, a PepsiCo subsidiary, is working on a project that uses computer vision and a custom algorithm to optimize the potato-peeling process. Beer giant AB InBev uses machine learning to ensure compliance and fight fraud. And Tyson Foods is considering the viability of using AI-powered drones to monitor animal health and safety.

Even grocery stores are getting in on the action. Walmart has built a 50,000-square-foot store in Levittown, New York, filled with artificial intelligence technology.

Walmart's Intelligent Retail Lab, or IRL, is both a technology testbed and a fully functioning store covering 50,000 square feet. The store is filled with sensors and cameras and can automatically alert store associates when a product is out of stock, shopping carts need collecting or more registers are necessary to quell long lines. There's enough cable in IRL to scale Mt. Everest five times, and the store has enough computing power to download 27,000 hours of music per second.

CPG brands are still figuring out how best to leverage artificial intelligence, which means that, at least in the short term, the shopping experience might not change drastically. But that doesn't mean consumers won't be driving change, at least according to Shastri Mahadeo, founder and CEO of Unioncrate.

Unioncrate is a New York-based startup whose AI-powered supply-chain-planning platform generates demand forecasts based on consumer activity and the factors that impact purchasing decisions. For Mahadeo, AI has the potential to both save brands money and reduce waste by aligning production decisions with consumer demand.

"If a brand can accurately predict what a retailer is going to order based on what consumers are going to buy, then they'll produce what's needed so they don't have money tied up in working capital," said Mahadeo. "Similarly, if a retailer can accurately predict what consumers will buy, then they can stock accordingly."

In addition to streamlining back-end processes related to manufacturing and supply chain management, Pivar said that within 10 years, AI could be used to create a more personalized shopping experience, one where brands customize promotions to consumers with the same focus seen on platforms like Instagram or Facebook.

"What does that mean to CPG?" asked Pivar. "We're still figuring that out, but that's where I see things going."

View post:
AI Is Coming to a Grocery Store Near You - Built In

Google shows off far-flung A.I. research projects as calls for regulation mount – CNBC

Google senior fellow Jeff Dean speaks at a 2017 event in China.

Source: Chris Wong | Google

Artificial intelligence and machine learning are crucial to Google and its parent company Alphabet. Recently promoted Alphabet CEO Sundar Pichai has been talking about an "AI-first world" since 2016, and the company uses the technology across many of its businesses, from search advertising to self-driving cars.

But regulators are expressing concern about the technology's growing power and the lack of understanding about how AI works and what it can do. The European Union is exploring new AI regulation, including a possible temporary ban on the use of facial recognition in public, and New York Rep. Carolyn Maloney, who chairs the House Oversight and Reform Committee, recently suggested that AI regulation could be on the way in the U.S., too. Pichai recently called for "clear-eyed" AI regulation amid a rise in fake videos and abuse of facial recognition technology.

Against this backdrop, the company held an event Tuesday to showcase the positive side of AI by showing some of the long-term projects the company is working on.

"Right now, one of the problems in machine learning is we tend to tackle each problem separately," said Jeff Dean, head of Google AI, at Google's San Francisco offices Tuesday. "These long arcs of research are really important to pick fundamental important problems and continue to make progress on them."

While most of Google's projects are still years out from broad use, Dean said they are important in moving Google products along.

Here's a sampling of some of the company's more speculative and long-term AI projects:

Google's robotic kitten helps it understand locomotion.

Google's D'Kitty is a four-legged robot that the company says learned to walk on its own by studying locomotion and using machine learning techniques. Dean said he hopes Google's research and development findings will contribute to machines learning how physical hardware can function in "the real world."

Using braided electronics in soft materials, Google's artificial intelligence technology can connect gestures with media controls. One prototype showed sweatshirt drawstrings that could be twisted to adjust music volume. The user could pinch the drawstrings to play or pause connected music.

Google's tech-woven fabric can control music.
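
Conceptually, the prototype reduces to mapping discrete gesture events onto media actions. Here is a hypothetical Python sketch of that dispatch logic (not Google's implementation); the gesture names and handlers are assumptions.

```python
# Hypothetical sketch of the gesture mapping described above: discrete
# gesture events from the fabric sensor are dispatched to media actions.
from typing import Callable

def play_pause() -> None: print("toggling playback")
def volume(delta: int) -> None: print(f"volume {delta:+d}")

GESTURE_ACTIONS: dict[str, Callable[[], None]] = {
    "pinch": play_pause,
    "twist_clockwise": lambda: volume(+5),
    "twist_counterclockwise": lambda: volume(-5),
}

def on_gesture(event: str) -> None:
    action = GESTURE_ACTIONS.get(event)
    if action:
        action()

on_gesture("twist_clockwise")  # -> volume +5
```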

A new transcription feature in Google Translate will convert speech to a written transcript and will be available on Android phones at some point in the future. Natural language processing, a subset of artificial intelligence, is "of particular interest" to the company, Dean said.

Google Translate currently supports 59 languages.

Google Health announced new research Tuesday, showing that when the company's AI is applied to retinal scans, it can help determine if a patient is anemic. It can also detect diabetic eye diseases and glaucoma, Dean said. The company hopes to analyze other diseases in the future.

Google examines eye health

Google is using sensing tools to track underwater sea life. Using sound detection and artificial intelligence, the company said it can now detect orcas in real time and send messages to harbor managers to help them protect the endangered species.

Google announced Tuesday that it's teaming up with Fisheries and Oceans Canada (DFO) and Rainforest Connection to track critically endangered Southern Resident killer whales in Canada. The company's also in the early stages of working with the Monterey Bay Aquarium to help detect species in the nearby ocean.

Google's artificial intelligence can detect certain sea animals based on sounds.
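
The detect-and-alert loop described above can be sketched in a few lines. The classifier below is a dummy stand-in (a real system would use a trained audio model), and the windowing, threshold and notification hook are all assumptions, not Google's implementation.

```python
# Conceptual sketch of a detect-and-alert loop: score hydrophone audio
# windows with a classifier and notify when confidence crosses a threshold.
import numpy as np

def orca_probability(audio_window: np.ndarray) -> float:
    """Dummy stand-in for a trained audio classifier over a spectrum."""
    spectrum = np.abs(np.fft.rfft(audio_window))
    return float(spectrum[10:40].sum() / (spectrum.sum() + 1e-9))  # toy score

def monitor(stream, notify, threshold: float = 0.8) -> None:
    """Scan successive audio windows and raise an alert on likely detections."""
    for window in stream:  # e.g. consecutive 2-second hydrophone chunks
        if orca_probability(window) > threshold:
            notify("Probable orca detected; alerting harbor managers")
```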

Google's working on a project called MediaPipe, which analyzes video of bodily movements including hand tracking. Dean said the company hopes to read and analyze sign language.

"Video is the next logical frontier for a lot of this work" Dean said.

Google is working on an AI project that detects sign language.

See the original post:
Google shows off far-flung A.I. research projects as calls for regulation mount - CNBC

Blue Prism Adds Conversational AI, Automated Machine Learning and Integration with Citrix to its Digital Workforce – PRNewswire

LONDON and AUSTIN, Texas, Jan. 29, 2020 /PRNewswire/ -- Looking to empower enterprises with the latest and most innovative intelligent automation solutions, Blue Prism (AIM: PRSM) today announced the addition of DataRobot, ServisBOT and Ultima to its Technology Alliance Program (TAP) as affiliate partners. These partners extend Blue Prism's reach by making their software accessible to customers via Blue Prism's Digital Exchange (DX), an intelligent automation "app store" and online community.

Blue Prism's DX is unique in that new intelligent automation capabilities are added to the forum every week, which has resulted in tens of thousands of assets being downloaded and makes it an ideal online community for augmenting and extending traditional RPA deployments. The latest capabilities on the DX include conversational AI (working with chatbots), automated machine learning and new integrations with Citrix. With just a few clicks, users can drag and drop these new capabilities into Blue Prism's Digital Workforce, with no coding required.

"Blue Prism's vision of providing a Digital Workforce for Every Enterprise is extended with our DX community, which continues to push the boundaries of intelligent automation," says Linda Dotts, SVP Global Partner Strategy and Programs for Blue Prism. "Our DX ecosystem is the catalyst and cornerstone for driving broader innovations with our Digital Workforce. It provides everyone with an a la carte menu of automation options that are drag and drop easy to use."

Below is a quick summary of the new capabilities being brought to market by these TAP affiliate partners:

DataRobot: The integration of DataRobot with Blue Prism provides enterprises with the intelligent automation needed to transform business processes at scale. By combining RPA with AI, the integration automates data-driven predictions and decisions to improve the customer experience, as well as process efficiencies and accuracy. The resulting business process improvements help move the bottom line for businesses by removing repetitive, replicable, and routine tasks for knowledge workers so they can focus on more strategic work.

"The powerful combination of RPA with AI what we call intelligent process automation unlocks tremendous value for enterprises who are looking to operationalize AI projects and solve real business problems," says Michael Setticasi, VP of Alliances at DataRobot. "Our partnership with Blue Prism will extend our ability to deliver intelligent process automation to more customers and drive additional value to global enterprises."

ServisBOT: ServisBOT offers the integration of an insurance-focused chatbot solution with Blue Prism's Robotic Process Automation (RPA), enabling customers to file an insurance claim with their provider using the convenience and 24/7 availability of a chatbot. This integration with ServisBOT's natural language technology adds a claims chatbot skill to the Blue Prism platform, helping insurance companies increase efficiencies and reduce costs across the complete claims management journey and within a Blue Prism defined workflow.

"Together we are providing greater efficiencies in managing insurance claims through chatbots combined with AI-powered automation," says Cathal McGloin, CEO of ServisBOT. "This drives down operational costs while elevating a positive customer experience through faster claims resolution times and reduced friction across all customer interactions."

Ultima: The integration of Ultima IA-Connect with Blue Prism enables fast, secure automation of business processes over Citrix Cloud and Citrix Virtual Apps and Desktops sessions (formerly known as XenApp and XenDesktop). The new IA-Connect tool allows users to automate processes across Citrix ICA or Microsoft RDP virtual channels, without needing to resort to screen scraping or surface automation.

"We know customers who decided not to automate because they were nervous about using cloud-based RPA or because running automations over Citrix was simply too painful," says Scott Dodds, CEO of Ultima. "We've addressed these concerns, with IA-Connect now available on the DX. It gives users the ability to automate their business processes faster while helping reduce overall maintenance and support costs."

Joining the TAP is easier than ever with a new self-serve function on the Digital Exchange itself. To find out more, please visit: https://digitalexchange.blueprism.com/site/global/partner/index.gsp

About Blue Prism

Blue Prism's vision is to provide a Digital Workforce for Every Enterprise. The company's purpose is to unleash the collaborative potential of humans, operating in harmony with a Digital Workforce, so every enterprise can exceed their business goals and drive meaningful growth, with unmatched speed and agility.

Fortune 500 and public-sector organizations, among customers across 70 commercial sectors, trust Blue Prism's enterprise-grade connected-RPA platform, which has users in more than 170 countries. By strategically applying intelligent automation, these organizations are creating new opportunities and services, while unlocking massive efficiencies that return millions of hours of work back into their business.

Available on-premises, in the cloud, hybrid, or as an integrated SaaS solution, Blue Prism's Digital Workforce automates ever more complex, end-to-end processes that drive a true digital transformation, collaboratively, at scale and across the entire enterprise.

Visit http://www.blueprism.com to learn more or follow Blue Prism on Twitter @blue_prism and on LinkedIn.

© 2020 Blue Prism Limited. "Blue Prism", "Thoughtonomy", the "Blue Prism" logo and Prism device are either trademarks or registered trademarks of Blue Prism Limited and its affiliates. All Rights Reserved.

SOURCE Blue Prism

https://www.blueprism.com

More:
Blue Prism Adds Conversational AI, Automated Machine Learning and Integration with Citrix to its Digital Workforce - PRNewswire

Parascript Ushers in New Era of Check Processing With Latest CheckXpert.Ai – AiThority

Parascript follows up its groundbreaking deep-learning-based CheckXpert.AI product with new features for international markets

Parascript, which offers intelligent capture software that processes over 100 billion documents annually, announced the availability of several new payment-related automation products, including CheckXpert.AI 1.1 and CheckXpert.AI United Kingdom (UK). Each is based on completely new machine learning algorithms, including deep learning neural networks, that deliver better-than-human speed and accuracy.

"As banks transform branches to support higher-value interactions, automation of transactional, low-value tasks becomes more important than ever," said Greg Council, Parascript's Vice President of Marketing and Product Management. "Our family of CheckXpert.AI products offers the financial industry the ability to continue to improve customer experience and supports a multi-channel strategy while also reducing costs."

CheckXpert.AI 1.1 now supports reading the payee line on both personal and business checks, enabling new applications, including compliance and fraud prevention, where the identity of the payee is critical. This includes Anti-Money Laundering (AML) efforts using blacklists and payee matching.

Beyond compliance, access to the payee enables use cases where a pre-defined list of payees is not available. Through new machine learning algorithms, the ability to extract the payee line offers high-quality data without the need to pre-configure the system with payee database information.
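
As an illustration of how an extracted payee line might feed AML screening, here is a small Python sketch that fuzzily matches a payee against a blacklist. The entries and threshold are hypothetical, and this is not Parascript's code.

```python
# Illustrative sketch of payee screening: compare an extracted payee line
# against an AML blacklist using fuzzy string matching (all data invented).
from difflib import SequenceMatcher

BLACKLIST = ["ACME SHELL HOLDINGS", "JOHN Q FRAUDSTER"]  # hypothetical entries

def screen_payee(payee: str, threshold: float = 0.85) -> list[str]:
    """Return blacklist entries whose similarity to the payee exceeds threshold."""
    payee = payee.upper().strip()
    return [entry for entry in BLACKLIST
            if SequenceMatcher(None, payee, entry).ratio() >= threshold]

print(screen_payee("Acme Shell Holdings Ltd"))  # likely hit despite the suffix
```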

CheckXpert.AI UK now provides the same level of human-like performance as CheckXpert.AI. Parascript is also announcing the planned availability of two more CheckXpert.AI products for the first half of 2020. These products will provide high performance for Canada and Brazil.

"CheckXpert.AI is the game changer for banking customers looking to satisfy their remote deposit and branch transformation needs," said Ati Azemoun, Vice President of Business Development at Parascript. "CheckXpert.AI frees bank tellers to do the real work of improving the customer experience and meeting their customers' more complex financial needs, while bank operations can capture valuable data for all payment document transactions from all channels."

Today, the CheckXpert.AI family offers the industry's highest-accuracy check recognition. By leveraging Parascript's proprietary deep learning algorithms, CheckXpert.AI processes checks in a significantly smarter, more human-like way. CheckXpert.AI takes care of the full stream of documents for Proof of Deposit (POD) and Remittance applications. This includes:

In addition, CheckXpert.AI automatically locates and recognizes:

More:
Parascript Ushers in New Era of Check Processing With Latest CheckXpert.Ai - AiThority

O'Reilly and Formulatedby Unveil the Smart Cities & Mobility Ecosystems Conference – Yahoo Finance

Conference to showcase the practical, real-life enterprise use of data science, machine learning, AI, IoT, and open data in cities and mobility industries

O'Reilly, the premier source for insight-driven learning on technology and business, and Formulatedby today announced a new conference focused on how machine learning is transforming the future of urban communities and mobility industries around the world. The inaugural Smart Cities & Mobility Ecosystems (SCME) conference will take place in Phoenix, AZ from April 15-16, 2020, followed by a second event in Miami, FL from June 3-4, 2020.

Rapid technological advancements are challenging cities and the mobility industry with new business models, methodologies in development and manufacturing, unprecedented levels of automation, and the need for new infrastructure. From predictive analytics to policy, the Smart Cities & Mobility Ecosystems conference examines the role of governments, enterprises, and individuals in driving positive change as communities become increasingly connected.

"How we plan, build, and improve our cities has fundamentally changed, driven by powerful new technologies that can make life better for all the constituencies cities hope to serve," said Roger Magoulas, VP of Radar at OReilly and chair of the Smart Cities & Mobility Ecosystems conference. "This conference helps take the pulse of what we expect to change and what is possible for communities and mobility over the coming years."

The focused event brings together enterprise practitioners, technical experts, and executives to discuss how data, artificial intelligence (AI), machine learning, and cutting-edge technologies impact the future of our communities. Attendees can also workshop real-world applications of deep learning, sensor fusion, data processing and AI, automotive camera technology and computer vision algorithms, and reinforcement learning.

"The conversation around AI and ML has moved mainstream in applications like Smart Cities and Mobility Ecosystems," said Anna Anisin, founder and CEO at Formulatedby. "We're excited to collaborate with OReilly to connect our audience of ML practitioners and executives with the policymakers and stakeholders who will participate in taking this technology to the next level to improve lives at scale."

Key speakers at the Smart Cities & Mobility Ecosystems conference in Phoenix include:

Key speakers at the Smart Cities & Mobility Ecosystems conference in Miami include:

Registration for the upcoming Smart Cities and Mobility Ecosystems conference is now open for Phoenix and Miami. A limited number of media passes are also available for qualified journalists and analysts. Please contact info@formulated.by for media or analyst registration. Follow #SCME on Twitter for the latest news and updates.

About Formulatedby

Formulatedby is a marketing agency specializing in building data science, machine learning and AI communities. Female-owned and formulated in Miami, it's best known for the Data Science Salon, a vertically focused conference series around AI and ML, and for working throughout the technology landscape in B2B enterprise marketing and experiential marketing. For more information, visit formulated.by.

About O'Reilly

For 40 years, O'Reilly has provided technology and business training, knowledge, and insight to help companies succeed. Our unique network of experts and innovators share their knowledge and expertise at O'Reilly conferences and through the company's SaaS-based training and learning solution, O'Reilly online learning. O'Reilly delivers highly topical and comprehensive technology and business learning solutions to millions of users across enterprise, consumer, and university channels. For more information, visit http://www.oreilly.com.

View source version on businesswire.com: https://www.businesswire.com/news/home/20200129005576/en/

Contacts

Allison Stokes, fama PR for O'Reilly, 617-986-5010, OReilly@famapr.com

Here is the original post:
O'Reilly and Formulatedby Unveil the Smart Cities & Mobility Ecosystems Conference - Yahoo Finance

An Open Source Alternative to AWS SageMaker – Datanami

(Robert Lucian Crusitu/Shutterstock)

There's no shortage of resources and tools for developing machine learning algorithms. But when it comes to putting those algorithms into production for inference, outside of AWS's popular SageMaker, there's not a lot to choose from. Now a startup called Cortex Labs is looking to seize the opportunity with an open source tool designed to take the mystery and hassle out of productionalizing machine learning models.

Infrastructure is almost an afterthought in data science today, according to Cortex Labs co-founder and CEO Omer Spillinger. A ton of energy is going into choosing how to attack problems with data (why, use machine learning, of course!). But when it comes to actually deploying those machine learning models into the real world, it's relatively quiet.

"We realized there are two really different worlds to machine learning engineering," Spillinger says. "There's the theoretical data science side, where people talk about neural networks and hidden layers and back propagation and PyTorch and TensorFlow. And then you have the actual systems side of things, which is Kubernetes and Docker and Nvidia and running on GPUs and dealing with S3 and different AWS services."

Both sides of the data science coin are important to building useful systems, Spillinger says, but it's the development side that gets most of the glory. AWS has captured a good chunk of the market with SageMaker, which the company launched in 2017 and which has been adopted by tens of thousands of customers. But aside from a handful of vendors working in the area, such as Algorithmia, the general model-building public has been forced to go it alone when it comes to inference.

A few years removed from UC Berkeley's computer science program and eager to move on from their tech jobs, Spillinger and his co-founders were itching to build something good. So when it came to deciding what to do, they stuck with what they knew: working with systems.

(bluebay/Shutterstock.com)

"We thought that we could try and tackle everything," he says. "We realized we're probably never going to be that good at the data science side, but we know a good amount about the infrastructure side, so we can help people who actually know how to build models get them into their stack much faster."

Cortex Labs' software begins where the development cycle leaves off. Once a model has been created and trained on the latest data, Cortex Labs steps in to handle the deployment into customers' AWS accounts using its Kubernetes engine (AWS is the only supported cloud at this time; on-prem inference clusters are not supported).

"Our starting point is a trained model," Spillinger says. "You point us at a model, and we basically convert it into a Web API. We handle all the productionalization challenges around it."
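
The underlying idea, wrapping a trained model in a web API, can be sketched with a generic framework. The example below uses FastAPI purely for illustration; it is not Cortex's actual interface, and the model file is hypothetical.

```python
# Minimal sketch of "model in, web API out", using FastAPI for illustration.
# This is not Cortex's interface; "model.joblib" is a hypothetical artifact.
import joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
model = joblib.load("model.joblib")  # hypothetical trained scikit-learn model

class Features(BaseModel):
    values: list[float]

@app.post("/predict")
def predict(features: Features) -> dict:
    # Wrap a single feature vector and return the model's prediction.
    return {"prediction": float(model.predict([features.values])[0])}

# Run with: uvicorn main:app --host 0.0.0.0 --port 8080
```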

That could mean shifting inference workloads from CPUs to GPUs in the AWS cloud, or vice versa. It could mean automatically spinning up more AWS servers under the hood when calls to the ML inference service are high, and spinning the servers down when demand starts to drop. On top of its built-in AWS cost-optimization capabilities, the Cortex Labs software logs and monitors all activities, which is a requirement in today's security- and regulatory-conscious climate.
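
The scaling behavior can be approximated by a simple heuristic: size the replica count to current traffic and clamp it to a safe range. This is assumed logic for illustration, not Cortex's implementation.

```python
# Back-of-the-envelope sketch of request-based autoscaling (assumed logic):
# size the replica count to current traffic, clamped to a safe range.
import math

def desired_replicas(requests_per_sec: float,
                     per_replica_capacity: float,
                     min_replicas: int = 1,
                     max_replicas: int = 20) -> int:
    """Scale replicas to demand, never dropping below or above the limits."""
    needed = math.ceil(requests_per_sec / per_replica_capacity)
    return max(min_replicas, min(max_replicas, needed))

print(desired_replicas(450, 50))  # -> 9 replicas at 50 req/s per replica
```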

"Cortex Labs is a tool for scaling real-time inference," Spillinger says. "It's all about scaling the infrastructure under the hood."

Cortex Labs delivers a command line interface (CLI) for managing deployments of machine learning models on AWS

"We don't help at all with the data science," Spillinger says. "We expect our audience to be a lot better than us at understanding the algorithms and understanding how to build interesting models and understanding how they affect and impact their products. But we don't expect them to understand Kubernetes or Docker or Nvidia drivers or any of that. That's what we view as our job."

The software works with a range of frameworks, including TensorFlow, PyTorch, scikit-learn, and XGBoost, and the company is open to supporting more. "There's going to be lots of frameworks that data scientists will use, so we try to support as many of them as we can," Spillinger says.

Cortex Labs' software knows how to take advantage of EC2 spot instances, and integrates with AWS services like Elastic Kubernetes Service (EKS), Elastic Container Service (ECS), Lambda, and Fargate. The Kubernetes management alone may be worth the price of admission.

"You can think about it as a Kubernetes that's been massaged for the data science use case," Spillinger says. "There's some similarities to Kubernetes in the usage. But it's a much higher level of abstraction because we're able to make a lot of assumptions about the use case."

There's a lack of publicly available tools for productionalizing machine learning models, but that's not to say they don't exist. The tech giants, in particular, have been building their own platforms for doing just this. Airbnb, for instance, has its BigHead offering, while Uber has talked about its system, called Michelangelo.

"But the rest of the industry doesn't have these machine learning infrastructure teams, so we decided we'd basically try to be that team for everybody else," Spillinger says.

Cortex Labs' software is distributed under an open source license and is available for download from its GitHub page. Making the software open source is critical, Spillinger says, because of the need for standards in this area. There are proprietary offerings in this arena, but they don't have a chance of becoming the standard, whereas Cortex Labs does.

"We think that if it's not open source, it's going to be a lot more difficult for it to become a standard way of doing things," Spillinger says.

Cortex Labs isn't the only company talking about the need for standards in the machine learning lifecycle. Last month, Cloudera announced its intention to push for standards in machine learning operations, or MLOps. Anaconda, which develops a data science platform, is also backing the push for standards.

Eventually, the Oakland, California-based company plans to develop a managed service offering based on its software, Spillinger says. But for now, the company is eager to get the tool into the hands of as many data scientists and machine learning engineers as it can.

Related Items:

It's Time for MLOps Standards, Cloudera Says

Machine Learning Hits a Scaling Bump

Inference Emerges As Next AI Challenge

Read more here:
An Open Source Alternative to AWS SageMaker - Datanami

How Machine Learning Will Lead to Better Maps – Popular Mechanics

Despite being one of the richest countries in the world, Qatar lags behind on digital maps. While the country is adding new roads and constantly improving old ones in preparation for the 2022 FIFA World Cup, Qatar isn't a high priority for the companies that actually build out maps, like Google.

"While visiting Qatar, weve had experiences where our Uber driver cant figure out how to get where hes going, because the map is so off," Sam Madden, a professor at MIT's Department of Electrical Engineering and Computer Science, said in a prepared statement. "If navigation apps dont have the right information, for things such as lane merging, this could be frustrating or worse."

Madden's solution? Quit waiting around for Google and feed machine learning models a whole buffet of satellite images. It's faster, cheaper, and way easier to obtain satellite images than it is for a tech company to drive around grabbing street-view photos. The only problem: Roads can be occluded by buildings, trees, or even street signs.

So Madden, along with a team composed of computer scientists from MIT and the Qatar Computing Research Institute, came up with RoadTagger, a new piece of software that can use neural networks to automatically predict what roads look like behind obstructions. It's able to guess how many lanes a given road has and whether it's a highway or residential road.

RoadTagger uses a combination of two kinds of neural nets: a convolutional neural network (CNN), which is mostly used in image processing, and a graph neural network (GNN), which helps to model relationships and is useful with social networks. This system is what the researchers call "end-to-end," meaning it's only fed raw data and there's no human intervention.

First, raw satellite images of the roads in question are fed into the convolutional neural network. Then, the graph neural network divides the roadway into 20-meter sections called "tiles." The CNN pulls out relevant road features from each tile and shares that data with the other nearby tiles, so information about the road propagates across every tile. If one tile is covered by an obstruction, RoadTagger can look to the other tiles to predict what lies in the occluded one.
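
A schematic sketch of that CNN-plus-GNN combination, in PyTorch, might look like the following. The layer sizes, tile dimensions and chain-graph adjacency are invented; the real RoadTagger architecture differs.

```python
# Schematic PyTorch sketch of the CNN + GNN combination described above
# (shapes and sizes are invented; the real RoadTagger model differs).
import torch
import torch.nn as nn

class TileEncoder(nn.Module):
    """CNN that turns each 20-meter satellite tile into a feature vector."""
    def __init__(self, dim: int = 64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, dim))

    def forward(self, tiles):          # (num_tiles, 3, H, W)
        return self.conv(tiles)        # (num_tiles, dim)

def propagate(features, adjacency, steps: int = 3):
    """Average-neighbor message passing so occluded tiles borrow context."""
    deg = adjacency.sum(1, keepdim=True).clamp(min=1)
    for _ in range(steps):
        features = 0.5 * features + 0.5 * (adjacency @ features) / deg
    return features

encoder = TileEncoder()
head = nn.Linear(64, 6)                    # e.g. predict 1-6 lanes per tile
tiles = torch.randn(5, 3, 64, 64)          # 5 consecutive road tiles (fake)
adj = torch.diag(torch.ones(4), 1) + torch.diag(torch.ones(4), -1)  # chain graph
lane_logits = head(propagate(encoder(tiles), adj))
print(lane_logits.shape)                   # torch.Size([5, 6])
```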

Parts of the roadway may only have two lanes in a given tile. While a human can easily tell that a four-lane road, shrouded by trees, may be blocked from view, a computer normally couldn't make such an assumption. RoadTagger creates a more human-like intuition in a machine learning model, the research team says.

"Humans can use information from adjacent tiles to guess the number of lanes in the occluded tiles, but networks cant do that," Madden said. "Our approach tries to mimic the natural behavior of humans ... to make better predictions."

The results are impressive. In testing out RoadTagger on occluded roads in 20 U.S. cities, the model correctly counted the number of lanes 77 percent of the time and inferred the correct road types 93 percent of the time. In the future, the team hopes to include other new features, like the ability to identify parking spots and bike lanes.

See more here:
How Machine Learning Will Lead to Better Maps - Popular Mechanics

Machine Learning Could Aid Diagnosis of Barrett’s Esophagus, Avoid Invasive Testing – Medical Bag

A risk prediction model consisting of 8 independent diagnostic variables, including age, sex, waist circumference, stomach pain frequency, cigarette smoking, duration of heartburn and acidic taste, and current history of antireflux medication use, can provide potential insight into a patient's risk for Barrett's esophagus before endoscopy, according to a study published in The Lancet Digital Health.

The study assessed data from 2 prior case-control studies: BEST2 (ISRCTN Registry identifier: 12730505) and BOOST (ISRCTN Registry identifier: 58235785). Questionnaire data were assessed from the BEST2 study, which included responses from 1299 patients, of whom 67.7% (n=880) had Barrett's esophagus, defined as endoscopically visible columnar-lined oesophagus (Prague classification C1 or M3) with histopathological evidence of intestinal metaplasia on at least one biopsy sample. An algorithm was used to randomly divide (6:4) the cohort into a training data set (n=776) and a testing data set (n=523). A total of 398 patients from the BOOST study, including 198 with Barrett's esophagus, were included in this analysis as an external validation cohort. Another 200 control individuals were also included from the BOOST study.

Researchers used a univariate approach called information gain, as well as a correlation-based feature selection. These 2 machine learning filter techniques were used to identify independent diagnostic features of Barretts esophagus. Multiple classification tools were assessed to create a multivariable risk prediction model. The BEST2 testing data set was used for internal validation of the model, whereas the BOOST external validation data set was used for external validation.

In the BEST2 study, the investigators identified a total of 40 diagnostic features of Barrett's esophagus. Although 19 of these features added information gain, only 8 features demonstrated independent diagnostic value after correlation-based feature selection. The 8 diagnostic features associated with an increased risk for Barrett's esophagus were age, sex, cigarette smoking, waist circumference, frequency of stomach pain, duration of heartburn and acidic taste, and receiving antireflux medication.

The upper estimate of the predictive value of the model, which included these 8 features, had an area under the curve (AUC) of 0.87 (95% CI, 0.84-0.90; sensitivity set at 90%; specificity, 68%). In addition, the testing data set demonstrated an AUC of 0.86 (95% CI, 0.83-0.89; sensitivity set at 90%; specificity, 65%), and the external validation data set featured an AUC of 0.81 (95% CI, 0.74-0.84; sensitivity set at 90%; specificity, 58%).
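
To make the sensitivity/specificity reporting concrete, here is a hedged scikit-learn sketch of the general approach: fit a multivariable model on 8 features, compute the AUC, then pick the decision threshold that achieves 90% sensitivity and read off the implied specificity. The data below is synthetic; the published figures come from the BEST2 and BOOST cohorts.

```python
# Hedged sketch of a multivariable risk model with the operating point set
# for 90% sensitivity. All data here is synthetic, not the study's data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))  # 8 diagnostic features (synthetic)
y = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0] - 0.5 * X[:, 1])))  # synthetic labels

model = LogisticRegression().fit(X, y)
scores = model.predict_proba(X)[:, 1]
print(f"AUC: {roc_auc_score(y, scores):.2f}")

# Choose the lowest point on the ROC curve with at least 90% sensitivity,
# then report the specificity that threshold implies (as the paper does).
fpr, tpr, thresholds = roc_curve(y, scores)
idx = np.argmax(tpr >= 0.90)
print(f"Threshold {thresholds[idx]:.2f} -> sensitivity {tpr[idx]:.0%}, "
      f"specificity {1 - fpr[idx]:.0%}")
```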

The study was limited by the fact that it collected data solely from at-risk patients, which enriched the overall cohorts for patients with Barrett's esophagus.

The researchers concluded that the risk prediction panels generated from this study would be easy to implement into medical practice, allowing patients to enter their symptoms into a smartphone app and receive an immediate risk factor analysis. After receiving results, the authors suggest, these data could then be uploaded to a central database (eg, in the cloud) that would be updated after that person sees their medical professional.

Reference

Rosenfeld A, Graham DG, Jevons S, et al; BEST2 study group. Development and validation of a risk prediction model to diagnose Barrett's oesophagus (MARK-BE): a case-control machine learning approach [published online December 5, 2019]. Lancet Digit Health. doi:10.1016/S2589-7500(19)30216-X

Read the rest here:
Machine Learning Could Aid Diagnosis of Barrett's Esophagus, Avoid Invasive Testing - Medical Bag