Nudges and machine learning triples advanced care conversations – Penn Today

An electronic nudge to clinicians, triggered by an algorithm that used machine learning methods to flag patients with cancer who would most benefit from a conversation around end-of-life goals, tripled the rate of those discussions, according to a new prospective, randomized study of nearly 15,000 patients from Penn Medicine published in JAMA Oncology.

Early and frequent conversations with patients suffering from serious illness, particularly cancer, have been shown to increase satisfaction, quality of life, and care that's consistent with their values and goals. However, many patients do not get the opportunity to have those discussions with a physician or loved ones because their disease has progressed too far and they're too ill.

"Within and outside of cancer, this is one of the first real-time applications of a machine learning algorithm paired with a prompt to actually help influence clinicians to initiate these discussions in a timely manner, before something unfortunate may happen," says co-lead author Ravi B. Parikh, an assistant professor of medical ethics and health policy and medicine in the Perelman School of Medicine and a staff physician at the Corporal Michael J. Crescenz VA Medical Center. "And it's not just high-risk patients. It nearly doubled the number of conversations for patients who weren't flagged, which tells us it's eliciting a positive cultural change across the clinics to have more of these talks."

Christopher Manz, of the Dana Farber Cancer Institute, who was a fellow in the Penn Center for Cancer Care Innovation at the time of the study, serves as co-lead author.

In a separate JAMA Oncology study, the research team validated the effectiveness of the Penn Medicine-developed machine learning tool at predicting short-term mortality in patients in real time, using clinical data from the electronic health record. The algorithm considers more than 500 variables (age, hospitalizations, and co-morbidities, for example) from patient records, all the way up until the patient's appointment. That is one of the advantages of using the EHR to identify patients who may benefit from a timely conversation.
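The study does not publish the model's internals, but the general pattern it describes, a classifier over structured EHR features whose predicted risk triggers the nudge when it crosses a threshold, can be sketched as follows. The synthetic data, the feature weights, and the 0.5 threshold are all illustrative assumptions, not details from the Penn tool.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for structured EHR features
# (e.g. age, prior hospitalizations, comorbidity count), standardized.
X = rng.normal(size=(1000, 3))
true_w = np.array([0.8, 0.5, 0.3])
# Synthetic short-term mortality label, loosely driven by the features.
y = (X @ true_w + 0.3 * rng.normal(size=1000) > 0.5).astype(float)

# Fit a logistic regression by plain gradient descent.
w, b = np.zeros(3), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

def flag_for_conversation(features, threshold=0.5):
    """Return True if predicted risk exceeds the (assumed) nudge threshold."""
    risk = 1.0 / (1.0 + np.exp(-(features @ w + b)))
    return bool(risk >= threshold)

print(flag_for_conversation(np.array([2.0, 2.0, 2.0])))    # clearly high-risk point
print(flag_for_conversation(np.array([-2.0, -2.0, -2.0]))) # clearly low-risk point
```

In the real system the flag would be surfaced to the clinician inside the EHR ahead of the appointment, rather than printed.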

Read more at Penn Medicine News.


Mastercard Says Its AI and Machine Learning Solutions Aim to Stop Fraudulent Activities which Have Increased Significantly due to COVID – Crowdfund…

Ajay Bhalla, President, Cyber and Intelligence Solutions, Mastercard, notes that artificial intelligence (AI) algorithms are part of the payment company's first line of defense in protecting the more than 75 billion transactions that Mastercard processes on its network every year.

Bhalla recently revealed the different ways that Mastercard applies its AI expertise to solve some of the most pressing global challenges, from cybersecurity to healthcare, and to address the impact the COVID-19 pandemic has had on the way we conduct our lives and interact with those around us.

Cybersecurity fraud rates have reached record highs, with nearly 50% of businesses now claiming that they may have been targeted by cybercriminals during the past two years. Fraudulent activities carried out via the Internet may have increased significantly due to the Coronavirus crisis, because many more people are conducting transactions online.

Mastercard aims to protect consumers from becoming victims of online fraud. The payments company has added AI-based algorithms to its network's multi-layered security strategy, allowing the network to run a coordinated set of AI-enabled services that respond within milliseconds to potential online security threats. Last year, Mastercard reportedly helped prevent around $20 billion in fraud via its AI-enhanced systems (which include SafetyNet, Decision Intelligence and Threat Scan).
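Mastercard does not disclose how these layers are wired together, but the general shape of such a system (cheap deterministic rules first, then a model-based risk score, all inside a tight latency budget) can be sketched like this. Every field name, threshold, and scoring rule below is an assumption for illustration, not Mastercard's actual SafetyNet or Decision Intelligence logic.

```python
import time

def rules_layer(txn):
    """Cheap deterministic checks that run first."""
    if txn["amount"] > 10_000:
        return "review"
    if txn["country"] not in txn["cardholder_countries"]:
        return "review"
    return "pass"

def model_score(txn):
    """Stand-in for an ML risk score in [0, 1]."""
    score = min(txn["amount"] / 10_000, 1.0) * 0.5
    score += 0.4 if txn["is_card_not_present"] else 0.0
    return score

def decide(txn, block_threshold=0.8):
    """Run both layers and report the verdict plus elapsed time."""
    start = time.perf_counter()
    verdict = rules_layer(txn)
    if verdict == "pass" and model_score(txn) >= block_threshold:
        verdict = "block"
    elapsed_ms = (time.perf_counter() - start) * 1000
    return verdict, elapsed_ms

txn = {"amount": 9500, "country": "GB",
       "cardholder_countries": {"GB"}, "is_card_not_present": True}
verdict, ms = decide(txn)
print(verdict)  # block: score 0.475 + 0.4 = 0.875 >= 0.8
```

The point of the layering is that the rules layer resolves most traffic in microseconds, leaving the model's millisecond-scale budget for the ambiguous remainder.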

In statements shared with Arab News, Bhalla noted:

"One of the impacts of this pandemic is the rapid migration to digital technologies. Recent data shows that we vaulted five years forward in digital adoption, both consumer and business, in a matter of eight weeks. Whether it's online shopping, contactless payments or banks transitioning to remote sales and service teams, this trend is here to stay. It is not the new normal, it is the next normal."

Bhalla also mentioned that with many more consumers interacting and performing transactions via the Internet, we're now creating large amounts of data. He revealed that by 2025 we'll be creating approximately 463 exabytes of data per day, and this number is going to keep increasing rapidly.

He further noted that more professionals are now working from the comfort of their home and that this may have also opened new doors for cybercriminals and hackers.

He remarked:

"The current crisis is breeding fear, anxiety and stress, with people understandably worried about their health, safety, family and jobs. Unfortunately, that creates a fertile breeding ground for criminals preying on those insecurities, resulting in more cyberattacks and fraud."

He confirmed that Mastercard's NuData tech has seen cyberattacks increase in both volume and sophistication, with around one in every three online attacks now able to closely emulate human behavior.

Bhalla claims that Mastercard has made considerable investments in AI for over a decade and has added AI capabilities to all key parts of its business operations.

He also noted:

"Our AI and machine learning solutions stop fraud, reduce credit risk, fight financial crime, prevent health care fraud and so much more. In health care, we're working with organizations on cyber assessments to help safeguard their cyber systems, staff and patients at this challenging time. In retail, criminals are increasingly targeting digital channels as we shift to shopping online."

He revealed that Card Not Present fraud currently accounts for about 90% of all fraudulent activities carried out via online platforms; this type of fraud accounted for 75% of all Internet fraud before COVID, Bhalla said. He claims that Mastercard's AI was able to rapidly learn this new behavior and adjust its scoring to reflect the new pattern, delivering stronger performance during the pandemic.


Learn about data science and coding with Fei Fei, the hero from the Netflix Original, ‘Over the Moon’ – Microsoft

This summer, Microsoft launched the Global Skills Initiative, aimed at helping 25 million people worldwide acquire new digital skills. Since that announcement, we've helped 10 million people gain skills to better navigate digital transformation.

We believe it's imperative to give everyone who wants it access to learning the technology that powers the digital economy. Those who create technology will shape our future, and there shouldn't be barriers to learning the skills required to do so. We're helping prepare today's learners for the jobs of tomorrow in multiple technical fields, from development to data science, machine learning, and more. Our goal is to ignite learners' passion to solve important problems relevant to their lives, families and communities.

One way we bring that goal to life is through story-driven partnerships with leading creators like Netflix. We began that journey with NASA, Wonder Woman 1984 and the Smithsonian Learning Labs. And now, we're excited to release a new learning experience featuring a young female hero who has a passion for science and is empowered by her intelligence to explore space! This has already been an exciting week for space developments at Microsoft; now we want to help you explore space with some new learning experiences.

Launching today

Inspired by the new Netflix Original, Over the Moon, today we're launching three new Microsoft Learn modules that guide learners through beginning concepts in data science, machine learning and artificial intelligence. The new Explore Space with Over the Moon learning path includes three parts:

These modules use Visual Studio Code and Azure Cognitive Services, so learners walk away with practical skills for the careers of tomorrow. Though Over the Moon features a young hero, the storyline and technical learning aspects have broad appeal for upskilling professionals and post-secondary students alike. Some coding skills are recommended but not required to progress. For more details on the tech in these lessons, check out our Azure Developer Community blog post.

"Netflix is excited to partner with Microsoft to bring some of the challenges of space travel that Fei Fei overcame in Over The Moon to life with real-world technical application in this new Microsoft Learn path." – Magno Herran, head of UCAN Marketing Partnerships, Netflix

The movie's story takes place in a beautifully animated universe and tackles problems real-life space engineers face. Over the Moon is a film about Fei Fei, a girl who builds her own space rocket and uses her creativity, resourcefulness and imagination to reach the moon. With its diverse cast and young female protagonist, the film brings an inclusivity to STEM (science, technology, engineering and math) that's so important in inspiring upskilling pros and new learners alike. We hope to inspire students, career changers, and even expert coders to learn something new, because anyone can pursue their dreams, no matter how out of this world they may seem. Explore more about Over the Moon on Microsoft's InCulture experience.

Want to hear what the actors of Over the Moon said about new skilling experiences? Start here:

"This is such an important movie, because Fei Fei's determination and passion for science is shared by millions of girls and women around the world." – Cathy Ang


"This is an inspiring story of determination and making dreams come true through the love and support of your community. We invite you to start your journey of using artificial intelligence and machine learning to support space exploration!" – Phillipa Soo


You can also learn how to create a character like Fei Fei, and solve complex problems like her too, with a drawing tutorial from director Glen Keane.

Additional programs to inspire and engage learners

Imagine Cup 2020: Student developers making a difference through coding, collaboration and competition. Over the past 19 years, more than 2 million competitors have taken part in the Imagine Cup, Microsoft's global student technology competition. This season of Imagine Cup is a global virtual experience with four new categories: Earth, Education, Health and Lifestyle. By leveraging Microsoft tools, resources and learning materials, students can bring their bold ideas to life. Prizes include mentorship from Microsoft experts, cash, the chance to showcase their work on a global stage, and a mentoring session with Microsoft CEO Satya Nadella.

New beginner learning on Microsoft Learn and Learn TV: We continue to expand our paths for beginning learners that teach coding, explore new frameworks and libraries, and experiment with emerging technologies. Here's a compilation of our more recent additions:

Wherever you are on the skilling journey, we have something for you! Please join us in helping today's learners build the job skills of tomorrow (and have some fun doing it).

Tags: education, Global skills initiative, STEM


Soleadify secures seed funding for database that uses machine learning to track 40M businesses – TechCrunch

Usually, databases about companies have to be painstakingly updated by humans. Soleadify is a startup that uses machine learning to create profiles for businesses in any industry. The first of the company's products is a business search engine that keeps over 40 million business profiles updated; it is currently used by hundreds of companies in the USA, Europe and Asia for sales and marketing activities.

It has now secured $1.5 million in seed-round funding from European venture firms GapMinder Venture Partners and DayOne Capital, as well as several prominent business angels, through Seedblink, an equity crowdfunding platform based out of Bucharest, Romania.

The company plans to use the funds to further improve their technology, build partnerships and expand their marketing capabilities.

On top of Soleadify's data, they build solutions for prospecting, market research, customer segmentation and industry monitoring.

This is done by frequently scanning billions of webpages, identifying and classifying relevant data points, and creating connections between them. The result is a database of business data that is normally only available through laborious, manual research.
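As a toy illustration of that pipeline (scan pages, pull out typed data points, merge them into per-business profiles keyed by domain), consider the following sketch. A real system would use ML classifiers rather than regexes, and the example site and contact details are invented.

```python
import re

# Two fake "crawled pages" from the same invented business website.
pages = {
    "acme-plumbing.com/contact": "Call us: +1-555-0134 or email info@acme-plumbing.com",
    "acme-plumbing.com/about": "Acme Plumbing, serving Denver since 1999.",
}

profiles = {}
for url, text in pages.items():
    domain = url.split("/")[0]
    # Connect data points from different pages to one business profile.
    profile = profiles.setdefault(domain, {"emails": set(), "phones": set()})
    profile["emails"].update(re.findall(r"[\w.+-]+@[\w-]+\.[\w.]+", text))
    profile["phones"].update(re.findall(r"\+\d[\d-]{7,}", text))

print(profiles["acme-plumbing.com"]["emails"])  # {'info@acme-plumbing.com'}
```

Keeping 40 million such profiles fresh then amounts to re-crawling and re-merging on a schedule, which is the part the machine learning automates away from human researchers.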


NXP Announces Expansion of its Scalable Machine Learning Portfolio and Capabilities – GlobeNewswire


NXP expands scalable machine learning capabilities

EINDHOVEN, The Netherlands, Oct. 19, 2020 (GLOBE NEWSWIRE) -- NXP Semiconductors N.V. (NASDAQ: NXPI) today announced that it is enhancing its machine learning development environment and product portfolio. Through an investment, NXP has established an exclusive, strategic partnership with Canada-based Au-Zone Technologies to expand NXP's eIQ Machine Learning (ML) software development environment with easy-to-use ML tools and expand its offering of silicon-optimized inference engines for Edge ML.

Additionally, NXP announced that it has been working with Arm as the lead technology partner in evolving the Arm Ethos-U microNPU (Neural Processing Unit) architecture to support applications processors. NXP will integrate the Ethos-U65 microNPU into its next generation of i.MX applications processors to deliver energy-efficient, cost-effective ML solutions for the fast-growing Industrial and IoT Edge.

"NXP's scalable applications processors deliver an efficient product platform and a broad ecosystem for our customers to quickly deliver innovative systems," said Ron Martino, Senior Vice President and General Manager of Edge Processing at NXP Semiconductors. "Through these partnerships with both Arm and Au-Zone, in addition to technology developments within NXP, our goal is to continuously increase the efficiency of our processors while simultaneously increasing our customers' productivity and reducing their time to market. NXP's vision is to help our customers achieve lower cost of ownership, maintain high levels of security with critical data, and stay safe with enhanced forms of human-machine interaction."

Enabling Machine Learning for All

Au-Zone's DeepView ML Tool Suite will augment eIQ with an intuitive, graphical user interface (GUI) and workflow, enabling developers of all experience levels to import datasets and models, rapidly train, and deploy NN models and ML workloads across the NXP Edge processing portfolio. To meet the demanding requirements of today's industrial and IoT applications, NXP's eIQ-DeepView ML Tool Suite will provide developers with advanced features to prune, quantize, validate, and deploy public or proprietary NN models on NXP devices. Its on-target, graph-level profiling capability will provide developers with unique, run-time insights to optimize NN model architectures, system parameters, and run-time performance. By adding Au-Zone's DeepView run-time inference engine to complement open-source inference technologies in NXP eIQ, users will be able to quickly deploy and evaluate ML workloads and performance across NXP devices with minimal effort. A key feature of this run-time inference engine is that it optimizes system memory usage and data movement uniquely for each SoC architecture.
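To make the pruning and quantization steps concrete, here is a minimal numpy sketch of the generic techniques such a tool suite automates: magnitude pruning and affine int8 quantization of one weight tensor. It illustrates the math only; it is not Au-Zone's or NXP's implementation, and the tensor here is random.

```python
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(scale=0.1, size=(4, 4)).astype(np.float32)

# Magnitude pruning: zero out the half of the weights with smallest |w|.
threshold = np.median(np.abs(w))
w_pruned = np.where(np.abs(w) >= threshold, w, 0.0).astype(np.float32)

# Affine int8 quantization: map [w_min, w_max] onto [-128, 127]
# with a scale and a zero point.
w_min, w_max = float(w_pruned.min()), float(w_pruned.max())
scale = (w_max - w_min) / 255.0
zero_point = -128 - int(np.round(w_min / scale))
w_int8 = np.clip(np.round(w_pruned / scale) + zero_point, -128, 127).astype(np.int8)

# Dequantize to check the reconstruction error, which is bounded by ~scale.
w_restored = (w_int8.astype(np.float32) - zero_point) * scale
print(float(np.max(np.abs(w_restored - w_pruned))) <= scale)  # True
```

The payoff on an embedded target is that the pruned, int8 tensor is a quarter the size of the float32 original and can feed integer MAC hardware like the Ethos-U directly.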

"Au-Zone is incredibly excited to announce this investment and strategic partnership with NXP, especially with its exciting roadmap for additional ML accelerated devices," said Brad Scott, CEO of Au-Zone. "We created DeepView to provide developers with intuitive tools and inferencing technology, so this partnership represents a great union of world-class silicon, run-time inference engine technology, and a development environment that will further accelerate the deployment of embedded ML features. This partnership builds on a decade of engineering collaboration with NXP and will serve as a catalyst to deliver more advanced Machine Learning technologies and turnkey solutions as OEMs continue to transition inferencing to the Edge."

Expanding Machine Learning Acceleration

To accelerate machine learning in a wider range of Edge applications, NXP will expand its popular i.MX applications processors for the Industrial and IoT Edge with the integration of the Arm Ethos-U65 microNPU, complementing the previously announced i.MX 8M Plus applications processor with integrated NPU. The NXP and Arm technology partnership focused on defining the system-level aspects of this microNPU, which supports up to 1 TOPS (512 parallel multiply-accumulate operations at 1 GHz). The Ethos-U65 maintains the MCU-class power efficiency of the Ethos-U55 while extending its applicability to higher-performance Cortex-A-based systems-on-chip (SoCs). The Ethos-U65 microNPU works in concert with the Cortex-M core already present in NXP's i.MX families of heterogeneous SoCs, resulting in improved efficiency.
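The headline figure is easy to sanity-check: counting each multiply-accumulate as two operations (one multiply, one add, the usual convention for TOPS figures), 512 parallel MACs at 1 GHz works out to just over 1 TOPS.

```python
# Back-of-the-envelope check of the "up to 1 TOPS" claim.
macs_per_cycle = 512
ops_per_mac = 2      # one multiply + one accumulate
clock_hz = 1e9       # 1 GHz
tops = macs_per_cycle * ops_per_mac * clock_hz / 1e12
print(tops)  # 1.024
```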

"There has been a surge of AI and ML across industrial and IoT applications driving demand for more on-device ML capabilities," said Dennis Laudick, Vice President of Marketing, Machine Learning Group, at Arm. "The Ethos-U65 will power a new wave of edge AI, providing NXP customers with secure, reliable, and smart on-device intelligence."

Availability

The Arm Ethos-U65 will be available in future NXP i.MX applications processors. The eIQ-DeepView ML Tool Suite and DeepView run-time inference engine, integrated into eIQ, will be available in Q1 2021. The end-to-end software enablement, from training, validating and deploying existing or new neural network models for the i.MX 8M Plus and other NXP SoCs, as well as future devices integrating the Ethos-U55 and U65, will be accessible through NXP's eIQ Machine Learning software development environment. To learn more, read our blog and register for the joint NXP and Arm webinar on November 10.

About NXP Semiconductors

NXP Semiconductors N.V. enables secure connections for a smarter world, advancing solutions that make lives easier, better, and safer. As the world leader in secure connectivity solutions for embedded applications, NXP is driving innovation in the automotive, industrial & IoT, mobile, and communication infrastructure markets. Built on more than 60 years of combined experience and expertise, the company has approximately 29,000 employees in more than 30 countries and posted revenue of $8.88 billion in 2019. Find out more at http://www.nxp.com.

NXP, the NXP logo and EdgeVerse are trademarks of NXP B.V. All other product or service names are the property of their respective owners. Amazon Web Services and all related logos and motion marks are trademarks of Amazon.com, Inc. or its affiliates. The Bluetooth word mark and logos are registered trademarks owned by Bluetooth SIG, Inc. and any use of such marks by NXP Semiconductors is under license. All rights reserved. © 2020 NXP B.V.

For more information, please contact:

NXP-IoT, NXP-Smart Home, NXP-Corp

A photo accompanying this announcement is available at https://www.globenewswire.com/NewsRoom/AttachmentNg/ea5038f6-c957-4d81-866f-e613cbe439f6


Teaming Up with Arm, NXP Ups Its Place in the Machine Learning Industry – News – All About Circuits

One of the most popular topics in the technology industry, even for electrical engineers, is machine learning. The newest company to make headlines in the field is NXP Semiconductors with two big announcements today.

Looking to further establish its place in the machine learning industry, NXP has made two strategic partnerships, one with Arm and one with Canada-based Au-Zone. All About Circuits sat down with executives at NXP to understand what the news really means.

On the hardware side of things, NXP announced today that it has been collaborating with Arm as the lead technology partner on the new Arm Ethos-U65 microNPU (neural processing unit). This technology partnership allows NXP to integrate the Ethos-U65 microNPU into its next generation of i.MX applications processors with the hopes of delivering energy-efficient, cost-effective ML solutions.

NXP is particularly excited about this partnership because the new microNPU maintains the MCU-class power efficiency of the Ethos-U55 but can be used in systems with higher-performance Cortex-A-based SoCs.

Some standout features of the Ethos-U65 include model compression, on-the-fly weight decompression, and optimization strategies for DRAM and SRAM.

What's particularly unique about this design is that the NPU works alongside a Cortex-M based processor. In our interview, Ben Eckermann, Senior Principal Engineer and Systems Architect at NXP Semiconductors, explained why this feature is advantageous.

Eckermann explains, "What's key here is that, similar to the U-55, [the Ethos-U65] doesn't attempt to do everything as one standalone black box. It relies on the Cortex-M processor sitting beside it."

He continues, "The Cortex-M processor is able to handle any network operators that either occur so infrequently that there's no point in dedicating hardware resources in the U-65 to them, or that just don't provide enough bang for your buck, where some things can be done efficiently on the CPU, like the very last layers of an NN."

On the software side of things, NXP today announced that it has established an exclusive partnership with Au-Zone to expand NXP's eIQ machine learning (ML) software development environment.

What NXP was really after was Au-Zone's DeepView ML Tool Suite, which is said to augment eIQ with an intuitive, graphical user interface (GUI) and workflow. The hope is that this added functionality will make the development, training, and deployment of NN models and ML workloads straightforward for designers of all experience levels.

The tool includes features to prune, quantize, validate, and deploy public or proprietary NN models on NXP devices.

Together, Au-Zone and NXP look to optimize NNs for NXP-based SoCs, providing developers with run-time insights on NN model architectures, system parameters, and run-time performance.

A key feature of this run-time inference engine is that it optimizes the system memory usage and data movement uniquely for each SoC architecture.

Gowri Chindalore, head of NXP's business and technology strategy for edge processing, claims that this feature offers customers a "double optimization," optimizing the neural network itself and then further optimizing it for the specific hardware.

With the introduction of the Arm Ethos-U65 microNPU, NXP will be able to provide new functionality and energy savings in future lines of i.MX applications processors. This may make way for more powerful, lower-energy designs for IoT and other edge applications.

Introducing Au-Zone's DeepView Tool Suite will also benefit design engineers, because the training, optimization, and deployment of NNs will not only be made simpler but will also be optimized for the specific hardware they run on.

This too should only benefit future developments in IoT and edge applications on NXP-based SoCs.


New tool can diagnose stroke with a smartphone – Times of India

A new tool created by researchers could diagnose a stroke based on abnormalities in a patient's speech ability and facial muscular movements, accurately and within minutes, from an interaction with a smartphone. According to a study, researchers have developed a machine learning model to aid in, and potentially speed up, the diagnostic process by physicians in a clinical setting.

"Currently, physicians have to use their past training and experience to determine at what stage a patient should be sent for a CT scan," said study author James Wang from Penn State University in the US. "We are trying to simulate or emulate this process by using our machine learning approach," Wang added.

The team's novel approach assessed the presence of stroke among actual emergency room patients with suspicion of stroke, using computational facial motion analysis and natural language processing to identify abnormalities in a patient's face or voice, such as a drooping cheek or slurred speech.

To train the computer model, the researchers built a dataset from more than 80 patients experiencing stroke symptoms at Houston Methodist Hospital in Texas.
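The paper's actual models are not reproduced here, but the basic decision structure it describes, fusing a facial-motion score with a speech-abnormality score and referring high-scoring patients for a CT scan, can be sketched as follows. The equal weights and the 0.6 threshold are invented for the sketch, not taken from the study.

```python
def stroke_screen(face_asymmetry, speech_abnormality, threshold=0.6):
    """Combine normalized [0, 1] scores from the face and speech analyses.

    face_asymmetry: output of a (hypothetical) facial-motion model,
    speech_abnormality: output of a (hypothetical) NLP/speech model.
    """
    combined = 0.5 * face_asymmetry + 0.5 * speech_abnormality
    label = "refer for CT scan" if combined >= threshold else "low suspicion"
    return label, combined

# Drooping cheek + slurred speech vs. an unremarkable interaction.
print(stroke_screen(0.9, 0.7)[0])  # refer for CT scan
print(stroke_screen(0.2, 0.1)[0])  # low suspicion
```

In the study's framing, the upstream models would be trained on the ~80-patient dataset; the fusion step shown here is only the final, simplest piece.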


Commentary: Can AI and machine learning improve the economy? – FreightWaves

The views expressed here are solely those of the author and do not necessarily represent the views of FreightWaves or its affiliates.

In this installment of the AI in Supply Chain series (#AIinSupplyChain), I tried to discern the outlines of an answer to the question posed in the headline above by reading three academic papers. This article distills what I consider the most important takeaways from the papers.

Although the context of the investigations that resulted in these papers looks at the economy as a whole, there are implications that are applicable at the level of an individual firm. So, if you are responsible for innovation, corporate development and strategy at your company, it's probably worth your time to read each of them and then interpret the findings for your own firm.

In the first paper, Erik Brynjolfsson, Daniel Rock and Chad Syverson explore the paradox that while systems using artificial intelligence are advancing rapidly, measured economywide productivity growth has declined.

Recent optimism about AI and machine learning is driven by recent and dramatic improvements in machine perception and cognition. These skills are essential to the ways in which people get work done. So this has fueled hopes that machines will rapidly approach and possibly surpass people in their ability to do many different tasks that today are the preserve of humans.

However, productivity statistics do not yet reflect growth that is driven by the advances in AI and machine learning. If anything, the authors cite statistics to suggest that labor productivity growth fell in advanced economies starting in the mid-2000s and has not recovered to its previous levels.

Therein lies the paradox: AI and machine learning boosters predict it will transform entire swathes of the economy, yet the economic data do not point to such a transformation taking place. What gives?

The authors offer four possible explanations.

First, it is possible that the optimism about AI and machine learning technologies is misplaced. Perhaps they will be useful in certain narrow sectors of the economy, but ultimately their economywide impact will be modest and insignificant.

Second, it is possible that the impact of AI and machine learning technologies is not being measured accurately. Here it is pessimism about the significance of these technologies that prevents society from accurately measuring their contribution to economic productivity.

Third, perhaps these new technologies are producing positive returns to the economy, but these benefits are being captured by a very small number of firms, and as such the rewards are enjoyed by only a minuscule fraction of the population.

Fourth, the benefits of AI and machine learning will not be reflected in the wider economy until investments have been made to build up complementary technologies, processes, infrastructure, human capital and other types of assets that make it possible for society to realize and measure the transformative benefits of AI and machine learning.

The authors argue that AI, machine learning and their complementary new technologies embody the characteristics of general purpose technologies (GPTs). A GPT has three primary features: It is pervasive or can become pervasive; it can be improved upon as time elapses; and it leads directly to complementary innovations.

Electricity. The internal combustion engine. Computers. The authors cite these as examples of GPTs with which readers are familiar.

Crucially, the authors state that a GPT can at one moment both be present and yet not affect current productivity growth "if there is a need to build a sufficiently large stock of the new capital, or if complementary types of capital, both tangible and intangible, need to be identified, produced, and put in place to fully harness the GPT's productivity benefits."

It takes a long time for economic production at the macro- or micro-scale to be reorganized to accommodate and harness a new GPT. The authors point out that computers took 25 years before they became ubiquitous enough to have an impact on productivity. It took 30 years for electricity to become widespread. As the authors state, "the changes required to harness a new GPT take substantial time and resources, contributing to organizational inertia. Firms are complex systems that require an extensive web of complementary assets to allow the GPT to fully transform the system. Firms that are attempting transformation often must reevaluate and reconfigure not only their internal processes but often their supply and distribution chains as well."

The authors end the article by stating: "Realizing the benefits of AI is far from automatic. It will require effort and entrepreneurship to develop the needed complements, and adaptability at the individual, organizational, and societal levels to undertake the associated restructuring. Theory predicts that the winners will be those with the lowest adjustment costs and that put as many of the right complements in place as possible. This is partly a matter of good fortune, but with the right roadmap, it is also something for which they, and all of us, can prepare."

In the second paper, Brynjolfsson, Xiang Hui and Meng Liu explore the effect that the introduction of eBay Machine Translation (eMT) had on eBay's international trade. The authors describe eMT as an in-house machine learning system that statistically learns how to translate among different languages. They also state: "As a platform, eBay mediated more than 14 billion dollars of global trade among more than 200 countries in 2014." Basically, eBay represents a good approximation of a complex economy within which to examine the economywide benefits of this type of machine translation.

The authors state: "We show that a moderate quality upgrade increases exports on eBay by 17.5%. The increase in exports is larger for differentiated products, cheaper products, and listings with more words in their title. Machine translation also causes a greater increase in exports to less experienced buyers. These heterogeneous treatment effects are consistent with a reduction in translation-related search costs, which comes from two sources: (1) an increased matching relevance due to improved accuracy of the search query translation and (2) better translation quality of the listing title in buyers' language."

They report an accompanying 13.1% increase in revenue, even though they only observed a 7% increase in the human acceptance rate.

They also state: "To put our result in context, Hui (2018) has estimated that a removal of export administrative and logistic costs increased export revenue on eBay by 12.3% in 2013, which is similar to the effect of eMT. Additionally, Lendle et al. (2016) have estimated that a 10% reduction in distance would increase trade revenue by 3.51% on eBay. This means that the introduction of eMT is equivalent of [sic] the export increase from reducing distances between countries by 37.3%. These comparisons suggest that the trade-hindering effect of language barriers is of first-order importance. Machine translation has made the world significantly smaller and more connected."

In this paper, Brynjolfsson, Rock and Syverson develop a model that shows how general purpose technologies (GPTs) like AI enable and require significant complementary investments, including co-invention of new processes, products, business models and human capital. These complementary investments are often intangible and poorly measured in the national accounts, even when they create valuable assets for the firm. The model shows how this leads to an underestimation of productivity growth in the early years of a new GPT and, later, when the benefits of the intangible investments are harvested, to an overestimation of productivity growth. It generates a Productivity J-Curve that can explain the productivity slowdowns that often accompany the advent of GPTs, as well as the later rise in productivity.

The authors find that, first, "As firms adopt a new GPT, total factor productivity growth will initially be underestimated because capital and labor are used to accumulate unmeasured intangible capital stocks." Second, "Later, measured productivity growth overestimates true productivity growth because the capital service flows from those hidden intangible stocks generates measurable output." Finally, "The error in measured total factor productivity growth therefore follows a J-curve shape, initially dipping while the investment rate in unmeasured capital is larger than the investment rate in other types of capital, then rising as growing intangible stocks begin to contribute to measured production."
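The mechanism they describe can be illustrated with a toy simulation (this is a sketch of the intuition, not the authors' formal model; the investment and return rates below are arbitrary): while a firm diverts resources into unmeasured intangible capital, measured output understates true output, and once the intangible stock starts throwing off measurable service flows, measured output overstates it.

```python
# Toy illustration of the Productivity J-Curve measurement error.
# Intangible investment consumes resources without showing up as output;
# the accumulated intangible stock later yields measurable service flows.
def measured_tfp_error(periods=20, invest_rate=0.1, return_rate=0.3):
    """Gap between measured and true output in each period."""
    intangible_stock = 0.0
    errors = []
    for t in range(periods):
        true_output = 1.0                            # normalize true production
        investment = invest_rate if t < 10 else 0.0  # build intangibles early on
        # Measured output misses the intangible investment being produced,
        # but does capture the service flow from accumulated intangibles.
        measured = true_output - investment + return_rate * intangible_stock
        errors.append(measured - true_output)
        intangible_stock += investment
    return errors

errs = measured_tfp_error()
# errs dips below zero early (underestimation), then rises above zero
# (overestimation) -- the J-curve shape.
```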

This helps explain why, when a new technology such as AI and machine learning, or blockchain and distributed ledger technology, is introduced into an area like the supply chain, it generates furious debate about whether it creates any value for incumbent suppliers or customers.

If we consider the reported time it took before other GPTs like electricity and computers began to contribute measurably to firm-level and economy-wide productivity, it is perhaps too early to write off blockchains and other distributed ledger technologies, or AI and machine learning, and their applications in sectors of the economy not usually associated with the internet and other digital technologies.

Give it some time. However, I think we are near the inflection point of the AI and Machine Learning Productivity J-curve. As I have worked on this #AIinSupplyChain series, I have become more convinced that the companies that are experimenting with AI and machine learning in their supply chain operations now will have the advantage over their competitors over the next decade.

I think we are a bit farther away from the inflection point of a Blockchain and Distributed Ledger Technologies Productivity J-Curve. I cannot yet make a cogent argument for why this is true, although in March 2014 I published #ChainReaction: Who Will Own The Age of Cryptocurrencies?, part of an ongoing attempt to understand when blockchains and other distributed ledger technologies might become more ubiquitous than they are now.

Examining this topic has added to my understanding of why disruption happens. The authors of the Productivity J-Curve paper state that the more transformative the new technology, the more likely its productivity effects will initially be underestimated.

The long duration during which incumbent firms underestimate the productivity effects of a relatively new GPT is what contributes to the phenomenon studied by Rebecca Henderson and Kim Clark in Architectural Innovation: The Reconfiguration of Existing Product Technologies and the Failure of Established Firms. It is also described as Supply Side Disruption by Joshua Gans in his book, The Disruption Dilemma, and summarized in his March 2016 HBR article, The Other Disruption.

If we focus on AI and machine learning specifically, in an exchange on Twitter on Sept. 27, Brynjolfsson said, "The machine translation example is in many ways the exception. More often it takes a lot of organizational reinvention and time before AI breakthroughs translate into productivity gains."

By the time entrenched and industry-leading incumbents awaken to the threats posed by newly developed GPTs, a crop of challengers who had no option but to adopt the new GPT at the outset has become powerful enough to threaten the financial stability of an industry.

One example? E-commerce and its impact on retail in general.

If you are an executive, what experiments are you performing to figure out if and how your company's supply chain operations can be made more productive by implementing technologies that have so far been underestimated by you and other incumbents in your industry?

If you are not doing anything yet, are you fulfilling your obligations to your company's shareholders, employees, customers and other stakeholders?

If you are a team working on innovations that you believe have the potential to significantly refashion global supply chains, we'd love to tell your story in FreightWaves. I am easy to reach on LinkedIn and Twitter. Alternatively, you can reach out to any member of the editorial team at FreightWaves at media@freightwaves.com.

Dig deeper into the #AIinSupplyChain Series with FreightWaves.

Commentary: Optimal Dynamics the decision layer of logistics?

Commentary: Combine optimization, machine learning and simulation to move freight

Commentary: SmartHop brings AI to owner-operators and brokers

Commentary: Optimizing a truck fleet using artificial intelligence

Commentary: FleetOps tries to solve data fragmentation issues in trucking

Commentary: Bulgaria's Transmetrics uses augmented intelligence to help customers

Commentary: Applying AI to decision-making in shipping and commodities markets

Commentary: The enabling technologies for the factories of the future

Commentary: The enabling technologies for the networks of the future

Commentary: Understanding the data issues that slow adoption of industrial AI

Commentary: How AI and machine learning improve supply chain visibility, shipping insurance

Commentary: How AI, machine learning are streamlining workflows in freight forwarding, customs brokerage

Author's disclosure: I am not an investor in any early-stage startups mentioned in this article, either personally or through REFASHIOND Ventures. I have no other financial relationship with any entities mentioned in this article.

See the original post:
Commentary: Can AI and machine learning improve the economy? - FreightWaves

The secrets of small data: How machine learning finally reached the enterprise – VentureBeat

Over the past decade, big data has become Silicon Valley's biggest buzzword. When they're trained on mind-numbingly large data sets, machine learning (ML) models can develop a deep understanding of a given domain, leading to breakthroughs for top tech companies. Google, for instance, fine-tunes its ranking algorithms by tracking and analyzing more than one trillion search queries each year. It turns out that the Solomonic power to answer all questions from all comers can be brute-forced with sufficient data.

But there's a catch: Most companies are limited to small data; in many cases, they possess only a few dozen examples of the processes they want to automate using ML. If you're trying to build a robust ML system for enterprise customers, you have to develop new techniques to overcome that dearth of data.

Two techniques in particular, transfer learning and collective learning, have proven critical in transforming small data into big data, allowing average-sized companies to benefit from ML use cases that were once reserved for Big Tech. And because just 15% of companies have deployed AI or ML already, there is a massive opportunity for these techniques to transform the business world.

Above: Using the data from just one company, even modern machine learning models are only about 30% accurate. But thanks to collective learning and transfer learning, Moveworks can determine the intent of employees' IT support requests with over 90% precision.

Image Credit: Moveworks

Of course, data isn't the only prerequisite for a world-class machine learning model; there's also the small matter of building that model in the first place. Given the short supply of machine learning engineers, hiring a team of experts to architect an ML system from scratch is simply not an option for most organizations. This disparity helps explain why a well-resourced tech company like Google benefits disproportionately from ML.

But over the past several years, a number of open source ML models, including the famous BERT model for understanding language, which Google released in 2018, have started to change the game. The complexity of creating a model the caliber of BERT, whose aptly named "large" version has about 340 million parameters, means that few organizations can even consider quarterbacking such an initiative. However, because it's open source, companies can now tweak that publicly available playbook to tackle their specific use cases.

To understand what these use cases might look like, consider a company like Medallia, a Moveworks customer. On its own, Medallia doesn't possess enough data to build and train an effective ML system for an internal use case, like IT support. Yet its small data does contain a treasure trove of insights waiting for ML to unlock them. And by leveraging new techniques to glean these insights, Medallia has become more efficient, from recognizing which internal workflows need attention to understanding the company-specific language its employees use when asking for tech support.

So here's the trillion-dollar question: How do you take an open source ML model designed to solve a particular problem and apply it to a disparate problem in the enterprise? The answer starts with transfer learning, which, unsurprisingly, entails transferring knowledge gained from one domain to a different domain that has less data.

For example, by taking an open source ML model like BERT, designed to understand generic language, and refining it at the margins, it is now possible for ML to understand the unique language employees use to describe IT issues. And language is just the beginning, since we've only begun to realize the enormous potential of small data.

Above: Transfer learning leverages knowledge from a related domain, typically one with a greater supply of training data, to augment the small data of a given ML use case.

Image Credit: Moveworks
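To make the idea concrete, here is a minimal transfer-learning sketch on synthetic data (this is not BERT or Moveworks' actual pipeline; the tasks and network below are invented for illustration). A small network is pretrained on a large "generic" task, then its hidden layer is reused, frozen, as a feature extractor for a tiny related task:

```python
# Minimal transfer-learning sketch: pretrain on a large generic task, then
# reuse the frozen hidden layer as a feature extractor for a tiny dataset.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Large generic task: plenty of labeled examples.
X_big = rng.normal(size=(2000, 20))
y_big = (X_big[:, :5].sum(axis=1) > 0).astype(int)
base = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300,
                     random_state=0).fit(X_big, y_big)

def extract_features(X):
    """Frozen hidden-layer activations (ReLU) of the pretrained network."""
    return np.maximum(0, X @ base.coefs_[0] + base.intercepts_[0])

# Tiny target task: only 40 labeled examples, but related structure.
X_small = rng.normal(size=(40, 20))
y_small = (X_small[:, :5].sum(axis=1) > 0.5).astype(int)

# Only the small classification "head" is trained on the target data.
head = LogisticRegression().fit(extract_features(X_small), y_small)
preds = head.predict(extract_features(X_small))
```

The design point mirrors the article's argument: the expensive part (the pretrained representation) is built once on abundant data, and the target task trains only a lightweight model on top of it.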

More generally, this practice of feeding an ML model a very small and very specific selection of training data is called few-shot learning, a term that's quickly become one of the new big buzzwords in the ML community. Some of the most powerful ML models ever created, such as the landmark GPT-3 model with its 175 billion parameters (orders of magnitude more than BERT), have demonstrated an unprecedented knack for learning novel tasks with just a handful of examples as training.

Taking essentially the entire internet as its tangential domain, GPT-3 quickly becomes proficient at these novel tasks by building on a powerful foundation of knowledge, in the same way Albert Einstein wouldn't need much practice to become a master at checkers. And although GPT-3 is not open source, applying similar few-shot learning techniques will enable new ML use cases in the enterprise, ones for which training data is almost nonexistent.

With transfer learning and few-shot learning on top of powerful open source models, ordinary businesses can finally buy tickets to the arena of machine learning. But while training ML with transfer learning takes several orders of magnitude less data, achieving robust performance requires going a step further.

That step is collective learning, which comes into play when many individual companies want to automate the same use case. Whereas each company is limited to small data, third-party AI solutions can use collective learning to consolidate those small data sets, creating a large enough corpus for sophisticated ML. In the case of language understanding, this means abstracting sentences that are specific to one company to uncover underlying structures:

Above: Collective learning involves abstracting data, in this case sentences, with ML to uncover universal patterns and structures.

Image Credit: Moveworks
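The abstraction step described above can be sketched in a few lines (the company names and app terms below are hypothetical, and real systems would use ML rather than hand-written rules):

```python
# Sketch of the "collective learning" abstraction idea: company-specific
# terms are replaced with placeholders so that sentences from different
# customers collapse into shared patterns a single model can learn from.
import re

COMPANY_TERMS = {  # hypothetical per-company vocabularies
    "acme": ["AcmeVPN", "AcmeMail"],
    "globex": ["GlobexVPN", "GlobexChat"],
}

def abstract_sentence(sentence, company):
    """Replace a company's app names with a generic <APP> placeholder."""
    for term in COMPANY_TERMS[company]:
        sentence = re.sub(re.escape(term), "<APP>", sentence)
    return sentence

pooled = [
    abstract_sentence("I cannot log in to AcmeVPN", "acme"),
    abstract_sentence("I cannot log in to GlobexVPN", "globex"),
]
# Both sentences now share the pattern "I cannot log in to <APP>",
# so the two companies' small data sets contribute to one shared corpus.
```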

The combination of transfer learning and collective learning, among other techniques, is quickly redrawing the limits of enterprise ML. For example, pooling together multiple customers' data can significantly improve the accuracy of models designed to understand the way their employees communicate. Well beyond understanding language, of course, we're witnessing the emergence of a new kind of workplace, one powered by machine learning on small data.

Read more:
The secrets of small data: How machine learning finally reached the enterprise - VentureBeat

Why organisations are poised to embrace machine learning – IT Brief Australia

Article by Snowflake senior sales engineer Rishu Saxena.

Once a technical novelty seen only in software development labs or enormous organisations, machine learning (ML) is poised to become an important tool for large numbers of Australian and New Zealand businesses.

Lured by promises of improved productivity and faster workflows, companies are investing in the technology in rising numbers. According to research firm Fortune Business Insights, the ML market will be worth US$117.19 billion by 2027.

Historically, ML was perceived to be an expensive undertaking that required massive upfront investment in people, as well as both storage and compute systems. However, many of the roadblocks that had been hindering adoption have now been removed.

One such roadblock was not having the right mindset or strategy when undertaking ML-related projects. Unlike more traditional software development, ML requires a flexible and open-ended approach. Sometimes it won't be possible to assess the result accurately at the outset, and expectations could well change during deployment and preliminary use.

A second roadblock was the lack of ML automation tools available on the market. Thanks to large investments and hard work by computer scientists, the latest generation of auto ML tools are feature-rich, intuitive and affordable.

Those wanting to put them to work no longer have to undertake extensive data science training or have a software development background. Dubbed citizen data scientists, these people can readily experiment with the tools and put their ideas into action.

The way data is stored and accessed by ML tools has also changed. Advances in areas such as cloud-based data warehouses and data lakes mean an organisation can now have all its data in a single location. As a result, ML tools can scan vast amounts of data relatively easily, potentially leading to insights that previously would have gone unnoticed.

The lowering of storage costs has further assisted this trend. Where an organisation may once have opted to delete data or archive it to tape, that data can now remain in a production environment, keeping it accessible to the ML tools.

For those organisations looking to embrace ML and experience the business benefits it can deliver, there are a series of steps that should be followed:

When starting with ML, don't try to run before you walk. Begin with small, stand-alone projects that give citizen data scientists a chance to become familiar with the machine learning process, the tools, how they operate, and what can be achieved. Once this has been bedded down, it's then easier to gradually increase the size and scope of activities.

To start your ML journey, lean on the vast number of auto ML tools available on the market instead of open source, notebook-based IDEs that require high levels of skill and familiarity with ML.

There is an increasing number of ML tools on the market, so take time to evaluate the options and select those best suited to your business goals. This will also give citizen data scientists the required experience before any in-house development is undertaken.

ML is not something that has to be the exclusive domain of the IT department. Encourage the growth of a pool of citizen data scientists within the organisation who can undertake projects and share their growing knowledge.

To enable ML tools to do as much as possible, centralise the storage of all data in your organisation. One option is to make use of a cloud-based data platform that can be readily scaled as data volumes increase.

Once projects have been underway for some time, closely monitor the results being achieved. This will help to guide further investments and shape the types of projects that will be completed in the future.

Once knowledge and experience levels within the organisation have increased, consider tackling more complex projects. These will have the potential to add further value to the organisation and ensure that stored data is generating maximum value.

The potential for ML to support organisations, help them achieve fresh insights, and streamline their operations is vast. By starting small and growing over time, it's possible to keep costs under control while achieving benefits in a relatively short space of time.

Read more:
Why organisations are poised to embrace machine learning - IT Brief Australia