SiFive and CEVA Partner to Bring Machine Learning Processors to Mainstream Markets – PRNewswire

SAN MATEO and MOUNTAIN VIEW, Calif., Jan. 7, 2020 /PRNewswire/ -- SiFive, Inc., the leading provider of commercial RISC-V processor IP and silicon solutions, and CEVA, Inc. (NASDAQ: CEVA), the leading licensor of wireless connectivity and smart sensing technologies, today announced a new partnership to enable the design and creation of ultra-low-power domain-specific Edge AI processors for a range of high-volume end markets. The partnership, part of SiFive's DesignShare program, is centered around RISC-V CPUs and CEVA's DSP cores, AI processors and software, which will be designed into SoCs targeting an array of end markets where on-device neural network inferencing supporting imaging, computer vision, speech recognition and sensor fusion applications is required. Initial end markets include smart home, automotive, robotics, security and surveillance, augmented reality, industrial and IoT.

Machine Learning Processing at the Edge

Domain-specific SoCs that can handle machine learning processing on-device are set to become mainstream, as the processing workloads of devices increasingly include a mix of traditional software and efficient deep neural networks to maximize performance and battery life and to add new intelligent features. Cloud-based AI inference is not suitable for many of these devices due to security, privacy and latency concerns. SiFive and CEVA are directly addressing these challenges through the development of a range of domain-specific, scalable edge AI processor designs with the optimal balance of processing, power efficiency and cost.

The Edge AI SoCs are supported by CEVA's award-winning CDNN Deep Neural Network machine learning software compiler that creates fully-optimized runtime software for the CEVA-XM vision processors, CEVA-BX audio DSPs and NeuPro AI processors. Targeted for mass-market embedded devices, CDNN incorporates a broad range of network optimizations, advanced quantization algorithms, data flow management and fully-optimized compute CNN and RNN libraries into a holistic solution that enables cloud-trained AI models to be deployed on edge devices for inference processing. CEVA will also supply a full development platform for partners and developers based on the CEVA-XM and NeuPro architectures to enable the development of deep learning applications using the CDNN, targeting any advanced network, as well as DSP tools and libraries for audio and voice pre- and post-processing workloads.
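CDNN itself is proprietary, but the class of step it performs, taking a cloud-trained model and producing an optimized, quantized artifact for edge inference, can be illustrated generically. The sketch below uses TensorFlow Lite's post-training quantization purely as a stand-in (the model, filename, and toolchain are our assumptions, not CEVA's API):

```python
import tensorflow as tf

# A placeholder cloud-trained network; weights=None avoids any download.
model = tf.keras.applications.MobileNet(weights=None)

# Convert and quantize for edge deployment, the same class of optimization
# a neural network compiler such as CDNN performs for its target DSPs.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # enable quantization
tflite_model = converter.convert()

with open("model_quantized.tflite", "wb") as f:
    f.write(tflite_model)  # compact artifact for on-device inference
```

The point of such a step, in CDNN as in this generic stand-in, is that the heavy training happens in the cloud while the deployed artifact is small and efficient enough for inference on an embedded device.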

SiFive DesignShare Program

The SiFive DesignShare IP program offers a streamlined process for companies seeking to partner with leading vendors to provide pre-integrated premium silicon IP for bringing new SoCs to market. As part of SiFive's business model of licensing IP only when it is ready for mass production, the flexibility and choice of the DesignShare IP program reduce the complexities of contract negotiation and licensing agreements, enabling faster time to market through simpler prototyping, no legal red tape, and no upfront payment.

"CEVA's partnership with SiFive enables the creation of Edge AI SoCs that can be quickly and expertly tailored to the workloads, while also retaining the flexibility to support new innovations in machine learning," said Issachar Ohana, Executive Vice President, Worldwide Sales at CEVA. "Our market leading DSPs and AI processors, coupled with the CDNN machine learning software compiler, allow these AI SoCs to simplify the deployment of cloud-trained AI models in intelligent devices and provides a compelling offering for anyone looking to leverage the power of AI at the edge."

"Enabling future-proof, technology-leading processor designs is a key step in SiFive's mission to unlock technology roadmaps," said Dr. Naveed Sherwani, president and CEO, SiFive. "The rapid evolution of AI models combined with the requirements for low power, low latency, and high-performance demand a flexible and scalable approach to IP and SoC design that our joint CEVA / SiFive portfolio is superbly positioned to provide. The result is shorter time-to-market, while lowering the entry barriers for device manufacturers to create powerful, differentiated products."

Availability

SiFive's DesignShare program, including CEVA-BX Audio DSPs, CEVA-XM Vision DSPs and NeuPro AI processors, is available now. Visit http://www.sifive.com/designshare for more information.

About SiFive

SiFive is on a mission to free semiconductor roadmaps and declare silicon independence from the constraints of legacy ISAs and fragmented solutions. As the leading provider of market-ready processor core IP and silicon solutions based on the free and open RISC-V instruction set architecture, SiFive helps SoC designers reduce time-to-market and realize cost savings with customized, open-architecture processor cores, and democratizes access to optimized silicon by enabling system designers in all markets to build customized RISC-V based semiconductors. Founded by the inventors of RISC-V, SiFive has 16 design centers worldwide and has backing from Sutter Hill Ventures, Qualcomm Ventures, Spark Capital, Osage University Partners, Chengwei, Huami, SK Hynix, Intel Capital, and Western Digital. For more information, please visit http://www.sifive.com.

Stay current with the latest SiFive updates via LinkedIn, Twitter, Facebook, and YouTube.

About CEVA, Inc.

CEVA is the leading licensor of wireless connectivity and smart sensing technologies. We offer Digital Signal Processors, AI processors, wireless platforms and complementary software for sensor fusion, image enhancement, computer vision, voice input and artificial intelligence, all of which are key enabling technologies for a smarter, connected world. We partner with semiconductor companies and OEMs worldwide to create power-efficient, intelligent and connected devices for a range of end markets, including mobile, consumer, automotive, robotics, industrial and IoT. Our ultra-low-power IPs include comprehensive DSP-based platforms for 5G baseband processing in mobile and infrastructure, advanced imaging and computer vision for any camera-enabled device, and audio/voice/speech and ultra-low-power always-on/sensing applications for multiple IoT markets. For sensor fusion, our Hillcrest Labs sensor processing technologies provide a broad range of sensor fusion software and IMU solutions for AR/VR, robotics, remote controls, and IoT. For artificial intelligence, we offer a family of AI processors capable of handling the complete gamut of neural network workloads, on-device. For wireless IoT, we offer the industry's most widely adopted IPs for Bluetooth (low energy and dual mode), Wi-Fi 4/5/6 (802.11n/ac/ax) and NB-IoT. Visit us at http://www.ceva-dsp.com and follow us on Twitter, YouTube, Facebook, LinkedIn and Instagram.

Logo: https://mma.prnewswire.com/media/74483/ceva__inc__logo.jpg

SOURCE CEVA, Inc.

http://www.ceva-dsp.com


Can We Do Deep Learning Without Multiplications? – Analytics India Magazine

A neural network is built around simple linear equations like Y = WX + B, which contain something called weights, W. These weights get multiplied with the input X and thus play a crucial role in how the model predicts.

Most of the computations in deep neural networks are multiplications between float-valued weights and float-valued activations during the forward inference.

The prediction scores can even go downhill if a wrong weight gets updated, and as the network gets deeper, i.e., as more layers and columns of connected nodes are added, the error gets magnified and the results miss the target.

To make models lighter while keeping their efficiency intact, many solutions have been developed, and one such solution is neural compression.

When we say neural compression, what we actually mean is a combination of several compression techniques.

However, while these methods make models faster to train and run, they do not eliminate the underlying multiplication operations.

Convolutions are the gold standard of machine vision models, the default operation for extracting features from visual data, and there has hardly been any attempt to replace convolution with another, more efficient similarity measure. One candidate for such a measure is one that involves only additions.

Instead of developing software and hardware solutions to cater for faster multiplications between layers, can we train models without multiplication?

To answer this question, researchers from Huawei labs and Peking University in collaboration with the University of Sydney have come up with AdderNet or adder networks that trade massive multiplications in deep neural networks, especially convolutional neural networks (CNNs), for much cheaper additions to reduce computation costs.

The notion here is that adding two numbers is easy compared to multiplying two numbers.

A norm, in the context of linear algebra, is a measure of the length of a vector.

For a vector, say X = [3, 4], the L1 norm is calculated as the sum of the absolute values of its entries:

||X||_1 = |3| + |4| = 7
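As a minimal sanity check in code (plain NumPy; nothing here is specific to the paper, and the variable names are ours):

```python
import numpy as np

X = np.array([3, 4])

l1 = np.abs(X).sum()           # L1 norm: |3| + |4| = 7, additions only
l2 = np.sqrt((X ** 2).sum())   # L2 (Euclidean) norm: 5.0, needs multiplications

print(l1, l2)  # 7 5.0
```

The contrast is the whole point: the L1 norm can be evaluated with absolute values and additions alone, while the familiar L2 norm requires squaring, i.e., multiplication.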

The underlying working of AdderNets, according to Hanting Chen et al., is given as follows:

Input: An initialised adder network N with its training set X and the corresponding labels Y, along with the global learning rate and the hyper-parameter.

Output: A well-trained adder network N with almost no multiplications.
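Conceptually, the forward pass of an adder filter replaces convolution's multiply-accumulate with a negated L1 distance between the filter and each input patch. Below is a naive single-channel NumPy sketch of that idea (unit stride, no padding; the function names and shapes are our illustration, not the authors' optimized implementation):

```python
import numpy as np

def conv2d_response(X, F):
    """Standard cross-correlation: multiply filter and patch, then accumulate."""
    d = F.shape[0]
    H, W = X.shape
    Y = np.zeros((H - d + 1, W - d + 1))
    for m in range(Y.shape[0]):
        for n in range(Y.shape[1]):
            patch = X[m:m + d, n:n + d]
            Y[m, n] = (patch * F).sum()          # multiplications + additions
    return Y

def adder_response(X, F):
    """AdderNet-style response: negated L1 distance between filter and patch."""
    d = F.shape[0]
    H, W = X.shape
    Y = np.zeros((H - d + 1, W - d + 1))
    for m in range(Y.shape[0]):
        for n in range(Y.shape[1]):
            patch = X[m:m + d, n:n + d]
            Y[m, n] = -np.abs(patch - F).sum()   # subtractions + additions only
    return Y

X = np.random.randn(8, 8)   # toy input feature map
F = np.random.randn(3, 3)   # toy filter
print(conv2d_response(X, F).shape, adder_response(X, F).shape)  # (6, 6) (6, 6)
```

Both functions produce a response map of the same shape; the adder version simply measures similarity with additions, subtractions and absolute values instead of products.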

To validate the effectiveness of AdderNets, the following setup is used:

Benchmark datasets: MNIST, CIFAR and ImageNet.

Hardware: NVIDIA Tesla V100 GPU

Framework: PyTorch.

The results from the MNIST experiment show that the convolutional neural network achieves a 99.4% accuracy with 435K multiplications and 435K additions. By replacing the multiplications in convolution with additions, the proposed AdderNet achieves a 99.4% accuracy, which is the same as that of CNNs, with 870K additions and almost no multiplication.

The biggest difference between CNNs and AdderNets is that the convolutional neural network calculates the cross-correlation between filters and inputs. If filters and inputs are approximately normalised, the convolution operation then becomes equivalent to cosine distance between two vectors.

AdderNets, on the other hand, utilise the L1-norm to distinguish different classes. As a result, while features of CNNs in different classes are divided by their angles, features of AdderNets tend to be clustered towards different class centres.

However, AdderNets still have a long way to go.

For example, let's say X is the input feature, F is the filter and Y is the output. The difference between CNNs and AdderNets can be seen in how their output variances are approximated:
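Assuming a d x d filter with c_in input channels and independent, zero-mean Gaussian weights and activations (our reconstruction of the paper's approximation; the symbols d and c_in are ours), the two variances work out to roughly:

```latex
\mathrm{Var}[Y_{\mathrm{CNN}}] = d^{2} c_{\mathrm{in}} \, \mathrm{Var}[X] \, \mathrm{Var}[F],
\qquad
\mathrm{Var}[Y_{\mathrm{AdderNet}}] \approx \left(1 - \tfrac{2}{\pi}\right) d^{2} c_{\mathrm{in}} \left(\mathrm{Var}[X] + \mathrm{Var}[F]\right)
```

The CNN variance is multiplicative in Var[F], while the AdderNet variance is additive in it, which is exactly why a tiny Var[F] shrinks the former but not the latter.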

Usually, Var[F], the variance of the filter, is a small value (~0.003). So multiplying by Var[F], in the case of CNNs, results in a smaller output variance, which in turn leads to a smooth flow of information through the network.

Whereas, due to the additions in AdderNets, the output variance is larger, which means the gradient w.r.t. X is smaller, and this slows down the updating of the network.

AdderNets were proposed to make machine learning lightweight, and yet here we are, already trading away training speed. To avoid the effects of this large variance, the authors recommend the use of an adaptive learning rate for different layers in AdderNet, as sketched below.
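Schematically, writing γ for the global learning rate and η for the hyper-parameter listed in the algorithm's inputs above (our notation), each layer's update is scaled by a local rate that is inversely proportional to the magnitude of that layer's gradient, so layers whose gradients are damped by the variance effect still receive usefully sized updates:

```latex
\Delta F_{l} = \gamma \times \alpha_{l} \times \Delta L(F_{l}),
\qquad
\alpha_{l} = \frac{\eta \sqrt{k}}{\lVert \Delta L(F_{l}) \rVert_{2}}
```

Here F_l is the filter of layer l, ΔL(F_l) its gradient, and k the number of elements in F_l.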

Machine learning is computationally intensive, and there is always a tradeoff between accuracy and inference time (speed).

The high power consumption of high-end GPU cards has hindered state-of-the-art machine learning models from being deployed on smartphones and wearables.

Though companies like Apple, with their A13 Bionic chips, are revolutionising deep learning for mobiles, an effective investigation of techniques that have been overlooked is still required. Something as daring as imagining convolutions without multiplications can result in models like AdderNets.



Machine learning is innately conservative and wants you to either act like everyone else, or never change – Boing Boing

Next month, I'm giving a keynote talk at The Future of the Future: The Ethics and Implications of AI, an event at UC Irvine that features Bruce Sterling, Rose Eveleth, David Kaye, and many others!

Preparatory to that event, I wrote an op-ed for the LA Review of Books on AI and its intrinsic conservatism, building on Molly Sauter's excellent 2017 piece for Real Life.

Sauter's insight in that essay: machine learning is fundamentally conservative, and it hates change. If you start a text message to your partner with "Hey darling," the next time you start typing a message to them, "Hey" will beget an autosuggestion of "darling" as the next word, even if this time you are announcing a break-up. If you type a word or phrase you've never typed before, autosuggest will prompt you with the statistically most common next phrase from all users (I made a small internet storm in July 2018 when I documented autocomplete's suggestion in my message to the family babysitter, which paired "Can you sit" with "on my face and").

This conservativeness permeates every system of algorithmic inference: search for a refrigerator or a pair of shoes and they will follow you around the web as machine learning systems re-target you while you move from place to place, even after you've bought the fridge or the shoes. Spend some time researching white nationalism or flat earth conspiracies and all your YouTube recommendations will try to reinforce your interest. Follow a person on Twitter and you will be inundated with similar people to follow. Machine learning can produce very good accounts of correlation (this person has that person's address in their address-book and most of the time that means these people are friends) but not causation (which is why Facebook constantly suggests that survivors of stalking follow their tormentors who, naturally, have their targets' addresses in their address books).

Our Conservative AI Overlords Want Everything to Stay the Same [Cory Doctorow/LA Review of Books]

(Image: Groundhog Day/Columbia Pictures)



Pear Therapeutics Expands Pipeline with Machine Learning, Digital Therapeutic and Digital Biomarker Technologies – Business Wire

BOSTON & SAN FRANCISCO--(BUSINESS WIRE)--Pear Therapeutics, Inc., the leader in Prescription Digital Therapeutics (PDTs), announced today that it has entered into agreements with multiple technology innovators, including Firsthand Technology, Inc., leading researchers from the Karolinska Institute in Sweden, Cincinnati Children's Hospital Medical Center, Winterlight Labs, Inc., and NeuroLex Laboratories, Inc. These new agreements continue to bolster Pear's PDT platform by adding to its library of digital biomarkers, machine learning algorithms, and digital therapeutics.

Pear's investment in these cutting-edge technologies further supports its strategy to create the broadest and deepest toolset for the development of PDTs that redefine the standard of care in a range of therapeutic areas. With access to these new technologies, Pear is positioned to develop PDTs in new disease areas, while leveraging machine learning to personalize and improve its existing PDTs.

"We are excited to announce these agreements, which expand the leading PDT platform," said Corey McCann, M.D., Ph.D., President and CEO of Pear. "Accessing external technologies allows us to continue to broaden the scope and efficacy of PDTs."

"The field of digital health is evolving rapidly, and PDTs are going to increasingly play a big part because they are designed to allow doctors to treat disease in combination with drug products more effectively than with drugs alone," said Alex Pentland, Ph.D., a leading expert in voice analytics and MIT professor. "For PDTs to make their mark in healthcare, they will need to continually evolve. Machine learning and voice biomarker algorithms are key to guide that evolution and personalization."

About Pear Therapeutics

Pear Therapeutics, Inc. is the leader in prescription digital therapeutics. We aim to redefine medicine by discovering, developing, and delivering clinically validated software-based therapeutics to provide better outcomes for patients, smarter engagement and tracking tools for clinicians, and cost-effective solutions for payers. Pear has a pipeline of products and product candidates across therapeutic areas, including severe psychiatric and neurological conditions. Our lead product, reSET, for the treatment of Substance Use Disorder, was the first prescription digital therapeutic to receive marketing authorization from the FDA to treat disease. Pear's second product, reSET-O, for the treatment of Opioid Use Disorder, received marketing authorization from the FDA in December 2018. For more information, visit us at http://www.peartherapeutics.com.


Cerner Expands Collaboration with Amazon Web Services as its Preferred Machine Learning Provider – Story of Future

Cerner is looking to capitalize on the latest technologies. As part of its ongoing collaboration efforts, the company has chosen to expand its collaboration with Amazon Web Services (AWS) as its preferred provider for machine learning and artificial intelligence. Cerner will continue using AWS technologies to improve the overall quality of patient care. Through this collaboration the company also expects to tackle healthcare costs and boost population health efforts.

Cerner will work to move its applications to AWS as part of the collaborative agreement, officials said. Moreover, the organization is standardizing its machine learning and AI workloads on AWS to develop new predictive technology.

One focal point of this new initiative is the Cerner Machine Learning Ecosystem, a platform designed with the help of Amazon SageMaker, Amazon Simple Storage Service, AWS Lambda, Amazon Simple Queue Service, AWS Step Functions and Amazon CloudWatch.

The companies say the platform will help healthcare data scientists build, deploy, monitor and manage machine learning models at scale, and help Cerner discover more predictive and automated diagnostic insights for earlier health interventions.

Among the first new AI initiatives AWS and Cerner will tackle are readmission prediction and clinician burnout.

At Amazon Web Services re:Invent, Cerner CEO Brent Shafer pointed to a client the company was able to serve by applying machine learning to historical data moved to the AWS cloud: Cerner built a model that helped the health system reach its lowest readmission rate in over 10 years.

What's more, the new Amazon Transcribe Medical, and AI tools like it, will likewise be refined with assistance from Cerner to reduce the documentation burden faced by clinicians every day.

"The digitization of healthcare has inadvertently caused an increase in documentation for physicians," said Shafer. "Working with AWS will enable us to capture the doctor-patient interaction and integrate it directly into the physician's electronic workflow. This new advancement will help doctors and providers spend less time filling out forms and more quality time with their patients."



NXP Debuts i.MX Applications Processor with Dedicated Neural Processing Unit for Advanced Machine Learning at the Edge – GlobeNewswire

NXP debuts new i.MX 8M Plus heterogeneous application processor with dedicated neural network accelerator at CES 2020

The range of applications made possible with the cost-effective i.MX 8M Plus spans people and object recognition for public safety, industrial machine vision, robotics, hand gesture, and emotion detection with natural language processing for seamless human-to-device interaction with ultra-fast response time and high accuracy.

NXP USA, Inc.

LAS VEGAS, Jan. 06, 2020 (GLOBE NEWSWIRE) -- (CES 2020) NXP Semiconductors N.V. (NASDAQ: NXPI) today expanded its industry-leading EdgeVerse portfolio with the i.MX 8M Plus application processor the first i.MX family to integrate a dedicated Neural Processing Unit (NPU) for advanced machine learning inference at the industrial and IoT (Internet-of-Things) edge.

The i.MX 8M Plus combines a high-performance NPU delivering 2.3 TOPS (Tera Operations Per Second) with a Quad-core Arm Cortex-A53 sub-system running at up to 2GHz, an independent real-time sub-system with an 800MHz Cortex-M7, a high-performance 800 MHz audio DSP for voice and natural language processing, dual camera Image Signal Processors (ISP), and a 3D GPU for rich graphics rendering. With the combination of high-performance Cortex-A53 cores and NPU, edge devices will be able to make intelligent decisions locally by learning and inferring inputs with little or no human intervention.

"The edge is the perfect destination to deploy machine learning applications, especially as technology advancements are enabling accurate localized decision-making," said Martyn Humphries, vice president and general manager of i.MX application processors for consumer and industrial markets at NXP. "With the i.MX 8M Plus we are enabling leading companies to transform the smart edge to an intelligent edge in the consumer and industrial IoT marketplace, and we look forward with great excitement to the innovative products they will be introducing based on this new trendsetting solution."

Driving an Intelligent Breed of Edge Devices with Immersive Multi-media

Built in advanced 14nm LPC FinFET process technology, the NXP i.MX 8M Plus can execute multiple, highly-complex neural networks simultaneously, such as multi-object identification, speech recognition of 40,000+ English words, and medical imaging. For example, the powerful NPU is capable of processing MobileNet v1, a popular image classification network, at over 500 images per second.
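As a rough plausibility check on that throughput claim (our back-of-the-envelope arithmetic, using the commonly cited figure of about 569 million multiply-accumulates per MobileNet v1 inference at 224 x 224, and counting each multiply-accumulate as two operations):

```latex
500 \ \tfrac{\text{images}}{\text{s}} \times 0.569\ \tfrac{\text{GMACs}}{\text{image}} \times 2 \ \tfrac{\text{ops}}{\text{MAC}} \approx 0.57\ \text{TOPS}
```

That is roughly a quarter of the NPU's 2.3 TOPS peak, leaving headroom for the multiple simultaneous networks the release describes.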

Developers can off-load machine learning inference functions to the NPU, allowing the high-performance Cortex-A and Cortex-M cores, DSP, and GPUs to execute other system-level or user applications tasks. The vision pipeline is anchored by dual integrated ISPs that support two high-definition cameras for real-time stereo vision or a single 12 MPixel resolution camera and includes High Dynamic Range (HDR) and fisheye lens correction. These features enable real-time image processing applications such as surveillance, smart retail applications, robot vision, and home health monitors.

To enable voice applications, the i.MX 8M Plus integrates a high-performance HiFi 4 DSP that enhances natural language processing with pre- and post-processing of voice streams. The powerful Cortex-M7 domain can be used to run real-time response systems while the applications processor domain executes complex non-real-time applications; overall system-level power consumption can be reduced by turning off the applications processor domain and keeping only the Cortex-M domain alive for wake-word detection. The i.MX 8M Plus also extends advanced multimedia and video processing with a system that can compress multiple video feeds using its H.265 or H.264 HD video encoder and decoder for cloud streaming or local storage, and delivers a rich user experience through 3D/2D graphics and Immersiv3D audio with Dolby Atmos and DTS:X.

Elevating Intelligence for Industrial IoT

The i.MX 8M Plus advances industrial productivity and automation with machines that can inspect, measure, precisely identify objects, and enable predictive maintenance by accurately detecting anomalies in machine operation. In addition, factory human-machine interfaces can be made more intuitive and secure by combining accurate face recognition with voice/command recognition and even gesture recognition. Supporting Industry 4.0 IT/OT convergence, the i.MX 8M Plus integrates Gigabit Ethernet with Time-Sensitive Networking (TSN), which, combined with Arm Cortex-M7 real-time processing, provides deterministic wired network connectivity and processing. NXP also offers a tailor-made, optimized power management IC (PMIC), the PCA9450C, to support the i.MX 8M Plus.

To meet the high quality and reliability standards required for industrial applications, the i.MX 8M Plus features Error Correction Code (ECC) for internal memories and the DDR interface. The family is expected to be qualified to meet the stringent industrial temperature range (-40°C to 105°C ambient), power-on profile (100 percent power-on), and is planned to be part of NXP's industry-best longevity commitment (15 years).

Product Availability and Demonstration

NXP is sampling the i.MX 8M Plus applications processors to customers now. The company will showcase its i.MX applications processor families at CES 2020 in its booth, CP-18, in Las Vegas between January 8-11.

For more information, please contact a local NXP sales representative.

NXP CES 2020 Press Kit: https://media.nxp.com/press-kit

About NXP Semiconductors

NXP Semiconductors N.V. enables secure connections for a smarter world, advancing solutions that make lives easier, better, and safer. As the world leader in secure connectivity solutions for embedded applications, NXP is driving innovation in the automotive, industrial & IoT, mobile, and communication infrastructure markets. Built on more than 60 years of combined experience and expertise, the company has approximately 30,000 employees in more than 30 countries and posted revenue of $9.41 billion in 2018. Find out more at http://www.nxp.com.

NXP, EdgeVerse, Immersiv3D, and the NXP logo are trademarks of NXP B.V. All other products or service names are the property of their respective owners. ARM and Cortex are trademarks or registered trademarks of ARM Ltd or its subsidiaries in the EU and/or elsewhere. All rights reserved. © 2020 NXP B.V.

For more information, please contact:

NXP-Smart City / NXP-IoT / NXP-Smart Home

A photo accompanying this announcement is available at https://www.globenewswire.com/NewsRoom/AttachmentNg/a00fdfd5-6f22-4e92-8131-80ead4b1648b


Fighting the Risks Associated with Transparency of AI Models – EnterpriseTalk

Firms need to understand how Machine Learning (ML) models function in order to trust them. Increasing model transparency will not only grow the rewards but also multiply the risks.

As firms move towards the adoption of machine learning, Artificial Intelligence (AI) is generating substantial security risks.

One of the most significant risks associated with AI remains the ML-based models operating as black boxes. The deep learning models composed of artificial neural networks have complicated the process of deriving automated inferences. These complications increase the risks associated with AI models. ML-based applications may inadvertently get influenced by biases and other adverse factors while producing automated decisions. To mitigate the risks, firms are starting to demand enhanced transparency into how ML operates, focusing on the entire workflow in which models are trained, built, and deployed.


There are many frameworks for maintaining the algorithmic transparency of AI models to ensure explainability, interpretability, and accountability. Business demands flexibility, but IT needs control, and this tension has pushed firms to rely on different frameworks to secure algorithmic transparency. All these tools and techniques assist data scientists in generating explanations of which data inputs drove different algorithmic inferences under various circumstances. Sadly, however, these frameworks can be easily hacked, reducing trust in the explanations they generate and exposing the risks they create:

Algorithmic deceptions may sneak into the public record: Dishonest parties may hack the narrative explanations generated by these algorithms to obscure or misrepresent any biases. In other words, perturbation-based approaches can be tricked into creating safe explanations for algorithmic behaviors that are definitely biased.

Technical vulnerabilities may get disclosed accidentally: Revealing information about machine learning algorithms can make them highly vulnerable to attacks. Complete transparency into how machine learning models function will expose them to attacks designed either to trick inferences from live operational data or to inject bogus data into their training workflows.

Intellectual property theft may be encouraged: Entire ML algorithms and training data sets can be stolen through their APIs and other features. Transparency regarding how ML models operate may enable the underlying models to be reconstructed with high fidelity. Similarly, transparency also makes it possible to partially or entirely reconstruct training data sets, an attack known as model inversion.


Privacy violations may run rampant: ML transparency may make it possible for unauthorized third parties to ascertain a particular individual's data record through a membership inference attack, enabling hackers to unlock considerable amounts of privacy-sensitive data.
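To make the membership inference risk concrete, here is a minimal, self-contained sketch (our illustration of the classic confidence-thresholding variant, not an attack on any specific product; all names and numbers are ours). An overfit model tends to be more confident on records it was trained on, so an attacker who can query prediction confidences can guess membership better than chance:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# A deliberately overfit target model stands in for an exposed ML service.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_in, X_out, y_in, y_out = train_test_split(X, y, test_size=0.5, random_state=0)
target = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_in, y_in)

# The attacker only sees the prediction confidences the model's API returns.
conf_members = target.predict_proba(X_in).max(axis=1)     # records in training set
conf_outsiders = target.predict_proba(X_out).max(axis=1)  # records never seen

# Threshold attack: call a record a "member" if the model is very confident.
threshold = 0.9
hit_rate = (conf_members >= threshold).mean()        # members correctly flagged
false_alarms = (conf_outsiders >= threshold).mean()  # outsiders wrongly flagged
print(f"members flagged: {hit_rate:.2f}, outsiders flagged: {false_alarms:.2f}")
```

The wider the gap between the two printed rates, the more the model's confidences leak about who was in its training data.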

To mitigate such technical risks of algorithmic transparency, enterprise data professionals need to adhere to the below strategies:

Without sacrificing ML transparency, firms need to have a clear objective of mitigating these broader business risks.

Enterprises will need to monitor these explanations continually for irregularities, to derive evidence that they or the models have been hacked. This is a critical concern, because trust in AI technology will come tumbling down if the enterprises that build and train ML models can't vouch for the transparency claimed in the models' official documentation.



How AI And Machine Learning Can Make Forecasting Intelligent – Demand Gen Report

The CRM is no longer just a data repository and a basic workflow engine that creates static reports. Thanks to AI and ML, predictive and prescriptive insights can be embedded into the CRM. This is known as the intelligent experience, and it is a natural fit for sales forecasting.

We all know data is incredibly powerful, but often its potential goes untapped in businesses. In order to utilize data in a meaningful way, a company needs to have the right skills and tools to convert data into learnings, and learnings into actions and outcomes. This is exactly what the intelligent experience does. For example, a company using a standard CRM is able to look at their pipeline and opportunity data and see forecasted sales figures. A company with intelligent forecasting sees all of that same data but goes further. Their ML will analyze their past opportunities, successes, misses, win rates and other criteria to create a recommended forecast and provide insights to help their sales team take action. Intelligent forecasting is more than making predictions on revenue or deals closed. It is transparent and explanatory, which informs workflows, helps improve sales strategies and opens the door to increasing win rates.

An overwhelming majority of data projects fail. Why? Typically, it is because companies think they need to gather perfect data or create custom models in data silos that end up having limited value to the business. The reality is, they don't need perfect data to gather meaningful insights, nor do they require a massive data pool. A business can start small and keep it simple.

Forecasting is complex, so naturally, many companies struggle to maximize its value. Often, companies will use spreadsheets and do calculations to aggregate historical direct and channel sales results with cyclical growth assumptions. Unfortunately, this is a subjective approach that typically is inaccurate, time-consuming and not actionable in real-time. The best way to develop a trusted and actionable forecast is by combining traditional approaches with AI approaches.

To get started with intelligent forecasting, an organization must define the intelligent experience it wants to create for its users and customers. Next, custom ML models and AI systems are built to generate the necessary insights. Two major approaches to gaining insights are propensity-based predictions and aggregate forecasting. Propensity-based models examine individual opportunities and score them. Aggregate forecasting looks at aggregate sales volumes across segments of the business (i.e., channel, geography, product, etc.). To maximize value, it is best to combine both approaches. Once implemented, the insights are then integrated into user workflows within the CRM and presented as recommended actions. Finally, the data models are tuned and refined. Creating an intelligent experience is a journey because as a business changes, so does its data. In order for the ML to remain successful over time, it needs to also change and adjust.
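To make that combination concrete, here is a minimal sketch (illustrative only: the column names, the models, and the 60/40 blending weight are our assumptions, not any vendor's implementation) that scores each open opportunity's win propensity, rolls expected revenue up by segment, and blends the result with an aggregate forecast:

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical closed opportunities: deal size, age in days, and outcome.
history = pd.DataFrame({
    "amount":   [50_000, 20_000, 75_000, 10_000, 40_000, 60_000],
    "age_days": [30, 120, 45, 200, 60, 15],
    "won":      [1, 0, 1, 0, 1, 1],
})

# Propensity model: probability that an opportunity closes as won.
model = LogisticRegression(max_iter=1000).fit(
    history[["amount", "age_days"]], history["won"]
)

# Open pipeline to be forecast, tagged by sales segment.
pipeline = pd.DataFrame({
    "segment":  ["EMEA", "EMEA", "AMER"],
    "amount":   [30_000, 80_000, 55_000],
    "age_days": [20, 90, 40],
})
pipeline["p_win"] = model.predict_proba(pipeline[["amount", "age_days"]])[:, 1]
pipeline["expected"] = pipeline["p_win"] * pipeline["amount"]

# Roll propensity-based expected revenue up per segment...
propensity_fc = pipeline.groupby("segment")["expected"].sum()

# ...and blend with a simple aggregate forecast (e.g., last quarter x growth).
aggregate_fc = pd.Series({"EMEA": 95_000, "AMER": 50_000})
blended = 0.6 * propensity_fc + 0.4 * aggregate_fc
print(blended)
```

The propensity side keeps the forecast explainable at the deal level, while the aggregate side anchors it to historical sales volumes, mirroring the two approaches described above.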

Companies can combine the impact of data, analytics and AI to make decisions faster, increase productivity and make customers happier. While most companies know the benefits, they think it's out of reach for them. However, the intelligent experience is more accessible than ever. Through the power of AI and ML, companies are able to inform their workflows and sales strategies, leading to more wins.

As a Co-Founder and SVP of Customer Engagement, Geoff Birnes is responsible for Atrium's customer outcomes. Birnes brings extensive experience in large-scale business transformation programs across sales, marketing, service and middle office. Prior to Atrium, Geoff led strategic accounts for Appirio-Wipro, and has spent 20 years in the consulting space, focused on CRM and business intelligence sales and delivery. Geoff attended Penn State University where he earned a B.S. in Engineering.


Stare into the mind of God with this algorithmic beetle generator – SB Nation

"God has an inordinate fondness for beetles," said, or so it is claimed, the British evolutionary biologist J. B. S. Haldane. On quantity alone, he was absolutely correct. There are about 400,000 species of beetle on the planet (a cool 395,000 more than mammals can offer), and while this sort of number-gaming is fraught with the risk of glibness, the assertion that beetles make up between a quarter and a third of extant animal species is probably not too far off.

I'm on less firm ground if I assert that articles about machine learning make up between a quarter and a third of today's internet, but sometimes that's what it feels like. Normally I'd be sorry for adding to the mass fervour for what mostly amounts to snake oil, but I think we have a special case today.

Machine learning is very bad at a lot of things, and frequently bad in surprising ways. But it's very good at some things. Playing with datasets which combine cleanliness and predictability with mind-boggling diversity is, perhaps, The Thing it is best at. Happily, the long tradition of scientific beetle-drawing has produced sheet upon sheet of beautiful, anatomically correct and aesthetically similar pictures. Piping those into a generative adversarial network gives you ... well, it gives you this.

BEHOLD! THE ALGORITHMIC BEETLE GENERATOR:

Despite this video being 100 percent shapeshifting beetles it's also, somehow, extremely relaxing. I hope you enjoy it as much as I did. If so, you can get an extra kick from the mangled semi-beetles that constituted Cunicode's first attempt at this.

PS: It's funny to think that most of these machine-generated beetles probably already exist. An inordinate fondness, indeed.


US announces AI software export restrictions – The Verge

The US will impose new restrictions on the export of certain AI programs overseas, including to rival China.

The ban, which comes into force on Monday, is the first to be applied under a 2018 law known as the Export Control Reform Act, or ECRA. This requires the government to examine how it can restrict the export of emerging technologies "essential to the national security of the United States," including AI. News of the ban was first reported by Reuters.

When ECRA was announced in 2018, some in the tech industry feared it would harm the field of artificial intelligence, which benefits greatly from the exchange of research and commercial programs across borders. Although the US is generally considered to be the world leader in AI, China is a strong second and gaining fast.

But the new export ban is extremely narrow. It applies only to software that uses neural networks (a key component in machine learning) to discover points of interest in geospatial imagery: things like houses or vehicles. The ruling, posted by the Bureau of Industry and Security, notes that the restriction only applies to software with a graphical user interface, a feature that makes programs easier for non-technical users to operate.

Reuters reports that companies will have to apply for licenses to export such software apart from when it is being sold to Canada.

The US has previously imposed other trade restrictions affecting the AI world, including a ban on American firms doing business with Chinese companies that produce software and hardware that powers AI surveillance.

Using machine learning to process geospatial imagery is an extremely common practice. Satellites that photograph the Earth from space produce huge amounts of data, which machine learning can quickly sort to flag interesting images for human overseers.

Such programs are useful to many customers. Environmentalists can use the technology to monitor the spread of wildfires, for example, while financial analysts can use it to track the movements of cargo ships out of a port, creating a proxy metric for trading volume.

But such software is of growing importance to military intelligence, too. The US, for example, is developing an AI analysis tool named Sentinel, which is supposed to highlight anomalies in satellite imagery. It might flag troop and missile movements, for example, or suggest areas that human analysts should examine in detail.

Regardless of the importance of this software, it's unlikely an export ban will have much of an effect on the development of these tools by China or other rivals. Although certain programs may be restricted, it's often the case that the underlying research is freely available online, allowing engineers to recreate any software for themselves.

Reuters notes that although the restriction will only affect US exports, American authorities could try to encourage other countries to follow suit, as they have with restrictions on Huawei's 5G technology. Future export bans could also affect more types of AI software.
