Parasoft Unleashes Artificial Intelligence and Machine Learning to Accelerate Time to Market for the Safety-Critical Industry – PRNewswire

With this release, Parasoft introduced artificial intelligence (AI) and machine learning (ML) in its reporting and analytics dashboard, extending its capabilities to learn from both historical interactions with the code base and prior static analysis findings to predict relevance and prioritize the new findings. As a result, teams can increase productivity by eliminating tedious and time-consuming tasks. Adding even more efficiency to the modern development workflow are the new Visual Studio Code extension for static analysis and the Coverage Advisor, which uses advanced static code analysis to boost unit test creation.
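
To make the idea concrete, here is a minimal sketch of how relevance prediction of this kind is commonly built; the features, labels, and model choice are illustrative assumptions, not Parasoft's implementation:

```python
# Hypothetical sketch: rank new static analysis findings by predicted relevance.
# Feature choices and the classifier are assumptions, not Parasoft's design.
from sklearn.ensemble import RandomForestClassifier

# Per-finding features: [severity, file_churn, prior_suppressions_of_rule, finding_age_days]
X_history = [
    [3, 42, 0, 10],
    [1,  2, 9, 300],
    [2, 17, 1, 5],
    [3, 55, 0, 2],
]
y_history = [1, 0, 0, 1]  # 1 = developer fixed the finding, 0 = suppressed or ignored

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_history, y_history)

# Score fresh findings and surface the most likely relevant ones first.
new_findings = [[3, 40, 0, 1], [1, 3, 8, 1]]
for features, p in zip(new_findings, model.predict_proba(new_findings)[:, 1]):
    print(features, f"predicted relevance: {p:.2f}")
```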

Parasoft Remains at the Forefront of Leading-Edge Technology With the Release of C/C++test 2020.1

The latest release introduces capabilities that improve all aspects of software quality delivery, including the following integrations:

"The growing complexity of software systems forces organizations to modernize their toolchains and workflows. They're switching to Git feature branch workflowsapplying Docker containers and CMake. We see heavy IDEs being replaced with lightweight editors like Visual Studio Code, which are a better fit for projects containing millions of lines of code. Modern workflows, however, need to support requirements traceability to facilitate risk assessment and functional safety certifications," said Miroslaw Zielinski, Product Manager for Parasoft. "Our latest release of Parasoft C/C++test with Visual Studio Code extension, Requirements View, streamlined Docker deployments and traceability enhancements fits perfectly into this trend."

Parasoft continues to provide leading support for automated enforcement of industry coding guidelines with expanded coverage for updated security standards (2019 CWE Top 25 and On the Cusp), AUTOSAR C++14, and the new MISRA C 2012 Amendment 2. Keeping pace with guideline requirements ensures that Parasoft's tools continue to meet the changing needs of the industry.

About Parasoft

Parasoft continuously delivers quality software with its market-proven, integrated suite of automated software testing tools. Parasoft supports software organizations as they develop and deploy applications for the embedded, enterprise, and IoT markets. Parasoft's technologies reduce the time, effort, and cost of delivering secure, reliable, and compliant software by integrating static and runtime analysis; unit, functional, and API testing; and service virtualization. With our developer testing tools, manager reporting/analytics, and executive dashboarding, Parasoft enables organizations to succeed in today's most strategic ecosystems and development initiatives: real-time, safety-critical, cybersecure, agile, continuous testing, and DevOps.

SOURCE Parasoft

http://www.parasoft.com

Link:
Parasoft Unleashes Artificial Intelligence and Machine Learning to Accelerate Time to Market for the Safety-Critical Industry - PRNewswire

Oracle Offers Machine Learning Workshop to Transform DBA Skills – Database Trends and Applications

AI and machine learning are turning a corner, marking this year with new and improved platforms and use cases. However, database administrators don't always have the tools and skills necessary to manage this new minefield of technology.

DBTA recently held a webinar featuring Charlie Berger, senior director of product management for machine learning, AI, and cognitive analytics at Oracle, who discussed how users can follow an attainable, logical, evolutionary path to add machine learning to their Oracle data skills.

Operational DBAs spend a lot of time on maintenance, security, and reliability, Berger said. The Oracle Autonomous Database can help. It automates all database and infrastructure management, monitoring, tuning; protects from both external attacks and malicious internal users; and protects from all downtime including planned maintenance.

The Autonomous Database removes tactical drudgery, allowing more time for strategic contribution, according to Berger.

Machine learning allows algorithms to automatically sift through large amounts of data to discover hidden patterns, surface new insights, and make predictions, he explained.

Oracle Machine Learning extends Oracle Autonomous Database and enables users to build AI applications and analytics dashboards. OML delivers powerful in-database machine learning algorithms, automated ML functionality, and integration with open source Python and R.
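
As a conceptual illustration of that workflow (this is not the OML4Py API; the table, columns, and model are hypothetical, and sqlite3 stands in for an Oracle connection), the pattern is to train and score against data that already lives in the database:

```python
# Conceptual sketch only: OML runs algorithms inside the database itself.
# This approximates the workflow from Python using a generic DB-API
# connection plus scikit-learn; all names here are invented.
import sqlite3  # stand-in for an Oracle connection (e.g., python-oracledb)
from sklearn.linear_model import LogisticRegression

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (tenure REAL, spend REAL, churned INTEGER);
    INSERT INTO customers VALUES (1.0, 20.0, 1), (5.0, 90.0, 0),
                                 (0.5, 10.0, 1), (7.0, 120.0, 0);
""")

rows = conn.execute("SELECT tenure, spend, churned FROM customers").fetchall()
X = [[tenure, spend] for tenure, spend, _ in rows]
y = [churned for _, _, churned in rows]

# Fit a simple classifier on the queried rows and score a new customer.
model = LogisticRegression().fit(X, y)
print("churn probability:", model.predict_proba([[2.0, 30.0]])[0][1])
```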

Oracle can take users from database developer to data scientist by transforming the data management platform into a combined, hybrid data management and machine learning platform.

There are six major steps to becoming a data scientist.

An archived on-demand replay of this webinar is available here.

Follow this link:
Oracle Offers Machine Learning Workshop to Transform DBA Skills - Database Trends and Applications

Canaan’s Kendryte K210 and the Future of Machine Learning – CapitalWatch

Author: CapitalWatch Staff

Canaan Inc. (Nasdaq: CAN) became publicly traded in New York in late November. It raised $90 million in its IPO, which Canaan's founder, chairman, and chief executive officer, Nangeng Zhang, modestly called "a good start." Since that time, the company has met significant milestones in its mission to disrupt the supercomputing industry.

Operating since 2013, Hangzhou-based Canaan delivers supercomputing solutions tailored to client needs. The company focuses on the research and development of artificial intelligence (AI) technology, specifically AI chips, AI algorithms, AI architectures, system on a chip (SoC) integration, and chip integration. Canaan is also known as a top manufacturer of mining hardware in China, the global leader in digital currency mining.

Since its IPO, Canaan has made strides in accomplishing new projects, despite the hard-hitting cross-industry crisis Covid-19 has caused worldwide. In a recent announcement, Canaan said it has developed a SaaS product that its partners can use to operate a cloud mining platform. Cloud mining allows users to mine digital currency without having to buy and maintain mining hardware or spend on electricity, a trend that has been gaining popularity.

A Chip of the Future

Earlier this year, Canaan participated in the 2020 International Consumer Electronics Show in Las Vegas, the world's largest tech show, which attracts innovators from across the globe. Canaan impressed, showcasing its Kendryte K210, the world's first RISC-V-based edge AI chip. The chip was released in September 2018 and has been in mass production ever since.

K210 is Canaan's first chip. The AI chip is designed to carry out machine learning. The primary functions of the K210 are machine vision and semantic processing: it includes the KPU for computing convolutional neural networks and an APU for processing microphone array inputs. The KPU is a general-purpose neural network processor with built-in convolution, batch normalization, activation, and pooling operations. The chip can detect faces and objects in real time. Despite its high computing power, the K210 consumes only 0.3W, while other typical devices consume 1W.
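
For readers who want a concrete picture of that operation sequence, here is a minimal PyTorch sketch of the convolution, batch normalization, activation, and pooling pipeline the KPU accelerates; this is illustrative only, not Canaan's SDK or toolchain:

```python
# Illustrative only: the sequence of operations the KPU is described as
# accelerating (convolution, batch norm, activation, pooling), in PyTorch.
import torch
import torch.nn as nn

kpu_style_block = nn.Sequential(
    nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, padding=1),
    nn.BatchNorm2d(16),   # normalize activations per channel
    nn.ReLU(),            # nonlinearity
    nn.MaxPool2d(kernel_size=2),  # downsample by 2x
)

frame = torch.randn(1, 3, 224, 224)  # stand-in for one RGB camera frame
features = kpu_style_block(frame)
print(features.shape)  # torch.Size([1, 16, 112, 112])
```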

More Than Just Chipping Away at Sales

As of September 30, 2019, Canaan had shipped more than 53,000 AI chips and development kits to AI product developers since the chip's release.

Currently, sales of the K210 are growing exponentially, according to CEO Zhang.

The company has moved quickly to commercialize its chips, developing modules, products, and back-end SaaS to offer customers a "full flow of AI solutions."

Based on the first generation of K210, Canaan has formed critical strategic partnerships.

For example, the company launched joint projects with a leading AI algorithm provider, a top agricultural science and technology enterprise, and a well-known global soft drink manufacturer to deliver smart solutions for various industrial markets.

The Booming Blockchain Industry

Currently, Canaan is working under the development strategy of "Blockchain + AI." The company has made several breakthroughs in the blockchain and AI industry, including algorithm development and optimization, standard unit design, low-voltage and high-efficiency operation, and high-performance system design and heat dissipation. The company has also accumulated extensive experience in ASIC chip manufacturing, laying the foundation for its future growth.

Canaan released first-generation products based on Samsung's 8nm and SMIC's 14nm technologies in Q4 last year. The former was shipped in Q1 this year, while the latter will ship in Q2. In February, the company launched the second generation of the product, which is more efficient, more cost-effective, and offers better performance.

Currently, TSMC's 5nm technology is under development. This technology will further improve the company's machines' computing power and ensure Canaan's leading position in the blockchain hardware space.

"We are the leader in the industry," says Zhang.

Canaan's Covid-19 Strategy

During the Covid-19 outbreak, Canaan improved its existing face recognition access control system. The new software can detect and identify people wearing masks. At the same time, an intelligent attendance system has been integrated to assist human resource management.

Integrating machine learning and AI, the K210 chip has been used in Avalon mining machines, where it can identify and monitor potential network viruses through intelligent algorithms. The company will explore more such innovative integrations in the future.

Second-Generation Gem

In terms of AI, the company will launch its second-generation AI chip, the K510, this year. The design of its architecture has been "greatly" optimized, and its computing power is several times that of the K210. Later this year, Canaan will use this tech in areas including smart energy consumption, smart industrial parks, smart driving, smart retail, and smart finance.

Canaan's Cash

In terms of operating costs and R&D, the company's operating cost last year dropped 13.3% year-on-year. In 2018 and 2019, Canaan recorded R&D expenses of 189.7 million yuan and 169 million yuan, respectively; 347 million yuan was used to incentivize core R&D personnel.

In addition, the company currently has more than 500 million yuan ($70.5 million) in cash and will continue to operate under the "Blockchain + AI" strategy, with a continued focus on the commercialization of its AI technology.

A Fruitful Future

Canaan began as a manufacturer of Bitcoin mining machines, but it has become more than that. In the short term, the approaching Bitcoin halving (estimated to occur on May 11, 2020) should promote sales of the company's mining machines. In the long term, now a global leader in ASIC technology, Canaan could be in a unique position to meet supercomputing demand.

"Blockchain is a good start, but we'll go beyond that," says Zhang. "When a seed grows up to be a big tree, it will bear fruit."

So far, it has done just that. Just how high that "tree" can get remains to be seen, but one thing is certain: The Kendryte K210 chip will be the driving force fueling the company's growth.

Excerpt from:
Canaan's Kendryte K210 and the Future of Machine Learning - CapitalWatch

Comprehensive Report on Machine Learning in Education Market 2020 | Trends, Growth Demand, Opportunities & Forecast To 2026 | IBM, Microsoft,…

The Machine Learning in Education Market research study is an intelligence report compiled with meticulous effort to assemble accurate and valuable information. The data has been analyzed considering both the existing top players and the upcoming competitors. Business strategies of the key players and of industries newly entering the market are studied in detail. A well-explained SWOT analysis, revenue share, and contact information are shared in this report analysis.

The Machine Learning in Education Market is growing at a high CAGR during the forecast period 2020-2026. The increasing interest of individuals in this industry is the major reason for the expansion of this market.

Get the PDF Sample Copy of This Report:

https://www.a2zmarketresearch.com/sample?reportId=252837

Top Key Players Profiled in This Report:

IBM, Microsoft, Google, Amazon, Cognizant, Pearson, Bridge-U, DreamBox Learning, Fishtree, Jellynote, Quantum Adaptive Learning

The key questions answered in this report:

Various factors responsible for the market's growth trajectory are studied at length in the report. In addition, the report lists the restraints that pose a threat to the global Machine Learning in Education market. It also gauges the bargaining power of suppliers and buyers, the threat from new entrants and product substitutes, and the degree of competition prevailing in the market. The influence of the latest government guidelines is also analyzed in detail. Finally, the report studies the Machine Learning in Education market's trajectory across the forecast period.

If You Have Any Query, Ask Our Experts:

https://www.a2zmarketresearch.com/enquiry?reportId=252837

Reasons for buying this report:

Table of Contents:

Global Machine Learning in Education Market Research Report

Chapter 1 Machine Learning in Education Market Overview

Chapter 2 Global Economic Impact on Industry

Chapter 3 Global Market Competition by Manufacturers

Chapter 4 Global Production, Revenue (Value) by Region

Chapter 5 Global Supply (Production), Consumption, Export, Import by Regions

Chapter 6 Global Production, Revenue (Value), Price Trend by Type

Chapter 7 Global Market Analysis by Application

Chapter 8 Manufacturing Cost Analysis

Chapter 9 Industrial Chain, Sourcing Strategy and Downstream Buyers

Chapter 10 Marketing Strategy Analysis, Distributors/Traders

Chapter 11 Market Effect Factors Analysis

Chapter 12 Global Machine Learning in Education Market Forecast

Buy Exclusive Research Report @:

https://www.a2zmarketresearch.com/buy?reportId=252837

See the rest here:
Comprehensive Report on Machine Learning in Education Market 2020 | Trends, Growth Demand, Opportunities & Forecast To 2026 | IBM, Microsoft,...

Quantzig Launches New Article Series on COVID-19’s Impact – ‘Understanding Why Online Food Delivery Companies Are Betting Big on AI and Machine…

LONDON--(BUSINESS WIRE)--As a part of its new article series that analyzes COVID-19's impact across industries, Quantzig, a premier analytics services provider, today announced the completion of its recent article, "Why Online Food Delivery Companies Are Betting Big on AI and Machine Learning."

The article also offers comprehensive insights on:

Human activity has slowed down due to the pandemic, but its impact on business operations has not. We offer transformative analytics solutions that can help you explore new opportunities and ensure business stability to thrive in the post-crisis world. Request a FREE proposal to gauge COVID-19's impact on your business.

"With machine learning, you don't need to babysit your project every step of the way. Since it means giving machines the ability to learn, it lets them make predictions and also improve the algorithms on their own," says a machine learning expert at Quantzig.

After several years of being confined to technology labs and the pages of sci-fi books, artificial intelligence (AI) and big data have today become the dominant focal point for businesses across industries. Barely a day passes without new magazine and newspaper articles, blog entries, and tweets about advancements in the field of AI and machine learning. That said, it's not very surprising that AI and machine learning in the food and beverage industry have played a crucial role in the rapid developments that have taken place over the past few years.

Talk to us to learn how our advanced analytics capabilities, combined with proprietary algorithms, can support your business initiatives and help you thrive in today's competitive environment.

Benefits of AI and Machine Learning

Want comprehensive solution insights from an expert who decodes data? You're just a click away! Request a FREE demo to discover how our seasoned analytics experts can help you.

As cognitive technologies transform the way people use online services to order food, it becomes imperative for online food delivery companies to comprehend customer needs, identify the dents, and bridge gaps by offering what has been missing in the online food delivery business. The combination of big data, AI, and machine learning is driving real innovation in the food and beverage industry. Such technologies have been proven to deliver fact-based results to online food delivery companies that possess the data and the required analytics expertise.

At Quantzig, we analyze the current business scenario using real-time dashboards to help global enterprises operate more efficiently. Our ability to help performance-driven organizations realize their strategic and operational goals within a short span using data-driven insights has helped us gain a leading edge in the analytics industry. To help businesses ensure business continuity amid the crisis, we've curated a portfolio of advanced COVID-19 impact analytics solutions that not only focus on improving profitability but also help enhance stakeholder value, boost customer satisfaction, and achieve financial objectives.

Request more information to learn more about our analytics capabilities and solution offerings.

About Quantzig

Quantzig is a global analytics and advisory firm with offices in the US, UK, Canada, China, and India. For more than 15 years, we have assisted our clients across the globe with end-to-end data modeling capabilities to leverage analytics for prudent decision making. Today, our firm consists of 120+ clients, including 45 Fortune 500 companies. For more information on our engagement policies and pricing plans, visit: https://www.quantzig.com/request-for-proposal

View post:
Quantzig Launches New Article Series on COVID-19's Impact - 'Understanding Why Online Food Delivery Companies Are Betting Big on AI and Machine...

Eta Compute Partners with Edge Impulse to Accelerate the Development and Deployment of Machine Learning at the Edge – Yahoo Finance

The partnership will transform the development process from concept to production for embedded machine learning in micropower devices.

Eta Compute and Edge Impulse announce that they are partnering to accelerate the development and deployment of machine learning using Eta Compute's revolutionary ECM3532, the world's lowest-power Neural Sensor Processor, and Edge Impulse, the leading online TinyML platform. The partnership will speed the time to market for machine learning in billions of IoT consumer and industrial products where battery capacity has been a roadblock.

"Collaborating with Edge Impulse ensures our growing ECM3532 developer community is fully equipped to bring innovative designs in digital health, smart city, consumer, and industrial applications to market quickly and efficiently," said Ted Tewksbury, CEO of Eta Compute. "We believe that our partnership will help companies debut their ground-breaking solutions later in 2020."

Eta Compute's ECM3532, an ultra-low-power Neural Sensor Processor SoC that enables machine learning at the extreme edge, and its ECM3532 EVB evaluation board are now supported by Edge Impulse's end-to-end ML development and MLOps platform. Developers can register for free to gain access to advanced Eta Compute machine learning algorithms and development workflows through the Edge Impulse portal.
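
As a back-of-envelope illustration of why micropower targets like this demand tiny models (the layer sizes below are invented, not the ECM3532's actual budget), quantizing weights from 32-bit floats to 8-bit integers cuts model memory roughly fourfold:

```python
# Rough TinyML sizing arithmetic; layer shapes are illustrative assumptions.
layers = [(32, 64), (64, 64), (64, 10)]  # (inputs, outputs) per dense layer

params = sum(i * o + o for i, o in layers)           # weights + biases
print(f"parameters:   {params}")
print(f"float32 size: {params * 4 / 1024:.1f} KiB")  # 4 bytes per weight
print(f"int8 size:    {params * 1 / 1024:.1f} KiB")  # 1 byte after quantization
```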

"Machine learning at the very edge has the potential to enable the use of the 99% of sensor data that is lost today because of cost, bandwidth, or power constraints," said Zach Shelby, CEO and Co-founder of Edge Impulse. "Our online SaaS platform and Eta Computes innovative processor are the ideal combination for development teams seeking to accurately collect data, create meaningful data sets, spin models, and generate efficient ML at a rapidly accelerated pace."

"Trillions of devices are expected to come online by 2035 and many will require some level of machine learning at the edge," said Dennis Laudick, vice president of marketing, Machine Learning Group, Arm. "The combination of Eta Computes TinyML hardware based on Arm Cortex and CMSIS-NN technology, and the SaaS TinyML solutions from Edge Impulse provides developers a complete solution for bringing power efficient, edge, or endpoint ML products to market at the fast pace required for this next era of compute."

For more information or to begin developing, visit EtaCompute.com or EdgeImpulse.com.

About Eta Compute

Eta Compute was founded in 2015 with the vision that the proliferation of intelligent devices at the network edge will make daily life safer, healthier, more comfortable, and more convenient without sacrificing privacy and security. The company delivers the world's lowest-power embedded platform, using patented Continuous Voltage Frequency Scaling to deliver unparalleled machine intelligence to energy-constrained products and remove battery capacity as a barrier in consumer and industrial applications. In 2018, the company received the Design Innovation of the Year and Best Use of Advanced Technologies awards at Arm TechCon. For more information visit EtaCompute.com or contact the company via email at info@etacompute.com.

About Edge Impulse

Edge Impulse is on a mission to enable developers to create the next generation of intelligent devices using embedded machine learning in industrial, enterprise, and human-centric applications. Machine learning at the very edge will enable valuable use of the 99% of sensor data that is discarded today due to cost, bandwidth, or power constraints. The founders believe that machine learning can enable positive change in society and are dedicated to supporting applications for good. Sign up for free at edgeimpulse.com.

View source version on businesswire.com: https://www.businesswire.com/news/home/20200512005318/en/

Contacts

Media Contacts: Eta Compute: Phyllis Grabot, 805.341.7269 / phyllis@corridorcomms.com; Bonnie Quintanilla, 818.681.5777 / bonnie@corridorcomms.com

Edge Impulse: Zach Shelby, 408.203.9434 / hello@edgeimpulse.com

View original post here:
Eta Compute Partners with Edge Impulse to Accelerate the Development and Deployment of Machine Learning at the Edge - Yahoo Finance

Another deep learning processor appears in the ring: Grayskull from Tenstorrent – Electronics Weekly

Tenstorrent describes the technology behind the processor as "the first conditional execution architecture for artificial intelligence facilitating scalable deep learning." The company has taken an approach that dynamically eliminates unnecessary computation, thus breaking the direct link between model size growth and compute/memory bandwidth requirements.

Conditional computation?

The company describes conditional computation as enabling both inference and training of a model to adapt to the exact input presented, such as adjusting NLP model computations to the exact length of the text, and dynamically pruning portions of the model based on input characteristics.
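
A minimal sketch of the idea, with invented thresholds and a toy model (Tenstorrent's actual mechanism lives in hardware): do only the work the input requires, skipping padding and low-contribution layers.

```python
# Illustrative conditional computation: compute scales with the input,
# not the padded model size. All structure and thresholds are assumptions.
import numpy as np

def attention_like_layer(tokens, weights):
    return np.tanh(tokens @ weights)

def conditional_forward(tokens, layers, skip_threshold=0.05):
    """Process only real tokens (no padding) and skip layers whose
    contribution (mean absolute change to the state) is negligible."""
    x = tokens[np.any(tokens != 0.0, axis=1)]  # drop padding rows entirely
    for w in layers:
        out = attention_like_layer(x, w)
        if np.mean(np.abs(out - x)) < skip_threshold:
            continue  # dynamically prune a layer that barely changes the state
        x = out
    return x

rng = np.random.default_rng(0)
padded = np.vstack([rng.normal(size=(12, 64)), np.zeros((116, 64))])  # 12 real tokens
layers = [rng.normal(scale=0.02, size=(64, 64)) for _ in range(6)]
print(conditional_forward(padded, layers).shape)  # (12, 64): work tracks real input
```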

Grayskull integrates 120 Tensix proprietary cores with 120Mbyte of local SRAM. It has eight channels of LPDDR4 for supporting up to 16Gbyte of external DRAM and 16 lanes of PCI-E Gen 4.

The Tensix cores each have a packet processor, a programmable SIMD and maths computation block, five single-issue RISC cores, and 1Mbyte of RAM.

Associated software model

The array of Tensix cores is stitched together with a double 2D torus network-on-chip, which facilitates multi-cast flexibility along with minimal software burden for scheduling coarse-grain data transfers, according to the company. At the chip thermal design power required for a 75W bus-powered PCIe card, Grayskull achieves 368 TOPS and up to 23,345 sentences/second using BERT-Base on the SQuAD 1.1 data set.

According to Tenstorrent:

For artificial intelligence to reach the next level, machines need to go beyond pattern recognition and into cause-and-effect learning. Such machine learning models require computing infrastructure that allows them to continue growing by orders of magnitude for years to come. Machine learning computers can achieve this goal in two ways: by weakening the dependence between model size and raw compute power, through features like conditional execution and dynamic sparsity handling, and by facilitating compute scalability at hitherto unrivalled levels. Rapid changes in machine learning models further require flexibility and programmability.

Claimed Grayskull benchmarks

Grayskull is aimed at inferencing in data centres, public cloud servers, private cloud servers, on-premises servers, edge servers and automotive.

Samples are said to be with partners, with the processor ready for production this autumn.

The Tenstorrent website is here

Read the rest here:
Another deep learning processor appears in the ring: Grayskull from Tenstorrent - Electronics Weekly

Twitter adds former Google VP and A.I. guru Fei-Fei Li to board as it seeks to play catch up with Google and Facebook – CNBC

Twitter has appointed Stanford professor and former Google vice president Fei-Fei Li to its board as an independent director.

The social media platform said that Li's expertise in artificial intelligence (AI) will bring relevant perspectives to the board. Li's appointment may also help Twitter to attract top AI talent from other companies in Silicon Valley.

Li left her role as chief scientist of AI/ML (artificial intelligence/machine learning) at Google Cloud in October 2018 after being criticized for comments she made in relation to the controversial Project Maven initiative with the Pentagon, which saw Google AI used to identify drone targets from blurry drone video footage.

When details of the project emerged, Google employees objected, saying that they didn't want their AI technology used in military drones. Some quit in protest and around 4,000 staff signed a petition that called for "a clear policy stating that neither Google nor its contractors will ever build warfare technology."

While Li wasn't directly involved in the project, a leaked email suggested she was more concerned about what the public would make of Google's involvement in the project as opposed to the ethics of the project itself.

"This is red meat to the media to find all ways to damage Google," she wrote, according to a copy of the emailobtained by the Intercept. "You probably heardElon Muskand his comment about AI causing WW3."

"I don't know what would happen if the media starts picking up a theme that Google is secretly building AI weapons or AI technologies to enable weapons for the Defense industry. Google Cloud has been building our theme on Democratizing AI in 2017, and Diane (Greene, head of Google Cloud) and I have been talking about Humanistic AI for enterprise. I'd be super careful to protect these very positive images."

Up until that point, Li was seen very much as a rising star at Google. In the one year and 10 months she was there, she oversaw basic science AI research, all of Google Cloud's AI/ML products and engineering efforts, and a new Google AI lab in China.

While at Google she maintained strong links to Stanford and in March 2019 she launched the Stanford University Human-Centered AI Institute (HAI), which aims to advance AI research, education, policy and practice to benefit humanity.

"With unparalleled expertise in engineering, computer science and AI, Fei-Fei brings relevant perspectives to the board as Twitter continues to utilize technology to improve our service and achieve our long-term objectives," said Omid Kordestani, executive chairman of Twitter.

Twitter has been relatively slow off the mark in the AI race. It acquired British start-up Magic Pony Technologies in 2016 for up to $150 million as part of an effort to beef up its AI credentials, but its AI efforts remain fairly small compared to other firms. It doesn't have the same reputation as companies like Google and Facebook when it comes to AI and machine-learning breakthroughs.

Today the company uses an AI technique called deep learning to recommend tweets to its users and it also uses AI to identify racist content and hate speech, or content from extremist groups.

Competition for AI talent is fierce in Silicon Valley and Twitter will no doubt be hoping that Li can bring in some big names in the AI world given she is one of the most respected AI leaders in the industry.

"Twitter is an incredible example of how technology can connect the world in powerful ways and I am honored to join the board at such an important time in the company's history," said Li.

"AI and machine learning can have an enormous impact on technology and the people who use it. I look forward to leveraging my experience for Twitter as it harnesses this technology to benefit everyone who uses the service."

Read the original here:
Twitter adds former Google VP and A.I. guru Fei-Fei Li to board as it seeks to play catch up with Google and Facebook - CNBC

Turns out converting files into images is a highly effective way to detect malware – PC Gamer UK

A branch of artificial intelligence called machine learning is all around us. It's employed by Facebook to help curate content (and target us with ads), Google uses it to filter millions of spam messages each day, and it's part of what enabled the OpenAI bot to beat the reigning Dota 2 champions last year in two out of three matches. There are seemingly endless uses. Adding one more to the pile, Microsoft and Intel have come up with a clever machine learning framework that is surprisingly accurate at detecting malware through a grayscale image conversion process.

Microsoft detailed the technology in a blog post (via ZDNet), which it calls static malware-as-image network analysis, or STAMINA. It consists of a three-step process. In simple terms, the machine learning project starts out by taking binary files and converting them into two-dimensional images.
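
To make that first step concrete, here is a minimal sketch of a bytes-to-grayscale conversion; the fixed 256-byte row width and the sample file name are illustrative choices, not necessarily the study's exact parameters:

```python
# Hedged sketch of step one: read a binary's bytes and reshape them
# into a 2D grayscale image ("sample.exe" is a hypothetical input).
import numpy as np
from PIL import Image

def binary_to_grayscale(path, width=256):
    data = np.frombuffer(open(path, "rb").read(), dtype=np.uint8)
    rows = len(data) // width
    assert rows > 0, "file smaller than one image row"
    pixels = data[: rows * width].reshape(rows, width).copy()  # one byte -> one pixel
    return Image.fromarray(pixels, mode="L")

img = binary_to_grayscale("sample.exe")
img = img.resize((224, 224))  # shrink to a CNN-friendly resolution
img.save("sample.png")
```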

The images are then fed into the framework. This second step is a process called transfer learning, which essentially helps the algorithm build upon its existing knowledge, while comparing images against its existing training.
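
A hedged sketch of this second step: reuse a network pretrained on natural images and retrain only a new two-class head (malware vs. benign). The backbone below, ResNet-18, is an accessible stand-in, not necessarily the network Microsoft and Intel used:

```python
# Transfer learning sketch: freeze pretrained features, train a new head.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False                     # keep pretrained features fixed
model.fc = nn.Linear(model.fc.in_features, 2)       # new trainable classifier head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

batch = torch.randn(8, 3, 224, 224)   # stand-in for converted file images
labels = torch.randint(0, 2, (8,))    # 1 = malware, 0 = benign
loss = loss_fn(model(batch), labels)
loss.backward()
optimizer.step()
```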

Finally, the results are analyzed to see how effective the process was at detecting malware samples, how many it missed, and how many it incorrectly classified as malware (known as a false positive).

As part of the study, Microsoft and Intel sampled a dataset of 2.2 million files. Out of those, 60 percent were known malware files that were used to train the algorithm, and 20 percent were used to validate it. The remaining 20 percent were used to test the actual effectiveness of the scheme.

Applying STAMINA to the files, Microsoft says the method accurately detected and classified 99.07 percent of the malware files, with a 2.58 percent false positive rate. Those are stellar results.

"The results certainly encourage the use of deep transfer learning for the purpose of malware classification. It helps accelerate training by bypassing the search for optimal hyperparameters and architecture searches, saving time and compute resources in the process," Microsoft says.

STAMINA is not without its limitations. Part of the process entails resizing images to make the number of pixels manageable for an application like this. However, for deeper analysis and bigger size applications, Microsoft says the method "becomes less effective due to limitations in converting billions of pixels into JPEG images and then resizing them."

In other words, STAMINA works great for testing files in a lab, but requires some fine-tuning before it could feasibly be employed in greater capacity. This probably means Windows Defender won't benefit from STAMINA right away, but perhaps sometime down the line it will.

View original post here:
Turns out converting files into images is a highly effective way to detect malware - PC Gamer UK

Five Strategies for Putting AI at the Center of Digital Transformation – Knowledge@Wharton

Across industries, companies are applying artificial intelligence to their businesses, with mixed results. What separates the AI projects that succeed from the ones that don't often has to do with the business strategies organizations follow when applying AI, writes Wharton professor of operations, information and decisions Kartik Hosanagar in this opinion piece. Hosanagar is faculty director of Wharton AI for Business, a new Analytics at Wharton initiative that will support students through research, curriculum, and experiential learning to investigate AI applications. He also designed and instructs Wharton Online's Artificial Intelligence for Business course.

While many people perceive artificial intelligence to be the technology of the future, AI is already here. Many companies across a range of industries have been applying AI to improve their businesses, from Spotify using machine learning for music recommendations to smart home devices like Google Home and Amazon Alexa. That said, there have also been some early failures, such as Microsoft's social-learning chatbot, Tay, which turned anti-social after interacting with hostile Twitter followers, and IBM Watson's inability to deliver results in personalized health care. What separates the AI projects that succeed from the ones that don't often has to do with the business strategies organizations follow when applying AI. The following strategies can help business leaders not only effectively apply AI in their organizations, but succeed in adapting it to innovate, compete, and excel.

1. View AI as a tool, not a goal.

One pitfall companies might encounter in the process of starting new AI initiatives is that the concentrated focus and excitement around AI might lead to AI being viewed as a goal in and of itself. But executives should be cautious about developing a strategy specifically for AI, and instead focus on the role AI can play in supporting the broader strategy of the company. A recent report from MIT Sloan Management Review and Boston Consulting Group calls this working "backward from strategy, not forward from AI."

As such, instead of exhaustively looking for all the areas AI could fit in, a better approach would be for companies to analyze existing goals and challenges with a close eye for the problems that AI is uniquely equipped to solve. For example, machine learning algorithms bring distinct strengths in terms of their predictive power given high-quality training data. Companies can start by looking for existing challenges that could benefit from these strengths, as those areas are likely to be ones where applying AI is not only possible, but could actually disproportionately benefit the business.

The application of machine learning algorithms for credit card fraud detection is one example of where AI's particular strengths make it a very valuable tool in assisting with a longstanding problem. In the past, fraudulent transactions were generally only identified after the fact. However, AI allows banks to detect and block fraud in real time. Because banks already had large volumes of data on past fraudulent transactions and their characteristics, the raw material from which to train machine learning algorithms is readily available. Moreover, predicting whether particular transactions are fraudulent and blocking them in real time is precisely the type of repetitive task that an algorithm can do at a speed and scale that humans cannot match.
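
A minimal sketch of that pattern, with invented features and an illustrative blocking threshold rather than any bank's actual model: train on labeled historical transactions, then score each new transaction as it arrives.

```python
# Hedged sketch of real-time fraud scoring; features, data, and the 0.9
# blocking threshold are illustrative assumptions.
from sklearn.ensemble import GradientBoostingClassifier

# Per-transaction features: [amount_usd, merchant_risk, minutes_since_last_txn, is_foreign]
X_train = [
    [12.50, 0.1, 300, 0],
    [980.00, 0.9, 2, 1],   # known fraud
    [45.00, 0.2, 120, 0],
    [1500.00, 0.8, 1, 1],  # known fraud
]
y_train = [0, 1, 0, 1]

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

def score_transaction(txn):
    risk = model.predict_proba([txn])[0][1]
    return ("BLOCK" if risk > 0.9 else "APPROVE"), risk

print(score_transaction([1200.00, 0.85, 3, 1]))  # scored before settlement
```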

2. Take a portfolio approach.

Over the long term, viewing AI as a tool and finding AI applications that are particularly well matched with business strategy will be most valuable. However, I wouldn't recommend that companies pool all their AI resources into a single, large, moonshot project when they are first getting started. Rather, I advocate taking a portfolio approach to AI projects that includes both quick wins and long-term projects. This approach will allow companies to gain experience with AI and build consensus internally, which can then support the success of larger, more strategic and transformative projects later down the line.

Specifically, quick wins are smaller projects that involve optimizing internal employee touch points. For example, companies might think about specific pain points that employees experience in their day-to-day work, and then brainstorm ways AI technologies could make some of these tasks faster or easier. Voice-based tools for scheduling or managing internal meetings or voice interfaces for search are some examples of applications for internal use. While these projects are unlikely to transform the business, they do serve the important purpose of exposing employees, some of whom may initially be skeptics, to the benefits of AI. These projects also provide companies with a low-risk opportunity to build skills in working with large volumes of data, which will be needed when tackling larger AI projects.

The second part of the portfolio approach, long-term projects, is what will be most impactful and where it is important to find areas that support the existing business strategy. Rather than looking for simple ways to optimize the employee experience, long-term projects should involve rethinking entire end-to-end processes and potentially even coming up with new visions for what otherwise standard customer experiences could look like. For example, a long-term project for a car insurance company could involve creating a fully automated claims process in which customers can photograph the damage to their car and use an app to settle their claims. Building systems like this that improve efficiency and create seamless new customer experiences requires technical skills and consensus on AI, which earlier quick wins will help to build.

The skills needed for embarking on AI projects are unlikely to exist in sufficient numbers in most companies, making reskilling particularly important.

3. Reskill and invest in your talent.

In addition to developing skills through quick wins, companies should take a structured approach to growing their talent base, with a focus on both reskilling internal employees in addition to hiring external experts. Focusing on growing the talent base is particularly important given that most engineers in a company would have been trained in computer science before the recent interest in machine learning. As such, the skills needed for embarking on AI projects are unlikely to exist in sufficient numbers in most companies, making reskilling particularly important.

In its early days of working with AI, Google launched an internal training program where employees were invited to spend six months working in a machine learning team with a mentor. At the end of this time, Google distributed these experts into product teams across the company in order to ensure that the entire organization could benefit from AI-related reskilling. There are many new online courses to economically reskill employees in AI.

The MIT Sloan Management Review-BCG report mentioned above also found that, in addition to developing talent in producing AI technologies, an equally important area is that of consuming AI technologies. Managers, in particular, need to have skills to consult AI tools and act on recommendations or insights from these tools. This is because AI systems are unlikely to automate entire processes from the get-go. Rather, AI is likely to be used in situations where humans remain in the loop. Managers will need basic statistical knowledge in order to understand the limitations and capabilities of modern machine learning and to decide when to lean on machine learning models.

4. Focus on the long term.

Given that AI is a new field, it is largely inevitable that companies will experience early failures. Early failures should not discourage companies from continuing to invest in AI. Rather, companies should be aware of, and resist, the tendency to retreat after an early failure.

Historically, many companies have stumbled in their early initiatives with new technologies, such as when working with the internet and with cloud and mobile computing. The companies that retreated, that stopped or scaled back their efforts after initial failures, tended to be in a worse position long term than those that persisted. I anticipate that a similar trend will occur with AI technologies. That is, many companies will fail in their early AI efforts, but AI itself is here to stay. The companies that persist and learn to use AI well will get ahead, while those that avoid AI after their early failures will end up lagging behind.

AI shouldn't be abandoned given that the alternative, human decision-makers, are biased too.

5. Address AI-specific risks and biases aggressively.

Companies should be aware of new risks that AI can pose and proactively manage these risks from the outset. Initiating AI projects without an awareness of these unique risks can lead to unintended negative impacts on society, as well as leave the organizations themselves susceptible to additional reputational, legal, and regulatory risks (as discussed in my book, A Human's Guide to Machine Intelligence: How Algorithms Are Shaping Our Lives and How We Can Stay in Control).

There have been many recent cases where AI technologies have discriminated against historically disadvantaged groups. For example, mortgage algorithms have been shown to have a racial bias, and an algorithm created by Amazon to assist with hiring was shown to have a gender bias, though this was actually caught by Amazon itself before the algorithm was used. This type of bias in algorithms is thought to occur because, like humans, algorithms are products of both nature and nurture. While nature is the logic of the algorithm itself, nurture is the data that algorithms are trained on. These datasets are usually compilations of human behaviors, oftentimes specific choices or judgments that human decision-makers have previously made on the topic in question, such as which employees to hire or which loan applications to approve. The datasets are therefore made up of biased decisions from humans themselves, which the algorithms learn from and incorporate. As such, it is important to note that algorithms are generally not creating wholly new biases, but rather learning from the historical biases of humans and exacerbating them by applying them on a much larger, and therefore even more damaging, scale.

AI shouldn't be abandoned given that the alternative, human decision-makers, are biased too. Rather, companies should be aware of the kinds of social harms that can result from AI technologies and rigorously audit their algorithms to catch biases before they negatively impact society. Proceeding with AI initiatives without an awareness of these social risks can lead to reputational, legal, and regulatory risks for firms, and most importantly can have extremely damaging impacts on society.
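
One concrete form such an audit can take, sketched here with synthetic data and hypothetical group labels: compare the model's decision rates across groups and flag large gaps for deeper review.

```python
# Minimal bias-audit sketch; records and group labels are synthetic.
from collections import defaultdict

# (group, model_decision, true_outcome): 1 = approved / would have repaid
records = [
    ("A", 1, 1), ("A", 1, 0), ("A", 0, 1), ("A", 1, 1),
    ("B", 0, 1), ("B", 0, 1), ("B", 1, 1), ("B", 0, 0),
]

stats = defaultdict(lambda: {"n": 0, "approved": 0})
for group, decision, _outcome in records:
    stats[group]["n"] += 1
    stats[group]["approved"] += decision

for group, s in sorted(stats.items()):
    print(f"group {group}: approval rate {s['approved'] / s['n']:.0%}")
# A large gap between groups flags the model for deeper review, e.g.
# comparing error rates conditioned on the recorded true outcomes.
```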

See the article here:
Five Strategies for Putting AI at the Center of Digital Transformation - Knowledge@Wharton