Machine Learning Algorithm From RaySearch Enhances Workflow at Swedish Radiation Therapy Clinic – PRNewswire

STOCKHOLM, June 29, 2020 /PRNewswire/ -- RaySearch Laboratories AB (publ) has announced that by using a machine learning algorithm in the treatment planning system RayStation*, Mälar Hospital in Eskilstuna, Sweden, has made significant time savings in dose planning for radiation therapy. The algorithm in question is a deep learning method for contouring the patients' organs. The decision to implement this advanced technology was made to save time, thereby alleviating the prevailing shortage of doctors specialized in radiation therapy at the hospital - a shortage that was also exacerbated by the COVID-19 situation.

When creating a plan for radiation treatment of cancer, it is critical to carefully define the tumor volume. In order to avoid unwanted side-effects, it is also necessary to identify different organs in the tumor's environment, so-called organs at risk. This process is called contouring and is usually performed using manual or semi-automatic tools.

The deep learning contouring feature in RayStation uses machine learning models that have been trained and evaluated on previous clinical cases to create contours of the patient's organs automatically and quickly. Healthcare staff can review and, if necessary, adjust the contours. The final result is reached much faster than with other methods.

Andreas Johansson, physicist at Region Sörmland, which runs Mälar Hospital, says: "We used deep learning to contour the first patient on May 26 and the treatment was performed on June 9. From taking 45-60 minutes per patient, the contouring now only takes 10-15 minutes, which means a huge time saving."

Johan Löf, founder and CEO, RaySearch, says: "Mälar Hospital was very quick to implement RayStation in 2015 and now it has shown again how quickly new technology can be adopted and brought into clinical use. The fact that this helps to resolve a situation where hospital resources are unusually strained is of course also very positive."

CONTACT:

For further information, please contact:
Johan Löf, Founder and CEO, RaySearch Laboratories AB (publ)
Telephone: +46-(0)-8-510-530-00
[emailprotected]

Peter Thysell, CFO, RaySearch Laboratories AB (publ)
Telephone: +46-(0)-70-661-05-59
[emailprotected]



SOURCE RaySearch Laboratories

See original here:
Machine Learning Algorithm From RaySearch Enhances Workflow at Swedish Radiation Therapy Clinic - PRNewswire

Artificial Intelligence, Machine Learning and the Future of Graphs – BBN Times


I am a skeptic of machine learning. There, I've said it. I say this not because I think machine learning is a poor technology - it's actually quite powerful for what it does - but because machine learning by itself is only half a solution.

To explain this (and the relationship that graphs have to machine learning and AI), it's worth spending a bit of time exploring what exactly machine learning does and how it works. Machine learning isn't actually one particular algorithm or piece of software, but rather the use of statistical algorithms to analyze large amounts of data and, from that, construct a model that can, at a minimum, classify the data consistently. If it's done right, the reasoning goes, it should then be possible to use that model to classify new information so that it's consistent with what's already known.

Many such systems make use of clustering algorithms - they treat data as vectors that can be described in an n-dimensional space. That is to say, there are n different facets that describe a particular thing, such as a thing's color, shape (morphology), size, texture, and so forth. Some of these attributes can be captured by a single binary (does the thing have a tail or not), but in most cases the attributes range along a spectrum, such as "does the thing have an exclusively protein-based diet (an obligate carnivore), or does its diet consist of a certain percentage of grains or other plants?". In either case, this means that it is possible to use the attribute as a means to create a number between zero and one (what mathematicians would refer to as a normal orthogonal vector).

Orthogonality is an interesting concept. In mathematics, two vectors are considered orthogonal if there exists some coordinate system in which you cannot express any information about one vector using the other. For instance, if two vectors are at right angles to one another, then there is one coordinate system where one vector aligns with the x-axis and the other with the y-axis. I cannot express any part of the length of a vector along the y axis by multiplying the length of the vector on the x-axis. In this case they are independent of one another.

This independence is important. Mathematically, there is no correlation between the two vectors - they represent different things, and changing one vector tells me nothing about any other vector. When vectors are not orthogonal, one bleeds a bit (or more than a bit) into another. When two vectors are parallel to one another, they are fully correlated - one vector can be expressed as a multiple of the other. A vector in two dimensions can always be expressed as the "sum" of two orthogonal vectors, a vector in three dimensions can always be expressed as the "sum" of three orthogonal vectors, and so forth.

If you can express a thing as a vector consisting of weighted values, this creates a space where related things will generally be near one another in an n-dimensional space. Cats, dogs, and bears are all carnivores, so in a model describing animals, they will tend to be clustered in a different group than rabbits, voles, and squirrels based upon their dietary habits. At the same time, cats, dogs and bears will each tend to cluster in different groups based upon size, as even a small adult bear will always be larger than the largest cat and almost all dogs. In a two-dimensional space, it becomes possible to carve out a region where you have large carnivores, medium-sized carnivores, small carnivores, large herbivores and so forth.

Machine learning (at its simplest) would recognize that when you have a large carnivore, given a minimal dataset, you're likely to classify it as a bear, because, based upon the two vectors size and diet, every time you are at the upper end of both values, everything you've already seen (your training set) is a bear, while no vectors outside of this range are classified that way.
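A minimal sketch of that behavior, using the scikit-learn library and invented feature values (diet on a zero-to-one scale, normalized body size), shows the idea: the nearest labeled example to a new large carnivore is the bear.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Invented, illustrative features: [diet (0 = herbivore .. 1 = obligate carnivore), normalized body size]
X = np.array([[1.0, 0.15], [0.8, 0.25], [0.7, 0.90],    # cat, dog, bear
              [0.0, 0.05], [0.0, 0.03], [0.1, 0.04]])   # rabbit, vole, squirrel
y = ["cat", "dog", "bear", "rabbit", "vole", "squirrel"]

clf = KNeighborsClassifier(n_neighbors=1).fit(X, y)
print(clf.predict([[0.75, 0.85]]))  # a large carnivore lands nearest the bear -> ['bear']
```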

A predictive model with only two independent vectors is going to be pretty useless as a classifier for more than a small set of items. A fox and a dog will be indistinguishable in this model, and for that matter, a small dog such as a Shih Tzu vs. a Maine Coon cat will confuse the heck out of such a classifier. On the flip side, the more variables you add, the harder it is to ensure orthogonality, and the more difficult it becomes to determine what exactly the determining factor(s) for classification are, which in turn increases the chances of misclassification. A panda bear is, anatomically and genetically, a bear. Yet because of a chance genetic mutation it is only able to reasonably digest bamboo, making it a herbivore.

You'd need to go to a very fine-grained classifier, one capable of identifying genomic structures, to identify a panda as a bear. The problem here is not in the mathematics but in the categorization itself. Categorizations are ultimately linguistic structures. Normalization functions are themselves arbitrary, and how you normalize will ultimately impact the kind of clustering that forms. When the number of dimensions in the model (even assuming that they are independent, which gets harder to determine with more variables) gets too large, then the size of the hulls used for clustering becomes too small, and interpreting what those hulls actually signify becomes too complex.

This is one reason that I'm always dubious when I hear about machine learning models that have thousands or even millions of dimensions. As with attempting to do linear regressions on curves, there are typically only a handful of parameters that drive most of the significant curve fitting, which is ultimately just looking for adequate clustering to identify meaningful patterns - and once these patterns are identified, they are encoded and indexed.

Facial recognition, for instance, is considered a branch of machine learning, but for the most part it works because human faces exist within a skeletal structure that limits the variations of light and dark patterns of the face. This makes it easy to identify the ratios involved between eyes, nose, and mouth, chin and cheekbones, hairlines and other clues, and from that reduce this information to a graph in which the edges reflect relative distances between those parts. This can, in turn, be hashed as a unique number, in essence encoding a face as a graph in a database. Note this pattern. Because the geometry is consistent, rotating a set of vectors to present a consistent pattern is relatively simple (especially for modern GPUs).

Facial recognition then works primarily due to the ability to hash (and consequently compare) graphs in databases. This is the same way that most biometric scans work, taking a large enough sample of datapoints from unique images to encode ratios, then using the corresponding key to retrieve previously encoded graphs. Significantly, there's usually very little actual classification going on here, save perhaps in using coarser meshes to reduce the overall dataset being queried. Indeed, the real speed ultimately is a function of indexing.
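A toy sketch of that encoding step, with hypothetical landmark coordinates, normalizes pairwise distances by the inter-eye distance and hashes the rounded ratios. Real systems use far more robust landmark detection and quantization, so treat this purely as an illustration of "face as a hashable graph".

```python
import hashlib
import numpy as np

# Hypothetical 2-D landmark positions extracted from one face image
landmarks = {"left_eye": (30, 40), "right_eye": (70, 40), "nose": (50, 60),
             "mouth": (50, 85), "chin": (50, 110)}

def face_key(points, decimals=2):
    """Turn pairwise landmark distances into scale-invariant ratios, then hash them."""
    names = sorted(points)
    coords = np.array([points[n] for n in names], dtype=float)
    dists = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    eye_dist = dists[names.index("left_eye"), names.index("right_eye")]
    ratios = dists[np.triu_indices(len(names), k=1)] / eye_dist  # edges of the face "graph"
    return hashlib.sha256(np.round(ratios, decimals).tobytes()).hexdigest()

print(face_key(landmarks)[:16])  # a short key that can index the face graph in a database
```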

This is where the world of machine learning collides with that of graphs. I'm going to make an assertion here, one that might get me into trouble with some readers. Right now there's a lot of argument about the benefits and drawbacks of property graphs vs. knowledge graphs. I contend that this argument is moot - it's a discussion about optimization strategies, and the sooner that we get past that argument, the sooner that graphs will make their way into the mainstream.

Ultimately, we need to recognize that the principal value of a graph is to index information so that it does not need to be recalculated. One way to do this is to use machine learning to classify, and semantics to bind that classification to the corresponding resource (as well as to the classifier as an additional resource). If I have a phrase that describes a drink as being nutty or fruity, then these should be identified as classifications that apply to drinks (specifically to coffees, teas or wines). If I come across flavors such as hazelnut, cashew or almond, then these should be correlated with nuttiness, and again stored in a semantic graph.

The reason for this is simple - machine learning without memory is pointless and expensive. Machine learning is fast facing a crisis in that it requires a lot of cycles to train, classify and report. Tie machine learning into a knowledge graph, and you don't have to relearn all the time, and you can also reduce the overall computational costs dramatically. Furthermore, you can make use of inferencing: rules that can make use of generalization and faceting in ways that are difficult to pull off in a relational data system. Something is bear-like if it is large, has thick fur, does not have opposable thumbs, has a muzzle, is capable of extended bipedal movement and is omnivorous.

What's more, the heuristic itself is a graph, and as such is a resource that can be referenced. This is something that most people fail to understand about both SPARQL and SHACL. They are each essentially syntactic sugar on top of graph templates. They can be analyzed, encoded and referenced. When a new resource is added into a graph, the ingestion process can and should run against such templates to see if they match, then insert or delete corresponding additional metadata as the data is folded in.
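As a rough sketch of that idea, the "bear-like" heuristic can be written as a reusable graph pattern and run against a small semantic graph at ingestion time. The example below uses the rdflib Python library, a made-up example.org vocabulary and only a subset of the properties listed above, so it is purely illustrative.

```python
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/")  # hypothetical vocabulary
g = Graph()
for name, size, fur, diet in [("yogi", "large", "thick", "omnivore"),
                              ("felix", "small", "short", "carnivore")]:
    subject = EX[name]
    g.add((subject, RDF.type, EX.Animal))
    g.add((subject, EX.size, Literal(size)))
    g.add((subject, EX.fur, Literal(fur)))
    g.add((subject, EX.diet, Literal(diet)))

# The heuristic itself is a graph pattern that can be stored, referenced and re-run when data is folded in.
BEAR_LIKE = """
SELECT ?x WHERE {
  ?x <http://example.org/size> "large" ;
     <http://example.org/fur>  "thick" ;
     <http://example.org/diet> "omnivore" .
}
"""
for row in g.query(BEAR_LIKE):
    print(row.x)  # -> http://example.org/yogi
```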

Additionally, one of those pieces of metadata may very well end up being an identifier for the heuristic itself, creating what's often termed a reverse query. Reverse queries are significant because they make it possible to determine which family of classifiers was used to make decisions about how an entity is classified, and from that ascertain the reasons why a given entity was classified a certain way in the first place.

This gets back to one of the biggest challenges seen in both AI and machine learning - understanding why a given resource was classified. When you have potentially thousands of facets that may have potentially been responsible for a given classification, the ability to see causal chains can go a long way towards making such a classification system repeatable and determining whether the reason for a given classification was legitimate or an artifact of the data collection process. This is not something that AI by itself is very good at, because it's a contextual problem. In effect, semantic graphs (and graphs in general) provide a way of making recommendations self-documenting, and hence making it easier to trust the results of AI algorithms.

One of the next major innovations that I see in graph technology is actually a mathematical change. Most graphs that exist right now can be thought of as collections of fixed vectors, entities connected by properties with fixed values. However, it is possible (especially when using property graphs) to create properties that are essentially parameterized over time (or other variables) or that may be passed as functional results from inbound edges. This is, in fact, an alternative approach to describing neural networks (both physical and artificial), and it has the effect of being able to make inferences based upon changing conditions over time.

This approach can be seen as one form of modeling everything from the likelihood of events happening given other events (Bayesian trees) to complex cost-benefit relationships. This can be facilitated even today with some work, but the real value will come with standardization, as such graphs (especially when they are closed network circuits) can in fact act as trainable neuron circuits.
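One way to picture such a graph is with edge values that are functions of time rather than constants. The tiny sketch below is only a thought experiment with made-up nodes and functions, not a description of any existing graph product.

```python
import math

# Hypothetical property graph whose edge "weights" vary with time (hours)
edges = {
    ("rain_sensor", "pump"): lambda t: 0.9 if 6 <= t % 24 <= 18 else 0.3,
    ("pump", "reservoir"):   lambda t: 0.5 + 0.4 * math.sin(2 * math.pi * t / 24),
}

def influence(path, t):
    """Multiply time-dependent edge values along a path, like a small weighted circuit."""
    value = 1.0
    for a, b in zip(path, path[1:]):
        value *= edges[(a, b)](t)
    return value

print(influence(["rain_sensor", "pump", "reservoir"], t=9))   # daytime behavior
print(influence(["rain_sensor", "pump", "reservoir"], t=23))  # nighttime behavior
```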

It is also likely that graphs will play a central role in Smart Contracts, "documents" that not only specify partners and conditions but also can update themselves transactionally, can trigger events and can spawn other contracts and actions. These do not specifically fall within the mandate of "artificial intelligence" per se, but the impact that smart contracts will have on business and society in general will be transformative at the very least.

It's unlikely that this is the last chapter on graphs, either (though it is the last in the series about the State of the Graph). Graphs, ultimately, are about connections and context. How do things relate to one another? How are they connected? What do people know, and how do they know it? They underlie contracts and news, research and entertainment, history and how the future is shaped. Graphs promise a means of generating knowledge, creating new models, and even learning. They remind us that, even as forces try to push us apart, we are all ultimately only a few hops from one another in many, many ways.

I'm working on a book called Context, hopefully out by Summer 2020. Until then, stay connected.

View original post here:
Artificial Intelligence, Machine Learning and the Future of Graphs - BBN Times

Machine Learning Chips Market Growth by Top Companies, Trends by Types and Application, Forecast to 2026 – 3rd Watch News

Los Angeles, United States: QY Research recently published a research report titled Global Machine Learning Chips Market Research Report 2020-2026. The research report attempts to give a holistic overview of the Machine Learning Chips market by keeping the information simple, relevant, accurate, and to the point. The researchers have explained each aspect of the market through meticulous research and undivided attention to every topic. They have also provided statistical data to help readers understand the whole market. The Machine Learning Chips Market report further provides historic and forecast data generated through primary and secondary research of the regions and their respective manufacturers.

Get Full PDF Sample Copy of Report: (Including Full TOC, List of Tables & Figures, Chart) https://www.qyresearch.com/sample-form/form/1839774/global-machine-learning-chips-market

The Global Machine Learning Chips Market report gives special attention to the manufacturers in different regions that are expected to show a considerable expansion in their market share. Additionally, it underlines all the current and future trends that are being adopted by these manufacturers to boost their current market shares. Understanding the various strategies being carried out by the various manufacturers will help readers make the right business decisions.

Key Players Mentioned in the Global Machine Learning Chips Market Research Report: Wave Computing, Graphcore, Google Inc, Intel Corporation, IBM Corporation, Nvidia Corporation, Qualcomm, Taiwan Semiconductor Manufacturing

Global Machine Learning Chips Market Segmentation by Product: Neuromorphic Chip, Graphics Processing Unit (GPU) Chip, Flash Based Chip, Field Programmable Gate Array (FPGA) Chip, Other Machine Learning Chips

Global Machine Learning Chips Market Segmentation by Application: Robotics Industry, Consumer Electronics, Automotive, Healthcare, Other

The Machine Learning Chips market is divided into two important segments: the product type segment and the end user segment. In the product type segment, the report lists all the products currently manufactured by the companies and their economic role in the Machine Learning Chips market. It also reports the new products that are currently being developed and their scope. Further, it presents a detailed understanding of the end users that are a governing force of the Machine Learning Chips market.

In this chapter of the Machine Learning Chips Market report, the researchers have explored the various regions that are expected to witness fruitful developments and make serious contributions to the market's burgeoning growth. Along with general statistical information, the Machine Learning Chips Market report provides data for each region with respect to its revenue, production, and the presence of major manufacturers. The major regions covered in the Machine Learning Chips Market report include North America, Europe, Central and South America, Asia Pacific, South Asia, the Middle East and Africa, GCC countries, and others.

Key questions answered in the report:

Get Complete Report in your inbox within 24 hours at USD( 4900): https://www.qyresearch.com/settlement/pre/63325f365f710a0813535ce285714216,0,1,global-machine-learning-chips-market

Table of Contents

1 Study Coverage
1.1 Machine Learning Chips Product Introduction
1.2 Key Market Segments in This Study
1.3 Key Manufacturers Covered: Ranking of Global Top Machine Learning Chips Manufacturers by Revenue in 2019
1.4 Market by Type
1.4.1 Global Machine Learning Chips Market Size Growth Rate by Type
1.4.2 Neuromorphic Chip
1.4.3 Graphics Processing Unit (GPU) Chip
1.4.4 Flash Based Chip
1.4.5 Field Programmable Gate Array (FPGA) Chip
1.4.6 Other
1.5 Market by Application
1.5.1 Global Machine Learning Chips Market Size Growth Rate by Application
1.5.2 Robotics Industry
1.5.3 Consumer Electronics
1.5.4 Automotive
1.5.5 Healthcare
1.5.6 Other
1.6 Study Objectives
1.7 Years Considered

2 Executive Summary
2.1 Global Machine Learning Chips Market Size, Estimates and Forecasts
2.1.1 Global Machine Learning Chips Revenue Estimates and Forecasts 2015-2026
2.1.2 Global Machine Learning Chips Production Capacity Estimates and Forecasts 2015-2026
2.1.3 Global Machine Learning Chips Production Estimates and Forecasts 2015-2026
2.2 Global Machine Learning Chips, Market Size by Producing Regions: 2015 VS 2020 VS 2026
2.3 Analysis of Competitive Landscape
2.3.1 Manufacturers Market Concentration Ratio (CR5 and HHI)
2.3.2 Global Machine Learning Chips Market Share by Company Type (Tier 1, Tier 2 and Tier 3)
2.3.3 Global Machine Learning Chips Manufacturers Geographical Distribution
2.4 Key Trends for Machine Learning Chips Markets & Products
2.5 Primary Interviews with Key Machine Learning Chips Players (Opinion Leaders)

3 Market Size by Manufacturers
3.1 Global Top Machine Learning Chips Manufacturers by Production Capacity
3.1.1 Global Top Machine Learning Chips Manufacturers by Production Capacity (2015-2020)
3.1.2 Global Top Machine Learning Chips Manufacturers by Production (2015-2020)
3.1.3 Global Top Machine Learning Chips Manufacturers Market Share by Production
3.2 Global Top Machine Learning Chips Manufacturers by Revenue
3.2.1 Global Top Machine Learning Chips Manufacturers by Revenue (2015-2020)
3.2.2 Global Top Machine Learning Chips Manufacturers Market Share by Revenue (2015-2020)
3.2.3 Global Top 10 and Top 5 Companies by Machine Learning Chips Revenue in 2019
3.3 Global Machine Learning Chips Price by Manufacturers
3.4 Mergers & Acquisitions, Expansion Plans

4 Machine Learning Chips Production by Regions
4.1 Global Machine Learning Chips Historic Market Facts & Figures by Regions
4.1.1 Global Top Machine Learning Chips Regions by Production (2015-2020)
4.1.2 Global Top Machine Learning Chips Regions by Revenue (2015-2020)
4.2 North America
4.2.1 North America Machine Learning Chips Production (2015-2020)
4.2.2 North America Machine Learning Chips Revenue (2015-2020)
4.2.3 Key Players in North America
4.2.4 North America Machine Learning Chips Import & Export (2015-2020)
4.3 Europe
4.3.1 Europe Machine Learning Chips Production (2015-2020)
4.3.2 Europe Machine Learning Chips Revenue (2015-2020)
4.3.3 Key Players in Europe
4.3.4 Europe Machine Learning Chips Import & Export (2015-2020)
4.4 China
4.4.1 China Machine Learning Chips Production (2015-2020)
4.4.2 China Machine Learning Chips Revenue (2015-2020)
4.4.3 Key Players in China
4.4.4 China Machine Learning Chips Import & Export (2015-2020)
4.5 Japan
4.5.1 Japan Machine Learning Chips Production (2015-2020)
4.5.2 Japan Machine Learning Chips Revenue (2015-2020)
4.5.3 Key Players in Japan
4.5.4 Japan Machine Learning Chips Import & Export (2015-2020)
4.6 South Korea
4.6.1 South Korea Machine Learning Chips Production (2015-2020)
4.6.2 South Korea Machine Learning Chips Revenue (2015-2020)
4.6.3 Key Players in South Korea
4.6.4 South Korea Machine Learning Chips Import & Export (2015-2020)

5 Machine Learning Chips Consumption by Region
5.1 Global Top Machine Learning Chips Regions by Consumption
5.1.1 Global Top Machine Learning Chips Regions by Consumption (2015-2020)
5.1.2 Global Top Machine Learning Chips Regions Market Share by Consumption (2015-2020)
5.2 North America
5.2.1 North America Machine Learning Chips Consumption by Application
5.2.2 North America Machine Learning Chips Consumption by Countries
5.2.3 U.S.
5.2.4 Canada
5.3 Europe
5.3.1 Europe Machine Learning Chips Consumption by Application
5.3.2 Europe Machine Learning Chips Consumption by Countries
5.3.3 Germany
5.3.4 France
5.3.5 U.K.
5.3.6 Italy
5.3.7 Russia
5.4 Asia Pacific
5.4.1 Asia Pacific Machine Learning Chips Consumption by Application
5.4.2 Asia Pacific Machine Learning Chips Consumption by Regions
5.4.3 China
5.4.4 Japan
5.4.5 South Korea
5.4.6 India
5.4.7 Australia
5.4.8 Taiwan
5.4.9 Indonesia
5.4.10 Thailand
5.4.11 Malaysia
5.4.12 Philippines
5.4.13 Vietnam
5.5 Central & South America
5.5.1 Central & South America Machine Learning Chips Consumption by Application
5.5.2 Central & South America Machine Learning Chips Consumption by Country
5.5.3 Mexico
5.5.3 Brazil
5.5.3 Argentina
5.6 Middle East and Africa
5.6.1 Middle East and Africa Machine Learning Chips Consumption by Application
5.6.2 Middle East and Africa Machine Learning Chips Consumption by Countries
5.6.3 Turkey
5.6.4 Saudi Arabia
5.6.5 U.A.E

6 Market Size by Type (2015-2026)
6.1 Global Machine Learning Chips Market Size by Type (2015-2020)
6.1.1 Global Machine Learning Chips Production by Type (2015-2020)
6.1.2 Global Machine Learning Chips Revenue by Type (2015-2020)
6.1.3 Machine Learning Chips Price by Type (2015-2020)
6.2 Global Machine Learning Chips Market Forecast by Type (2021-2026)
6.2.1 Global Machine Learning Chips Production Forecast by Type (2021-2026)
6.2.2 Global Machine Learning Chips Revenue Forecast by Type (2021-2026)
6.2.3 Global Machine Learning Chips Price Forecast by Type (2021-2026)
6.3 Global Machine Learning Chips Market Share by Price Tier (2015-2020): Low-End, Mid-Range and High-End

7 Market Size by Application (2015-2026)
7.2.1 Global Machine Learning Chips Consumption Historic Breakdown by Application (2015-2020)
7.2.2 Global Machine Learning Chips Consumption Forecast by Application (2021-2026)

8 Corporate Profiles
8.1 Wave Computing
8.1.1 Wave Computing Corporation Information
8.1.2 Wave Computing Overview
8.1.3 Wave Computing Production Capacity and Supply, Price, Revenue and Gross Margin (2015-2020)
8.1.4 Wave Computing Product Description
8.1.5 Wave Computing Related Developments
8.2 Graphcore
8.2.1 Graphcore Corporation Information
8.2.2 Graphcore Overview
8.2.3 Graphcore Production Capacity and Supply, Price, Revenue and Gross Margin (2015-2020)
8.2.4 Graphcore Product Description
8.2.5 Graphcore Related Developments
8.3 Google Inc
8.3.1 Google Inc Corporation Information
8.3.2 Google Inc Overview
8.3.3 Google Inc Production Capacity and Supply, Price, Revenue and Gross Margin (2015-2020)
8.3.4 Google Inc Product Description
8.3.5 Google Inc Related Developments
8.4 Intel Corporation
8.4.1 Intel Corporation Corporation Information
8.4.2 Intel Corporation Overview
8.4.3 Intel Corporation Production Capacity and Supply, Price, Revenue and Gross Margin (2015-2020)
8.4.4 Intel Corporation Product Description
8.4.5 Intel Corporation Related Developments
8.5 IBM Corporation
8.5.1 IBM Corporation Corporation Information
8.5.2 IBM Corporation Overview
8.5.3 IBM Corporation Production Capacity and Supply, Price, Revenue and Gross Margin (2015-2020)
8.5.4 IBM Corporation Product Description
8.5.5 IBM Corporation Related Developments
8.6 Nvidia Corporation
8.6.1 Nvidia Corporation Corporation Information
8.6.2 Nvidia Corporation Overview
8.6.3 Nvidia Corporation Production Capacity and Supply, Price, Revenue and Gross Margin (2015-2020)
8.6.4 Nvidia Corporation Product Description
8.6.5 Nvidia Corporation Related Developments
8.7 Qualcomm
8.7.1 Qualcomm Corporation Information
8.7.2 Qualcomm Overview
8.7.3 Qualcomm Production Capacity and Supply, Price, Revenue and Gross Margin (2015-2020)
8.7.4 Qualcomm Product Description
8.7.5 Qualcomm Related Developments
8.8 Taiwan Semiconductor Manufacturing
8.8.1 Taiwan Semiconductor Manufacturing Corporation Information
8.8.2 Taiwan Semiconductor Manufacturing Overview
8.8.3 Taiwan Semiconductor Manufacturing Production Capacity and Supply, Price, Revenue and Gross Margin (2015-2020)
8.8.4 Taiwan Semiconductor Manufacturing Product Description
8.8.5 Taiwan Semiconductor Manufacturing Related Developments

9 Machine Learning Chips Production Forecast by Regions
9.1 Global Top Machine Learning Chips Regions Forecast by Revenue (2021-2026)
9.2 Global Top Machine Learning Chips Regions Forecast by Production (2021-2026)
9.3 Key Machine Learning Chips Production Regions Forecast
9.3.1 North America
9.3.2 Europe
9.3.3 China
9.3.4 Japan
9.3.5 South Korea

10 Machine Learning Chips Consumption Forecast by Region
10.1 Global Machine Learning Chips Consumption Forecast by Region (2021-2026)
10.2 North America Machine Learning Chips Consumption Forecast by Region (2021-2026)
10.3 Europe Machine Learning Chips Consumption Forecast by Region (2021-2026)
10.4 Asia Pacific Machine Learning Chips Consumption Forecast by Region (2021-2026)
10.5 Latin America Machine Learning Chips Consumption Forecast by Region (2021-2026)
10.6 Middle East and Africa Machine Learning Chips Consumption Forecast by Region (2021-2026)

11 Value Chain and Sales Channels Analysis
11.1 Value Chain Analysis
11.2 Sales Channels Analysis
11.2.1 Machine Learning Chips Sales Channels
11.2.2 Machine Learning Chips Distributors
11.3 Machine Learning Chips Customers

12 Market Opportunities & Challenges, Risks and Influences Factors Analysis
12.1 Machine Learning Chips Industry
12.2 Market Trends
12.3 Market Opportunities and Drivers
12.4 Market Challenges
12.5 Machine Learning Chips Market Risks/Restraints
12.6 Porter's Five Forces Analysis

13 Key Finding in The Global Machine Learning Chips Study

14 Appendix
14.1 Research Methodology
14.1.1 Methodology/Research Approach
14.1.2 Data Source
14.2 Author Details
14.3 Disclaimer

About Us:

QY Research, established in 2007, focuses on custom research, management consulting, IPO consulting, industry chain research, database and seminar services. The company owns a large basic database (such as the National Bureau of Statistics database, customs import and export databases, and industry association databases), as well as expert resources (covering energy, automotive, chemical, medical, ICT, consumer goods and more).

Read this article:
Machine Learning Chips Market Growth by Top Companies, Trends by Types and Application, Forecast to 2026 - 3rd Watch News

AI threat intelligence is the future, and the future is now – TechTarget

The next progression in organizations using threat intelligence is adding AI threat intelligence capabilities, in the form of machine learning technologies, to improve attack detection. Machine learning is a form of AI that enables computers to analyze data and learn its significance. The rationale for using machine learning with threat intelligence is to enable computers to more rapidly detect attacks than humans can and stop those attacks before more damage occurs. In addition, because the volume of threat intelligence is often so large, traditional detection technologies inevitably generate too many false positives. Machine learning can analyze the threat intelligence and condense it into a smaller set of things to look for, thereby reducing the number of false positives.

This sounds fantastic, but there's a catch -- actually, a few catches. Expecting AI to magically improve security is unrealistic, and deploying machine learning without preparation and ongoing support may make things worse.

Here are three steps enterprises should take to use AI threat intelligence tools with machine learning capabilities to improve attack detection.

AI threat intelligence products that use machine learning work by taking inputs, analyzing them and producing outputs. For attack detection, machine learning's inputs include threat intelligence, and its outputs are either alerts indicating attacks or automated actions stopping attacks. If the threat intelligence has errors, it will give "bad" information to the attack detection tools, so the tools' machine learning algorithms may produce "bad" outputs.

Many organizations subscribe to multiple sources of threat intelligence. These include feeds, which contain machine-readable signs of attacks, like the IP addresses of computers issuing attacks and the file names used by malware. Other sources of threat intelligence are services, which generally provide human-readable prose describing the newest threats. Machine learning can use feeds but not services.

Organizations should use the highest quality threat intelligence feeds for machine learning. Characteristics to consider include the following:

It's hard to directly evaluate the quality of threat intelligence, but you can indirectly evaluate it based on the number of false positives that occur from using it. High-quality threat intelligence should lead to minimal false positives when it's used directly by detection tools -- without machine learning.

False positives are a real concern if you're using threat intelligence with machine learning to do things like automatically block attacks. Mistakes will disrupt benign activity and could negatively affect operations.

Ultimately, threat intelligence is just one part of assessing risk. Another part is understanding context -- like the role, importance and operational characteristics of each computer. Providing contextual information to machine learning can help it get more value from threat intelligence. Suppose threat intelligence indicates a particular external IP address is malicious. Detecting outgoing network traffic from an internal database server to that address might merit a different action than outgoing network traffic to the same address from a server that sends files to subscribers every day.
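A stripped-down sketch of that kind of context-aware triage might look like the following; the indicator set, asset inventory and response labels are all invented for illustration.

```python
# Invented threat-intelligence indicators (from a machine-readable feed) and asset context
MALICIOUS_IPS = {"203.0.113.7", "198.51.100.23"}
ASSET_CONTEXT = {
    "10.0.0.5": {"role": "database", "talks_to_internet_normally": False},
    "10.0.0.9": {"role": "file-distribution", "talks_to_internet_normally": True},
}

def triage(src_ip, dst_ip):
    """Combine an indicator match with asset context to pick a proportionate response."""
    if dst_ip not in MALICIOUS_IPS:
        return "allow"
    ctx = ASSET_CONTEXT.get(src_ip, {})
    if not ctx.get("talks_to_internet_normally", True):
        return "block and page on-call"   # a database server reaching a bad IP is high severity
    return "alert for analyst review"     # expected outbound traffic: flag it, don't break it

print(triage("10.0.0.5", "203.0.113.7"))  # block and page on-call
print(triage("10.0.0.9", "203.0.113.7"))  # alert for analyst review
```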

The toughest part of using machine learning is providing the actual learning. Machine learning needs to be told what's good and what's bad, as well as when it makes mistakes so it can learn from them. This requires frequent attention from skilled humans. A common way of teaching machine learning-enabled technologies is to put them into a monitor-only mode where they identify what's malicious but don't block anything. Humans review the machine learning tool's alerts and validate them, letting it know which were erroneous. Without feedback from humans, machine learning can't improve on its mistakes.
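One way such a monitor-only feedback loop could be wired up, sketched with scikit-learn's incremental learner and invented feature vectors, is to feed analyst verdicts back in as training labels:

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

model = SGDClassifier(random_state=0)   # runs in monitor-only mode: it alerts, never blocks
CLASSES = np.array([0, 1])              # 0 = benign, 1 = malicious

def review_batch(alert_features, analyst_verdicts):
    """Analysts confirm or reject each alert; their verdicts become the training labels."""
    X = np.asarray(alert_features, dtype=float)
    y = np.asarray(analyst_verdicts)
    model.partial_fit(X, y, classes=CLASSES)  # the model improves only because humans labeled its output

# Invented example: two reviewed alerts, one false positive and one confirmed attack
review_batch([[0.1, 3.0, 120.0], [0.9, 45.0, 8000.0]], [0, 1])
```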

Conventional wisdom is to avoid relying on AI threat intelligence that uses machine learning to detect attacks because of concern over false positives. That makes sense in some environments, but not in others. Older detection techniques are more likely to miss the latest attacks, which may not follow the patterns those techniques typically look for. Machine learning can help security teams find the latest attacks, but with potentially higher false positive rates. If missing attacks is a greater concern than the resources needed to investigate additional false positives, then more reliance on automation utilizing machine learning may make sense to protect those assets.

Many organizations will find it best to use threat intelligence without machine learning for some purposes, and to get machine learning-generated insights for other purposes. For example, threat hunters might use machine learning to get suggestions of things to investigate that would have been impossible for them to find in large threat intelligence data sets. Also, don't forget about threat intelligence services -- their reports can provide invaluable insights for threat hunters on the newest threats. These insights often include things that can't easily be automated into something machine learning can process.

Originally posted here:
AI threat intelligence is the future, and the future is now - TechTarget

Machine Learning and How it Is Transforming Transportation – IT Business Net

If you are in any way connected to the computer world, you have heard of the term machine learning. It is such an important concept, but it has been used as a buzzword so much that it is starting to lose its effectiveness. That said, machine learning is one of the most important developments in the computing world and if it can be utilized to its full potential, it is set to revolutionize the way we use computers. Because of its versatility and flexibility, machine learning can be used in almost any industry where tasks can be automated. These are industries where machines can learn to think like humans and be able to perform at the same level as or even better than humans. One of these areas is transportation.

When many people think about artificial intelligence (AI) and machine learning as it ties into the automotive industry, they think about driverless cars and fleets of cars communicating with each other in real-time. While this is one part of it, there is so much more. Machine learning can be used to:

When doing their research, scientists and computer programmers are starting to look at machine learning at a higher level and using it to revolutionize the engine and to help decision-makers make the best decisions about transportation systems.

In the past, computer programmers had to write code that told the computer what to do in specific situations. This code would get more complex and unmaintainable as computer programmers tried to plan for and provide code for every case their program would encounter. Now, programmers can write the base code and use neural networks to train computers on what to do in all these different scenarios. Because computers are able to crunch data faster than we could, they are able to discover cases we never could.

Now, computer programmers feed machine learning algorithms using:

The machines are then asked to find a relationship between the two. Once that is done, the resulting data is used to create models that are used to make predictions.
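As a minimal sketch of that input-output pairing (with invented numbers standing in for real traffic observations), a regression model fitted on hour of day, rainfall and traffic volume can then predict travel time for an unseen scenario:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Invented inputs: [hour of day, rainfall (mm), vehicles per hour]; output: travel time in minutes
X = np.array([[7, 0.0, 1200], [9, 2.5, 1500], [13, 0.0, 600], [17, 5.0, 1700], [22, 0.0, 300]])
y = np.array([34, 48, 21, 55, 17])

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
print(model.predict([[8, 1.0, 1400]]))  # predicted travel time for a new scenario
```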

Researchers are using machine learning to explore how transportation systems are designed. This helps them understand what issues are contained therein and how they affect entire transportation systems.

Their research will help transportation departments:

Understanding the complexity of transportation systems is almost impossible unless researchers comb through a huge amount of data. Machine learning can help them not only decipher this data, but also help them find trends and relationships and see how both of these affect transportation systems.

The insights that come out of such explorations will help:

These insights also help with decision making as they can help people and autonomous vehicles make better decisions, help coordinate emergency responses and help planners minimize the impact of the disruption of a transportation system in a given area.

Machine learning is also being used to optimize engine designs and the processes used to produce these engines. For example, researchers have been able to develop new combustion models using machine learning. These models have reduced the amount of time it takes to complete engine combustion simulations.

Using neural networks, researchers have also been able to model complex properties that were previously not available. Now, scientists can create complex reaction pathways to see how combustion happens inside new engine models. Because of this, researchers and automotive manufacturers are able to better optimize their engines.

In the past, researchers were forced to reduce the complexity of their combustion models. This is because they did not have powerful tools to help them carry out complex simulations. This led to data that was not as accurate as it should have been. All this has now changed with the advancement of machine learning, deep learning, AI and neural networks.

Computers that run machine learning models are very good at making predictions using past data. This data can be used to optimize route planning for both drivers and fleet managers. Machine learning can help these parties understand:

Once drivers and fleet operators understand all these things, they can choose cars and routes that save fuel while saving time and maximizing transportation efficiency.

The only way to understand what is going to happen in the future is to make predictions that are as accurate as possible. This has been enabled by the use of machine learning. Machine learning is being used to predict how transportation systems will look in the future. Researchers are doing this with the aim of predicting the impact of the transportation system on the world around us as it continues to grow and how this growing transportation system will impact energy needs.

Researchers are forced to model their predictions using:

Using these predictions, researchers can see how different technologies will impact transportation systems of the future. This allows them to focus on the technologies that will have the most impact.

Machine learning has brought us new modes of transport. These include autonomous cars, driverless shuttles and more. You can click here to learn more about how future transportation is likely to look. Perhaps the most common mode of transportation impacted by machine learning is autonomous cars.

Autonomous cars are fitted with computers that run different scenarios as they drive or are driven around. This computer makes it possible for this car to identify:

All this data is used to identify the safest route to follow to avoid collisions and keep transportation systems as safe as possible. As it stands, these cars need a human to always be behind the wheel in case of an emergency. As this technology matures and the computers in autonomous cars become more powerful, we will have cars that can drive themselves. The possibilities are both exciting and endless.

Perhaps we will have other autonomous modes of transportation like driverless trucks and autonomous airplanes. At this point, we can only speculate.

It is almost impossible to talk about the future of transportation without talking about 5G technology. 5G is the fifth generation of mobile communication and it comes with so many advantages. The most important of these are:

As we look into the future of driverless fleets of cars, it becomes clear that these cars need some way to communicate with each other. This is for purposes like overtaking, turning at junctions, giving right of way and more.

Ideally, we want these cars to communicate in real-time or as close to it as we can get. With low latency times and fast speeds, 5G stands as the best option for this purpose. Of course, communication technology will continue to evolve and we might see better speeds and lower latency in the future. That said, we already have something we can use to enable fleets of driverless cars.

Machine learning is a very complex topic with both upsides and downsides, and we are just starting to see its potential. That said, there are some upsides that we are already seeing:

Machine learning has some downsides too. One of the biggest ones is job loss. While machine learning creates jobs in some sectors, it will lead to massive job losses in the transportation sector. Just think about all the drivers who will be left without a job if we switch to driverless cars. All these taxi and long-haul drivers will have to find new jobs.

There is no denying that machine learning is here and that it will revolutionize the transportation sector. Its impact on reducing fuel consumption and the time it takes to get from one place to another is touted as its biggest achievement, as is the development of fuel-efficient engines, something that will have a massive positive impact on the environment.

View post:
Machine Learning and How it Is Transforming Transportation - IT Business Net

How machine learning can bridge the communication gap – ComputerWeekly.com

In October 2019, an Amazon employee in Melbourne, Australia, bumped into another person while cycling on the road. As she was assuring that person that she would help, she realised he was deaf and mute and had no idea what she was saying.

That awkward situation could have been avoided if assistive technology was on hand to facilitate communication between the two parties. Following the incident, a team led by Santanu Dutt, head of technology for Southeast Asia at Amazon Web Services, got down to work.

Within 10 days or so, Dutt's team had built a machine learning model that was trained on sign languages. Using images of a person gesturing in sign language that were captured from a camera, the model could recognise and translate gestures into text. The model also could convert spoken words into text for a deaf-mute person to see.

Dutt said the model can also be customised to translate speech into sign languages, as the machine learning services and application programming interfaces (APIs) are available and open, although he has not seen that demand yet. "But once you write a small bit of code, training the machine learning model is easy," he said.

But there is still work to be done. As the training was performed with signs gestured against a white background, the efficacy of the model in its current form would be limited in actual use.

"Our team had limited time to showcase this and we wanted to bump up something to showcase for experimental purposes," said Dutt, adding that organisations can use tools such as Amazon SageMaker to edit and train the model with more images and videos to recognise a larger variety of environments.

As the training process is intensive, Dutt said organisations with limited resources can use Amazon SageMaker Ground Truth to build training datasets for such machine learning models quickly. Besides automatic labelling, Ground Truth also provides access to human labellers through the Amazon Mechanical Turk crowdsourcing service.

This will also help to improve the model's accuracy rate. "The more data you have, the more accurate the model gets," said Dutt, adding that developers can set confidence levels and reject results that fall below a certain level of accuracy.
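The confidence-threshold idea is simple to express in code. The sketch below uses an arbitrary cut-off and made-up class probabilities rather than any specific AWS API.

```python
import numpy as np

CONFIDENCE_THRESHOLD = 0.85  # hypothetical cut-off chosen by the developer

def translate_or_reject(class_probabilities, labels):
    """Return the predicted sign only when the model is confident enough; otherwise reject."""
    probs = np.asarray(class_probabilities)
    best = int(np.argmax(probs))
    if probs[best] < CONFIDENCE_THRESHOLD:
        return None  # low confidence: ask the signer to repeat the gesture
    return labels[best]

print(translate_or_reject([0.05, 0.92, 0.03], ["hello", "thank you", "help"]))  # thank you
print(translate_or_reject([0.40, 0.35, 0.25], ["hello", "thank you", "help"]))  # None
```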

Dutt said AWS's public sector team has engaged non-profit organisations in Australia to conduct a proof of concept that makes use of the machine learning model, as well as those in other countries through credits that offset the cost of using AWS services to train and deploy the model.

Read more here:
How machine learning can bridge the communication gap - ComputerWeekly.com

From streaming hive data to acoustics, SAS uses machine learning, analytics to boost bee populations – WRAL Tech Wire

CARY - SAS wants to help save the world's No. 1 food crop pollinator: the honey bee. And it's doing so right in the Triangle's backyard.

To coincide with World Bee Day, the Cary-based software analytics firm today confirmed it is working on three separate projects where technology is monitoring, tracking and improving pollinator populations around the globe.

They include observing real-time conditions of beehives using an acoustic streaming system; working with Appalachian State University on the World Bee Count to visualize world bee population data; and decoding bee communication to maximize their food access.

"By applying advanced analytics and artificial intelligence to beehive health, we have a better shot as a society to secure this critically important part of our ecosystem and, ultimately, our food supply," said Oliver Schabenberger, COO and CTO of SAS, in a statement.

Researchers from the SAS IoT Division are developing a bioacoustic monitoring system to non-invasively track real-time conditions of beehives using digital signal processing tools and machine learning algorithms available in SAS Event Stream Processing and SAS Viya software.

By connecting sensors to SAS's four Bee Downtown hives at its headquarters in Cary, NC, the team started streaming hive data directly to the cloud to continuously measure data points in and around the hive, including weight, temperature, humidity, flight activity and acoustics. In-stream machine learning models were used to listen to the hive sounds, which can indicate health, stress levels, swarming activities and the status of the queen bee.

To ensure only the hum of the hive was being used to determine the bees' health and happiness, researchers used robust principal component analysis (RPCA), a machine learning technique, to separate extraneous or irrelevant noises from the inventory of sounds collected by hive microphones.
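SAS has not published its code, but RPCA itself is a standard decomposition of a data matrix into a low-rank part (such as the steady hive hum) plus a sparse part (transient, irrelevant noise). A generic sketch via principal component pursuit, applied to something like a spectrogram matrix, might look like this:

```python
import numpy as np

def rpca(M, lam=None, mu=None, tol=1e-7, max_iter=500):
    """Decompose M into low-rank L plus sparse S via principal component pursuit (inexact ALM)."""
    m, n = M.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    mu = mu if mu is not None else (m * n) / (4.0 * np.abs(M).sum())
    norm_M = np.linalg.norm(M, "fro")
    L, S, Y = np.zeros_like(M), np.zeros_like(M), np.zeros_like(M)

    def shrink(X, tau):        # element-wise soft-thresholding
        return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

    def svd_shrink(X, tau):    # singular-value thresholding
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        return U @ np.diag(shrink(s, tau)) @ Vt

    for _ in range(max_iter):
        L = svd_shrink(M - S + Y / mu, 1.0 / mu)
        S = shrink(M - L + Y / mu, lam / mu)
        residual = M - L - S
        Y += mu * residual
        if np.linalg.norm(residual, "fro") / norm_M < tol:
            break
    return L, S

# e.g. M could be a (frequency x time) spectrogram of hive audio:
# hum, noise = rpca(spectrogram)
```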

The researchers found that with RPCA capabilities, they could detect worker bees piping at the same frequency range at which a virgin queen pipes after a swarm, likely to assess whether a queen was present. The researchers then designed an automated pipeline to detect either queen piping following a swarm or worker piping that occurs when the colony is queenless.

SAS said the acoustic analysis can alert beekeepers to queen disappearances immediately, which is vitally important to significantly reducing colony loss rates. It's estimated that annual loss rates of US beehives exceed 40 percent and that between 25 and 40 percent of these losses are due to queen failure.

With this system, SAS said beekeepers will have a deeper understanding of their hives without having to conduct time-consuming and disruptive manual inspections.

"As a beekeeper myself, I know the magnitude of bees' impact on our ecosystem, and I'm inspired to find innovative ways to raise healthier bees to benefit us all," said Anya McGuirk, Distinguished Research Statistician Developer in the IoT division at SAS.

The researchers said they plan to implement the acoustic streaming system very soon and are continuing to look for ways to broaden the usage of technology to help honey bees and ultimately humankind.

SAS is also launching a data visualization that maps out bees counted around the globe for the World Bee Count, an initiative co-founded by the Center for Analytics Research and Education (CARE) at Appalachian State University.

The goal: to engage citizens across the world to take pictures of bees as a first step toward understanding the reasons for their alarming decline, SAS says.

"The World Bee Count allows us to crowdsource bee data to both visualize our planet's bee population and create one of the largest, most informative data sets about bees to date," said Joseph Cazier, Professor and Executive Director at Appalachian State University's CARE, in a statement.

In early May, the World Bee Count app was launched for users (both beekeepers and the general public, aka citizen data scientists) to add data points to the Global Pollinator Map. Within the app, beekeepers can enter the number of hives they have, and any user can submit pictures of pollinators from their camera roll or through the in-app camera. Through SAS Visual Analytics, SAS has created a visualization map to display the images users submit via the app, which, it says, could potentially provide insights about the conditions that lead to the healthiest bee populations.

In future stages of this project, SAS said, the robust data set created from the app could help groups like universities and research institutes better strategize ways to save these vital creatures.

Representing the Nordic region, a team from Amesto NextBridge won the 2020 SAS EMEA Hackathon, which challenged participants to improve sustainability using SAS Viya. Their winning project used machine learning to maximize bees' access to food, which would in turn benefit mankind's food supply.

In partnership with Beefutures, the team developed a system capable of automatically detecting, decoding and mapping bee waggle dances using Beefutures' observation hives and SAS Viya.

"Observing all of these dances manually is virtually impossible, but by using video footage from inside the hives and training machine learning algorithms to decode the dance, we will be able to better understand where bees are finding food," said Kjetil Kalager, lead of the Amesto NextBridge and Beefutures team. "We implemented this information, along with hive coordinates, sun angle, time of day and agriculture around the hives into an interactive map in SAS Viya, and then beekeepers can easily decode this hive information and relocate to better suited environments if necessary."

SAS said this systematic real-time monitoring of waggle dances allows bees to act as sensors for their ecosystems. It may also uncover other information bees communicate through dance that could help us save and protect their population.

Excerpt from:
From streaming hive data to acoustics, SAS uses machine learning, analytics to boost bee populations - WRAL Tech Wire

Teaching machine learning to check senses may avoid sophisticated attacks – University of Wisconsin-Madison

Complex machines that steer autonomous vehicles, set the temperature in our homes and buy and sell stocks with little human control are built to learn from their environments and act on what they see or hear. They can be tricked into grave errors by relatively simple attacks or innocent misunderstandings, but they may be able to help themselves by mixing their senses.

In 2018, a group of security researchers managed to befuddle object-detecting software with tactics that appear so innocuous it's hard to think of them as attacks. By adding a few carefully designed stickers to stop signs, the researchers fooled the sort of object-recognizing computer that helps guide driverless cars. The computers saw an umbrella, bottle or banana but no stop sign.

Two multi-colored stickers attached to a stop sign were enough to disguise it to the eyes of an image-recognition algorithm as a bottle, banana and umbrella. UW-Madison

"They did this attack physically: added some clever graffiti to a stop sign, so it looks like some person just wrote on it or something, and then the object detectors would start seeing it as a speed limit sign," says Somesh Jha, a University of Wisconsin-Madison computer sciences professor and computer security expert. "You can imagine that if this kind of thing happened in the wild, to an auto-driving vehicle, that could be really catastrophic."

The Defense Advanced Research Projects Agency has awarded a team of researchers led by Jha a $2.7 million grant to design algorithms that can protect themselves against potentially dangerous deception. Joining Jha as co-investigators are UW-Madison Electrical and Computer Engineering Professor Kassem Fawaz, University of Toronto Computer Sciences Professor Nicolas Papernot, and Atul Prakash, a University of Michigan professor of Electrical Engineering and Computer Science and an author of the 2018 study.

Kassem Fawaz

One of Prakash's stop signs, now an exhibit at the Science Museum of London, is adorned with just two narrow bands of disorganized-looking blobs of color. Subtle changes can make a big difference to object- or audio-recognition algorithms that fly drones or make smart speakers work, because they are looking for subtle cues in the first place, Jha says.

The systems are often self-taught through a process called machine learning. Instead of being programmed into rigid recognition of a stop sign as a red octagon with specific, blocky white lettering, machine learning algorithms build their own rules by picking distinctive similarities from images that the system may know only to contain or not contain stop signs.

"The more examples it learns from, the more angles and conditions it is exposed to, the more flexible it can be in making identifications," Jha says. "The better it should be at operating in the real world."

But a clever person with a good idea of how the algorithm digests its inputs might be able to exploit those rules to confuse the system.

"DARPA likes to stay a couple steps ahead," says Jha. "These sorts of attacks are largely theoretical now, based on security research, and we'd like them to stay that way."

A military adversary, however, or some other organization that sees advantage in it, could devise these attacks to waylay sensor-dependent drones or even trick largely automated commodity-trading computers into bad buying and selling patterns.

Somesh Jha

"What you can do to defend against this is something more fundamental during the training of the machine learning algorithms to make them more robust against lots of different types of attacks," says Jha.

One approach is to make the algorithms multi-modal. Instead of a self-driving car relying solely on object-recognition to identify a stop sign, it can use other sensors to cross-check results. Self-driving cars or automated drones have cameras, but often also GPS devices for location and laser-scanning LIDAR to map changing terrain.

"So, while the camera may be saying, 'Hey, this is a 45-mile-per-hour speed limit sign,' the LIDAR says, 'But wait, it's an octagon. That's not the shape of a speed limit sign,'" Jha says. "The GPS might say, 'But we're at the intersection of two major roads here, that would be a better place for a stop sign than a speed limit sign.'"
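A cartoon version of that cross-check, with invented labels and contexts, only accepts the camera's verdict when at least one other sensor corroborates it:

```python
EXPECTED_SHAPE = {"stop": "octagon", "speed_limit_45": "rectangle"}  # invented lookup table

def cross_check(camera_label, lidar_shape, gps_context):
    """Accept the camera's label only if LIDAR geometry or GPS context backs it up."""
    corroborations = 0
    if EXPECTED_SHAPE.get(camera_label) == lidar_shape:
        corroborations += 1
    if camera_label == "stop" and gps_context == "major_intersection":
        corroborations += 1
    if camera_label.startswith("speed_limit") and gps_context == "open_road":
        corroborations += 1
    return camera_label if corroborations >= 1 else "uncertain: fall back to cautious behavior"

# The stickered sign: camera says speed limit, LIDAR sees an octagon, GPS sees an intersection
print(cross_check("speed_limit_45", "octagon", "major_intersection"))  # uncertain
```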

The trick is not to over-train, constraining the algorithm too much.

"The important consideration is how you balance accuracy against robustness against attacks," says Jha. "I can have a very robust algorithm that says every object is a cat. It would be hard to attack. But it would also be hard to find a use for that."


See the rest here:
Teaching machine learning to check senses may avoid sophisticated attacks - University of Wisconsin-Madison

How Machine Learning Is Redefining The Healthcare Industry – Small Business Trends

The global healthcare industry is booming. As per recent research, it is expected to cross the $2 trillion mark this year, despite the sluggish economic outlook and global trade tensions. Human beings, in general, are living longer and healthier lives.

There is increased awareness about living organ donation. Robots are being used for gallbladder removals, hip replacements, and kidney transplants. Early diagnosis of skin cancers with minimum human error is a reality. Reconstructive surgeries have enabled breast cancer survivors to rebuild their breasts.

All these jobs were unthinkable sixty years ago. Now is an exciting time for the global health care sector as it progresses along its journey for the future.

However, as the worldwide population of 7.7 billion is likely to reach 8.5 billion by 2030, meeting health needs could be a challenge. That is where significant advancements in machine learning (ML) can help identify infection risks, improve the accuracy of diagnostics, and design personalized treatment plans.

source: Deloitte Insights 2020 global health care outlook

In many cases, this technology can even enhance workflow efficiency in hospitals. The possibilities are endless and exciting, which brings us to an essential segment of the article:

Do you understand the concept of the LACE index?

Designed in Ontario in 2004, it identifies patients who are at risk of readmission or death within 30 days of being discharged from the hospital. The calculation is based on four factors: the patient's length of stay in the hospital, acuity of admission, comorbidities (concurrent diseases), and emergency room visits.

The LACE index is widely accepted as a quality-of-care barometer and is famously based on the theory behind machine learning: using patients' past health records to predict their future state of health. It enables medical professionals to allocate resources on time and reduce the mortality rate.
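For readers who want to see the arithmetic, here is a minimal sketch of a LACE calculation in Python. The point bands follow the commonly published scoring (length of stay, acuity of admission, Charlson comorbidity index, emergency visits); anyone using this clinically should verify them against the original Ontario definition.

```python
def lace_score(length_of_stay_days, acute_admission, charlson_index, ed_visits_6mo):
    """Rough sketch of the LACE readmission-risk score (L + A + C + E)."""
    # L: length of stay in hospital
    if length_of_stay_days < 1:
        l = 0
    elif length_of_stay_days <= 3:
        l = length_of_stay_days
    elif length_of_stay_days <= 6:
        l = 4
    elif length_of_stay_days <= 13:
        l = 5
    else:
        l = 7
    a = 3 if acute_admission else 0                   # A: acuity of admission
    c = charlson_index if charlson_index < 4 else 5   # C: comorbidity burden
    e = min(ed_visits_6mo, 4)                         # E: ER visits, capped at 4
    return l + a + c + e

# A 5-day acute admission, Charlson index 2, one recent ER visit:
print(lace_score(5, True, 2, 1))  # -> 10, often treated as the high-risk cut-off
```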

This technological advancement has started to lay the foundation for closer collaboration among industry stakeholders, affordable and less invasive surgery options, holistic therapies, and new care delivery models. Here are five examples of current and emerging ML innovations:

From the initial screening of drug compounds to calculating the success rate of a specific medicine based on a patient's physiological factors: the Knight Cancer Institute in Oregon and Microsoft's Project Hanover are currently applying this technology to personalize drug combinations for treating blood cancer.

Machine learning has also given rise to new methodologies such as precision medicine and next-generation sequencing that can ensure a drug has the right effect on patients. For example, today, medical professionals can develop algorithms to understand disease processes and design innovative treatments for ailments like Type 2 diabetes.

Signing up volunteers for clinical trials is not easy. Many filters have to be applied to see who is fit for the study. With machine learning, collecting patient data such as past medical records, psychological behavior, family health history, and more is easy.

In addition, the technology is also used to monitor the biological metrics of the volunteers and the possible harm of the clinical trials in the long run. With such compelling data in hand, medical professionals can reduce the trial period, thereby reducing overall costs and increasing experiment effectiveness.

Every human body functions differently. Reactions to a food item, medicine, or season differ. That is why we have allergies. When such is the case, why is customizing treatment options based on the patient's medical data still such an odd thought?

Machine learning helps medical professionals determine the risk for each patient based on their symptoms, past medical records, and family history, drawing on data from micro bio-sensors. These minute gadgets monitor patient health and flag abnormalities without bias, enabling more sophisticated ways of measuring health.
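As a toy illustration of how such a sensor stream might be flagged, the sketch below marks a reading that drifts far from the patient's own baseline; the z-score rule and threshold are assumptions for illustration, not a clinical standard.

```python
import numpy as np

# Flag a vital-sign reading that deviates more than three standard deviations
# from the patient's own recent baseline.
def flag_abnormal(readings, new_value, z_threshold=3.0):
    baseline = np.mean(readings)
    spread = np.std(readings) or 1e-6   # avoid division by zero on flat signals
    return abs(new_value - baseline) / spread > z_threshold

heart_rate_history = [72, 75, 71, 74, 73, 76, 72]
print(flag_abnormal(heart_rate_history, 118))  # True -> notify the care team
```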

Cisco reports that machine-to-machine connection in global healthcare is growing at a 30% CAGR, the highest of any industry.

Machine learning is mainly used to mine and analyze patient data, find patterns, and diagnose many medical conditions, one of them being skin cancer.

Over 5.4 million people in the US are diagnosed with this disease annually. Unfortunately, diagnosis is a visual and time-consuming process, relying on long clinical screenings comprising a biopsy, dermoscopy, and histopathological examination.

But machine learning changes all that. Moleanalyzer, an Australia-based AI software application, calculates and compares the size, diameter, and structure of moles. It enables the user to take pictures at predefined intervals to help differentiate between benign and malignant lesions on the skin.

The analysis lets oncologists confirm their skin cancer diagnosis using evaluation techniques combined with ML, and they can start treatment faster than usual. Where experts correctly identified malignant skin tumors only 86.6% of the time, Moleanalyzer successfully detected 95%.
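Moleanalyzer's own pipeline is proprietary and works on the images themselves; purely to illustrate the shape of a benign-versus-malignant classifier, here is a sketch trained on synthetic, hand-made lesion features. Every number and feature name below is invented.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in data: each lesion described by crude features
# (diameter in mm, asymmetry score, colour variance).
rng = np.random.default_rng(1)
n = 500
benign = np.column_stack([rng.normal(4, 1, n), rng.uniform(0, 0.3, n), rng.uniform(0, 0.2, n)])
malignant = np.column_stack([rng.normal(8, 2, n), rng.uniform(0.3, 1, n), rng.uniform(0.2, 1, n)])
X = np.vstack([benign, malignant])
y = np.array([0] * n + [1] * n)        # 0 = benign, 1 = malignant

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

new_lesion = [[7.5, 0.6, 0.5]]          # diameter, asymmetry, colour variance
print(clf.predict_proba(new_lesion))    # [probability benign, probability malignant]
```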

Healthcare providers ideally have to submit reports to the government with the necessary records of patients treated at their hospitals.

Compliance policies are continually evolving, which is why it is even more critical to check that hospital sites are compliant and functioning within legal boundaries. With machine learning, it is easy to collect data from different sources using different methods and to format it correctly.

"For data managers, comparing patient data from various clinics to ensure they are compliant could be an overwhelming process. Machine learning helps gather, compare, and maintain that data as per the standards laid down by the government," says Dr. Nick Oberheiden, Founder and Attorney, Oberheiden P.C.

The healthcare industry is steadily transforming through innovative technologies like AI and ML. The latter will soon get integrated into practice as a diagnostic aid, particularly in primary care. It plays a crucial role in shaping a predictive, personalized, and preventive future, making treating people a breeze. What are your thoughts?

Image: Depositphotos.com

Continue reading here:
How Machine Learning Is Redefining The Healthcare Industry - Small Business Trends

Democratizing Data-Driven Processes Through AutoML for Better Business Prospects – Analytics Insight

Data Science and Machine Learning are among the most widely deployed and useful technologies in the current marketplace. And as their utility increases, a new wave of advancements hits the industry. To add an extra edge to what Data Science and ML can achieve, we now have AutoML (Automated Machine Learning) platforms. AutoML is among the top trends in the contemporary data market, with most of the big tech companies investing in its successful incorporation. Companies including Google, Amazon, and Microsoft have already embraced AutoML in their business processes to accelerate the effectiveness of their operations and products. Considered a quiet revolution in AI, the technology has transformed the entire data science landscape while offering a great deal to modern-day businesses.

Automated machine learning (AutoML) automates the end-to-end process of applying machine learning algorithms to real-world problems. One of its most notable features is that even people with no data science or ML expertise can work with such a platform to achieve the desired outcomes.

According to a Gartner survey, it takes around four years to take an AI project live, which does not keep pace with rising demand and shifting market dynamics. And, according to statistics, huge investments in data and AI projects are successful only 15% of the time. However, with current trends and AutoML platforms, small AI projects can be produced in a short period of time.

Moreover, soaring demand for machine learning systems does not guarantee the successful deployment of ML models across a wide range of applications. Success requires a proficient team of seasoned data scientists who can decide which model is best for a particular business problem, and the shortage of data science talent leaves that need unmet. Enter the AutoML platform, which automates as many steps of the ML pipeline as possible, reducing human effort without compromising the quality of performance.
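A miniature version of that idea can be sketched with off-the-shelf scikit-learn: try several model families and hyperparameter settings, cross-validate each, and keep the best. Commercial AutoML platforms add automated feature engineering, ensembling, and meta-learning on top, but the core loop looks something like this.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)

# Candidate model families and the hyperparameters to search for each.
candidates = [
    (Pipeline([("scale", StandardScaler()), ("model", LogisticRegression(max_iter=5000))]),
     {"model__C": [0.1, 1.0, 10.0]}),
    (Pipeline([("model", RandomForestClassifier(random_state=0))]),
     {"model__n_estimators": [100, 300], "model__max_depth": [None, 8]}),
]

# Cross-validate every candidate and keep whichever scores best.
best = max(
    (GridSearchCV(est, grid, cv=5).fit(X, y) for est, grid in candidates),
    key=lambda search: search.best_score_,
)
print(best.best_estimator_)
print("cross-validated accuracy:", best.best_score_)
```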

Have you heard of Mercari? Mercari is a popular online shopping app in Japan. The company uses Google's AutoML tooling to improve its image classification. Using a UI for uploading photos, Mercari's app can identify and suggest brand names from over 12 major brands through a customized AutoML pipeline.

Leveraging Google's AutoML platform enabled the company to customize ML models that successfully identify over 50,000 images with an accuracy of 91.3%.

Moreover, the implementation of automated machine learning across physical retail stores is redefining their future with rich business benefits, including better sales forecasting. By analyzing current customer data and the purchasing season, an AutoML platform can help retail businesses improve their sales prospects. This can subsequently reduce unused inventory costs and wasteful promotions.

While leveraging AutoML to enhance business effectiveness and productivity, brands can also improve the customer experience through personalization.

For any business in any industry, AutoML is bound to reduce costs and increase productivity for data scientists, while the democratization of machine learning reduces the demand for specialists. The technology also helps accelerate revenues and customer satisfaction. AutoML models with enhanced accuracy can improve other, less tangible business results too.


Here is the original post:
Democratizing Data-Driven Processes Through AutoML for Better Business Prospects - Analytics Insight

This AI tool uses machine learning to detect whether people are social distancing properly – Mashable SE Asia

Perhaps the most important step we can all take to mitigate the spread of the coronavirus that causes COVID-19 is to actively practice social distancing.

Why? Because the further away you are from another person, the less likely you'll contract or transmit COVID-19.

But when we go about our daily routines, especially when out on a grocery run or heading to the hospital, social distancing can be a challenging task to uphold.

And some of us just have God awful spatial awareness in general.

But how do we monitor and enforce social distancing when looking at a mass population? We resort to the wonders of artificial intelligence (AI), of course.

In a recent blog post, Landing AI demonstrated a nifty social distancing detector that shows a feed of people walking along a street in the Oxford Town Center of the United Kingdom.

The tool encompasses every individual in the feed with a rectangle. When they're properly observing social distancing, that rectangle is green. But when they get too close to another person (less than 6 feet away), the rectangle turns red, accompanied by a line 'linking' the two people that are too close to one another.

On the right-hand side of the tool there's a 'Bird's-Eye View' that allows for monitoring on a bigger scale. Every person is represented by a dot. Working the same way as the rectangles, the dots are green when social distancing is properly adhered to. They turn red when people get too close.
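Landing AI has not published its implementation, but the geometric core of such a detector is easy to sketch: once each detected person has been projected to a top-down ("bird's-eye") ground plane, flag any pair closer than six feet. The positions and IDs below are invented for illustration.

```python
from itertools import combinations
import math

MIN_DISTANCE_FT = 6.0

def flag_violations(people):
    """people: {person_id: (x_ft, y_ft)} positions already projected onto a
    top-down ground plane, e.g. via a camera homography."""
    violations = []
    for (id_a, pos_a), (id_b, pos_b) in combinations(people.items(), 2):
        if math.dist(pos_a, pos_b) < MIN_DISTANCE_FT:
            violations.append((id_a, id_b))
    return violations

people = {"p1": (0.0, 0.0), "p2": (4.0, 2.0), "p3": (20.0, 5.0)}
print(flag_violations(people))   # [('p1', 'p2')] -> draw these two boxes in red
```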

The tool is intended for work settings; more specifically, settings like factory floors where physical space is abundant, making manual tracking extremely difficult.

According to Landing AI CEO and Founder Andrew Ng, the technology was developed in response to requests from clients, which include Foxconn, the main manufacturer of Apple's prized iPhones.

The company also says that this technology can be integrated into existing surveillance cameras. However, it's still exploring ways in which to alert people when they get too close to each other. One possible method is the use of an audible alarm that rings when individuals breach the minimum distance required with other people.

According to Reuters, Amazon already uses a similar machine-learning tool to monitor its employees in their warehouses. In the name of COVID-19 mitigation, companies around the world are grabbing whatever machine-learning AI tools they can get in order to surveil their employees. A lot of these tools tend to be cheap, off-the-shelf iterations that allow employers to watch their employees and listen to phone calls as well.

Landing AI insists that their tool is only for use in work settings, even including a little disclaimer that reads "The rise of computer vision has opened up important questions about privacy and individual rights; our current system does not recognize individuals, and we urge anyone using such a system to do so with transparency and only with informed consent."

Whether companies that make use of this tool adhere to that, we'll never really know.

But we definitely don't want Big Brother to be watching our every move.

Cover image sourced from New Straits Times / AFP.

Link:
This AI tool uses machine learning to detect whether people are social distancing properly - Mashable SE Asia

Is Machine Learning Model Management The Next Big Thing In 2020? – Analytics India Magazine

ML and its services are only going to extend their influence and push the boundaries to new realms of the technology revolution. However, deploying ML comes with great responsibility. Though efforts are being made to shed its black-box reputation, it is crucial to establish trust with both in-house teams and stakeholders for fairer deployment. Companies have started to take machine learning model management more seriously now. Recently, Comet.ml, a machine learning company based out of Seattle and founded in 2017, announced a $4.5 million investment to bring state-of-the-art meta-learning capabilities to the market.

The tools developed by Comet.ml enable data scientists to track, compare, monitor, and optimise model development. The additional $4.5 million investment from existing investors Trilogy Equity Partners and Two Sigma Ventures is aimed at bringing machine learning model management techniques to more customers.

Since its product launch in 2018, Comet.ml has partnered with top companies like Google, General Electric, Boeing, and Uber. These customers use Comet.ml's enterprise-level toolkits to train models across industries spanning autonomous vehicles, financial services, technology, bioinformatics, satellite imagery, fundamental physics research, and more.

Talking about this new announcement, one of the investors, Yuval Neeman of Trilogy Equity Partners, noted that professionals from the best companies in the world choose Comet and that the company is well positioned to become the de facto machine learning development platform.

This platform, says Neeman, allows customers to build ML models that bring significant business value.

According to a report presented by researchers at Google, there are several ML-specific risk factors to account for in system design, such as entanglement, hidden feedback loops, undeclared consumers, unstable data dependencies, and pipeline jungles.

Debugging all these issues requires round-the-clock monitoring of the model's pipeline. For a company that implements ML solutions, it is challenging to manage in-house model mishaps.

If we take the example of Comet again, its platform provides a central place for the team to track their ML experiments and models, so that they can compare and share experiments, debug and take decisive actions on underperforming models with great ease.

Predictive early stopping is a meta-learning capability not seen in other experimentation platforms, and it can be achieved only by building on top of millions of public models. This is where Comet's enterprise products come in handy. The freedom of experimentation that these meta-learning-based platforms offer is something any organisation would value, and almost all ML-based companies would love to have such tools in their arsenal.
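Comet's predictive early stopping is proprietary and built on its corpus of public models; as a rough, hedged sketch of the underlying idea, a training run can be abandoned when a crude extrapolation of its learning curve cannot overtake the best result seen so far. The patience, margin, and extrapolation rule below are all assumptions.

```python
# Not Comet's algorithm: just a sketch of predictive early stopping --
# abandon a run whose curve is unlikely to beat the best result so far.
def should_stop(history, best_so_far, patience=5, margin=0.01):
    """history: validation scores for this run so far (higher is better)."""
    if len(history) < patience:
        return False
    recent_gain = history[-1] - history[-patience]
    # Optimistically assume the run keeps improving at twice its recent rate.
    optimistic_final = history[-1] + max(recent_gain, 0.0) * 2
    return optimistic_final + margin < best_so_far

# A run plateauing around 0.74 is stopped when 0.90 has already been achieved.
print(should_stop([0.70, 0.72, 0.73, 0.735, 0.737, 0.738], best_so_far=0.90))  # True
```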

On saving resources, Comet.ml stated in its press release that its platform improved model training time by 30% irrespective of the underlying infrastructure, and stopped underperforming models automatically, which reduces cost and carbon footprint by 30%.


The enterprise offering also includes Comet's flagship visualisation engine, which allows users to visualise, explain, and debug model performance and predictions, as well as a state-of-the-art parameter optimisation engine.

When building any machine learning pipeline, data preparation requires operations like scraping, sampling, joining, and plenty of others. These operations usually accumulate haphazardly and result in what software engineers like to call a pipeline jungle.

Now, add in the challenge of forgotten experimental code in the code archives, and things only get worse. Such stale code can malfunction, and an algorithm that runs this malfunctioning code can crash stock markets and self-driving cars. The risks are just too high.

So far, we have seen the use of ML for data-driven solutions. Now the market is ripe for solutions that help those who have already deployed machine learning. It is only a matter of time before we see more companies setting up their own meta-learning shops or partnering with third-party vendors.


See the original post here:
Is Machine Learning Model Management The Next Big Thing In 2020? - Analytics India Magazine

AI used to predict Covid-19 patients’ decline before proven to work – STAT

Dozens of hospitals across the country are using an artificial intelligence system created by Epic, the big electronic health record vendor, to predict which Covid-19 patients will become critically ill, even as many are struggling to validate the tool's effectiveness on those with the new disease.

The rapid uptake of Epic's deterioration index is a sign of the challenges imposed by the pandemic: normally hospitals would take time to test the tool on hundreds of patients, refine the algorithm underlying it, and then adjust care practices to implement it in their clinics.

Covid-19 is not giving them that luxury. They need to be able to intervene to prevent patients from going downhill, or at least make sure a ventilator is available when they do. Because it is a new illness, doctors don't have enough experience to determine who is at highest risk, so they are turning to AI for help, and in some cases cramming a validation process that often takes months or years into a couple of weeks.


"Nobody has amassed the numbers to do a statistically valid test of the AI," said Mark Pierce, a physician and chief medical informatics officer at Parkview Health, a nine-hospital health system in Indiana and Ohio that is using Epic's tool. "But in times like this that are unprecedented in U.S. health care, you really do the best you can with the numbers you have, and err on the side of patient care."

Epic's index uses machine learning, a type of artificial intelligence, to give clinicians a snapshot of the risks facing each patient. But hospitals are reaching different conclusions about how to apply the tool, which crunches data on patients' vital signs, lab results, and nursing assessments to assign a 0 to 100 score, with a higher score indicating an elevated risk of deterioration. It was already used by hundreds of hospitals before the outbreak to monitor hospitalized patients, and is now being applied to those with Covid-19.


At Parkview, doctors analyzed data on nearly 100 cases and found that 75% of hospitalized patients who received a score in a middle zone between 38 and 55 were eventually transferred to the intensive care unit. In the absence of a more precise measure, clinicians are using that zone to help determine who needs closer monitoring and whether a patient in an outlying facility needs to be transferred to a larger hospital with an ICU.

Meanwhile, the University of Michigan, which has seen a larger volume of patients due to a cluster of cases in that state, found in an evaluation of 200 patients that the deterioration index is most helpful for those who scored on the margins of the scale.

For about 9% of patients whose scores remained on the low end during the first 48 hours of hospitalization, the health system determined they were unlikely to experience a life-threatening event and that physicians could consider moving them to a field hospital for lower-risk patients. On the opposite end of the spectrum, it found 10% to 12% of patients who scored on the higher end of the scale were much more likely to need ICU care and should be closely monitored. More precise data on the results will be published in coming days, although they have not yet been peer-reviewed.
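Encoded as logic, a site-specific threshold policy is very simple; the sketch below uses the Parkview band quoted above (38-55) purely for illustration, since each hospital validates its own cut-offs and combines the score with clinical judgment.

```python
def triage_action(score):
    """Map an Epic-style 0-100 deterioration score to a monitoring action.
    Bands follow the Parkview example in the article and are illustrative only."""
    if score < 38:
        return "routine monitoring"
    if score <= 55:
        return "closer monitoring; consider transfer to a hospital with an ICU"
    return "high risk; escalate clinical review"

for s in (20, 45, 70):
    print(s, "->", triage_action(s))
```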

Clinicians in the Michigan health system have been using the score thresholds established by the research to monitor the condition of patients during rounds and in a command center designed to help manage their care. But clinicians are also considering other factors, such as physical exams, to determine how they should be treated.

"This is not going to replace clinical judgment," said Karandeep Singh, a physician and health informaticist at the University of Michigan who participated in the evaluation of Epic's AI tool. "But it's the best thing we've got right now to help make decisions."

Stanford University has also been testing the deterioration index on Covid-19 patients, but a physician in charge of the work said the health system has not seen enough patients to fully evaluate its performance. "If we do experience a future surge, we hope that the foundation we have built with this work can be quickly adapted," said Ron Li, a clinical informaticist at Stanford.

Executives at Epic said the AI tool, which has been rolled out to monitor hospitalized patients over the past two years, is already being used to support care of Covid-19 patients in dozens of hospitals across the United States. They include Parkview, Confluence Health in Washington state, and ProMedica, a health system that operates in Ohio and Michigan.

"Our approach as Covid was ramping up over the last eight weeks has been to evaluate: does it look very similar to (other respiratory illnesses) from a machine learning perspective, and can we pick up that rapid deterioration?" said Seth Hain, a data scientist and senior vice president of research and development at Epic. "What we found is yes, and the result has been that organizations are rapidly using this model in that context."

Some hospitals that had already adopted the index are simply applying it to Covid-19 patients, while others are seeking to validate its ability to accurately assess patients with the new disease. It remains unclear how the use of the tool is affecting patient outcomes, or whether its scores accurately predict how Covid-19 patients are faring in hospitals. The AI system was initially designed to predict deterioration of hospitalized patients facing a wide array of illnesses. Epic trained and tested the index on more than 100,000 patient encounters at three hospital systems between 2012 and 2016, and found that it could accurately characterize the risks facing patients.

When the coronavirus began spreading in the United States, health systems raced to repurpose existing AI models to help keep tabs on patients and manage the supply of beds, ventilators and other equipment in their hospitals. Researchers have tried to develop AI models from scratch to focus on the unique effects of Covid-19, but many of those tools have struggled with bias and accuracy issues, according to a review published in the BMJ.

The biggest question hospitals face in implementing predictive AI tools, whether to help manage Covid-19 or advanced kidney disease, is how to act on the risk score it provides. Can clinicians take actions that will prevent the deterioration from happening? If not, does it give them enough warning to respond effectively?

In the case of Covid-19, the latter question is the most relevant, because researchers have not yet identified any effective treatments to counteract the effects of the illness. Instead, they are left to deliver supportive care, including mechanical ventilation if patients are no longer able to breathe on their own.

Knowing ahead of time whether mechanical ventilation might be necessary is helpful, because doctors can ensure that an ICU bed and a ventilator or other breathing assistance is available.

Singh, the informaticist at the University of Michigan, said the most difficult part about making predictions based on Epic's system, which calculates a score every 15 minutes, is that patients' ratings tend to bounce up and down in a sawtooth pattern. A change in heart rate could cause the score to suddenly rise or fall. He said his research team found that it was often difficult to detect, or act on, trends in the data.

"Because the score fluctuates from 70 to 30 to 40, we felt like it's hard to use it that way," he said. "A patient who's high risk right now might be low risk in 15 minutes."
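One common way to damp that sawtooth is to act on a rolling average of the 15-minute scores rather than the raw values; the window length in this sketch is an assumption, not something Epic or the Michigan team has published.

```python
from collections import deque

class SmoothedScore:
    """Rolling average over the last k 15-minute scores to damp sawtooth jumps."""
    def __init__(self, k=8):            # roughly two hours of 15-minute scores
        self.window = deque(maxlen=k)

    def update(self, score):
        self.window.append(score)
        return sum(self.window) / len(self.window)

s = SmoothedScore()
for raw in [70, 30, 40, 65, 35, 60]:
    print(raw, "->", round(s.update(raw), 1))   # smoothed trend, not the raw spikes
```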

In some cases, he said, patients bounced around in the middle zone for days but then suddenly needed to go to the ICU. In others, a patient with a similar trajectory of scores could be managed effectively without need for intensive care.

But Singh said that in about 20% of patients it was possible to identify threshold scores that could indicate whether a patient was likely to decline or recover. In the case of patients likely to decline, the researchers found that the system could give them up to 40 hours of warning before a life-threatening event would occur.

"That's significant lead time to help intervene for a very small percentage of patients," he said. As to whether the system is saving lives, or improving care in comparison to standard nursing practices, Singh said the answers will have to wait for another day. "You would need a trial to validate that question," he said. "The question of whether this is saving lives is unanswerable right now."

See the original post here:
AI used to predict Covid-19 patients' decline before proven to work - STAT

How Coronavirus Pandemic Will Impact Machine Learning as a Service Market 2020- Global Leading Players, Industry Updates, Future Growth, Business…

The global Machine Learning as a Service market reached ~US$ xx Mn in 2019 and is anticipated to grow at a CAGR of xx% over the forecast period 2019-2029. In this Machine Learning as a Service market study, the following years are considered to predict the market footprint:

The business intelligence study of the Machine Learning as a Service market covers the estimated size of the market both in terms of value (Mn/Bn USD) and volume (x units). In a bid to recognize the growth prospects in the Machine Learning as a Service market, the study has been geographically segmented into important regions that are progressing faster than the overall market. Each segment of the Machine Learning as a Service market has been individually analyzed on the basis of pricing, distribution, and demand prospects for the global region.

Request Sample Report @https://www.mrrse.com/sample/9077?source=atm

The report covers the competition landscape, which includes a competition matrix, market share analysis of major players in the global machine learning as a service market based on their 2016 revenues, and profiles of major players. The competition matrix benchmarks leading players on the basis of their capabilities and potential to grow. Factors including market position, offerings, and R&D focus are attributed to a company's capabilities; factors including top-line growth, market share, segment growth, infrastructure facilities, and future outlook are attributed to a company's potential to grow. This section also identifies various recent developments carried out by the leading players.

Company profiling includes a company overview, major business strategies adopted, SWOT analysis, and market revenues for the years 2014 to 2016. The key players profiled in the global machine learning as a service market include IBM Corporation, Google Inc., Amazon Web Services, Microsoft Corporation, BigML Inc., FICO, Yottamine Analytics, Ersatz Labs Inc., Predictron Labs Ltd, and H2O.ai. Other players include ForecastThis Inc., Hewlett Packard Enterprise, Datoin, Fuzzy.ai, and Sift Science Inc., among others.

The global machine learning as a service market is segmented as below:

By Deployment Type

By End-use Application

By Geography

Each market player encompassed in the Machine Learning as a Service market study is assessed according to its market share, production footprint, recent launches, agreements, ongoing R&D projects, and business tactics. In addition, the study includes a strengths, weaknesses, opportunities, and threats (SWOT) analysis.

COVID-19 Impact on Machine Learning as a Service Market

In response to the novel COVID-19 pandemic, the report analyzes and depicts the pandemic's impact on the global Machine Learning as a Service market and its growth.

Request For Discount On This Report @ https://www.mrrse.com/checkdiscount/9077?source=atm

What insights can readers gather from the Machine Learning as a Service market report?

The Machine Learning as a Service market report answers the following queries:

Buy This Report @ https://www.mrrse.com/checkout/9077?source=atm

Why Choose Machine Learning as a Service Market Report?

Read this article:
How Coronavirus Pandemic Will Impact Machine Learning as a Service Market 2020- Global Leading Players, Industry Updates, Future Growth, Business...

AQR’s former machine-learning head says quant funds should start ‘nowcasting’ to react to real-time data instead of trying to predict the future – One…

Readers sharing the article on Twitter highlighted its central claim: quants were too reliant on models and forecasts, and need to begin practicing "nowcasting", reacting to real-time data instead.

Go here to see the original:
AQR's former machine-learning head says quant funds should start 'nowcasting' to react to real-time data instead of trying to predict the future - One...

One Supercomputer's HPC And AI Battle Against The Coronavirus – The Next Platform

Normally, supercomputers installed at academic and national laboratories get configured once, acquired as quickly as possible before the money runs out, installed and tested, qualified for use, and put to work for a four or five or possibly longer tour of duty. It is a rare machine that is upgraded even once, much less a few times.

But that is not the case with the Corona system at Lawrence Livermore National Laboratory, which was commissioned in 2017, when North America had a total solar eclipse, hence its nickname. While this machine, procured under the Commodity Technology Systems (CTS-1) contract not only to do useful work but also to assess the CPU and GPU architectures provided by AMD, was not named after the coronavirus pandemic that is now spreading around the Earth, it is being upgraded one more time to be put into service as a weapon against the SARS-CoV-2 virus, which causes the COVID-19 illness that has infected at least 2.75 million people (confirmed by test, with the number very likely being higher) and killed at least 193,000 people worldwide.

The Corona system was built by Penguin Computing, which has a long-standing relationship with Lawrence Livermore National Laboratory, Los Alamos National Laboratory, and Sandia National Laboratories, the so-called Tri-Labs that are part of the US Department of Energy and that coordinate on their supercomputer procurements. The initial Corona machine installed in 2018 had 164 compute nodes, each equipped with a pair of Naples Epyc 7401 processors, which have 24 cores each running at 2 GHz with an all-core turbo boost of 2.8 GHz. The Penguin Tundra Extreme servers that comprise this cluster have 256 GB of main memory and 1.6 TB of PCI-Express flash. When the machine was installed in November 2018, half of the nodes were equipped with four of AMD's Radeon Instinct MI25 GPU accelerators, which had 16 GB of HBM2 memory each and delivered 768 gigaflops of FP64 performance, 12.29 teraflops of FP32 performance, and 24.6 teraflops of FP16 performance. The 7,872 CPU cores in the system delivered 126 teraflops at FP64 double precision all by themselves, and the Radeon Instinct MI25 GPU accelerators added another 251.9 teraflops at FP64 double precision. The single-precision performance for the machine was obviously much higher, at 4.28 petaflops across both the CPUs and GPUs. Interestingly, this machine was equipped with 200 Gb/sec HDR InfiniBand switching from Mellanox Technologies, one of the earliest installations of this switching speed.

In November last year, just before the coronavirus outbreak (or at least we think that was before the outbreak; that may turn out not to be the case), AMD and Penguin worked out a deal to install four of the much more powerful Radeon Instinct MI60 GPU accelerators, based on the 7 nanometer Vega GPUs, in each of the 82 nodes in the system that didn't already have GPU accelerators. The Radeon Instinct MI60 has 32 GB of HBM2 memory, and has 6.6 teraflops of FP64 performance, 13.3 teraflops of FP32 performance, and 26.5 teraflops of FP16 performance. Now the machine has 8.9 petaflops of FP32 performance and 2.54 petaflops of FP64 performance, which is a much more balanced 64-bit to 32-bit ratio, and it makes these nodes more useful for certain kinds of HPC and AI workloads. That turns out to be very important to Lawrence Livermore in its fight against the COVID-19 disease.

To find out more about how the Corona system and others are being deployed in the fight against COVID-19, and how HPC and AI workloads are being intertwined in that fight, we talked to Jim Brase, deputy associate director for data science at Lawrence Livermore.

Timothy Prickett Morgan: It is kind of weird that this machine was called Corona. Foreshadowing is how you tell the good literature from the cheap stuff. The doubling of performance that just happened late last year for this machine could not have come at a better time.

Jim Brase: It pretty much doubles the overall floating point performance of the machine, which is great because what we are mainly running on Corona is both the molecular dynamics calculations of various viral and human protein components and then machine learning algorithms for both predictive models and design optimization.

TPM: That's a lot more oomph. So what specifically are you doing with it in the fight against COVID-19?

Jim Brase: There are two basic things we're doing as part of the COVID-19 response, and this machine is almost entirely dedicated to this, although several of our other clusters at Lawrence Livermore are involved as well.

We have teams that are doing both antibody and vaccine design. They are mainly focused on therapeutic antibodies right now. They are basically designing proteins that will interact with the virus or with the way the virus interacts with human cells. That involves hypothesizing different protein structures and computing what those structures actually look like in detail, then computing using molecular dynamics the interaction between those protein structures and the viral proteins or the viral and human cell interactions.

With this machine, we do this iteratively to basically design a set of proteins. We have a bunch of metrics that we try to optimize: binding strength, the stability of the binding, stuff like that. Then we do detailed molecular dynamics calculations to figure out the effective energy of those binding events. These metrics determine the quality of the potential antibody or vaccine that we design.

TPM: To wildly oversimplify, this SARS-CoV-2 virus is a ball of fat with some spikes on it that wreaks havoc as it replicates using our cells as raw material. This is a fairly complicated molecule at some level. What are we trying to do? Stick goo to it to try to keep it from replicating or tear it apart or dissolve it?

Jim Brase: In the case of antibodies, which is what we're mostly focusing on right now, we are actually designing a protein that will bind to some part of the virus, and because of that the virus then changes its shape, and the change in shape means it will not be able to function. These are little molecular machines that depend on their shape to do things.

TPM: There's not something that will physically go in and tear it apart, like a white blood cell eats stuff.

Jim Brase: No. That's generally done by biology, which comes in after this and cleans up. What we are trying to do is create what we call neutralizing antibodies. They go in and bind, and then the virus can't do its job anymore.

TPM: And just for a reference, what is the difference between a vaccine and an antibody?

Jim Brase: In some sense, they are the opposite of each other. With a vaccine, we are putting in a protein that actually looks like the virus but doesn't make you sick. It stimulates the human immune system to create its own antibodies to combat that virus. And those antibodies produced by the body do exactly the same thing we were just talking about. Producing antibodies directly is faster, but the effect doesn't last. So it is more of a medical treatment for somebody who is already sick.

TPM: I was alarmed to learn that for certain coronaviruses, immunity doesn't really last very long. With the common cold, the reason we get them is not just because they change every year, but because if you didn't have a bad version of it, you don't generate a lot of antibodies and therefore you are susceptible. If you have a very severe cold, you generate antibodies and they last for a year or two. But then you're done and your body stops looking for that fight.

Jim Brase: The immune system is very complicated, and for some things it creates antibodies that remember them for a long time. For others, it's much shorter. It's sort of a combination of what we call the antigen (the thing, the virus or whatever, that triggers it) and the immune system's memory function together that causes the immunity not to last as long. It's not well understood at this point.

TPM: What are the programs you're using to do the antibody and protein synthesis?

Jim Brase: We are using a variety of programs. We use GROMACS, we use NAMD, we use OpenMM. And then we have some specialized homegrown codes that we use as well that operate on the data coming from these programs. But it's mostly the general, open source molecular mechanics and molecular dynamics codes.

TPM: Let's contrast this COVID-19 effort with something like the SARS outbreak in 2003. Say you had the same problem. Could you have even done the things you are doing today with SARS-CoV-2 back then with SARS? Was it even possible to design proteins and do enough of them to actually have an impact to get the antibody therapy or develop the vaccine?

Jim Brase: A decade ago, we could do single calculations. We could do them one, two, three. But what we couldn't do was iterate it as a design optimization. Now we can run enough of these fast enough that we can make this part of an actual design process where we are computing these metrics, then adjusting the molecules. And we have machine learning approaches now that we didn't have ten years ago that allow us to hypothesize new molecules, and then we run the detailed physics calculations against those, and we do that over and over and over.

TPM: So not only do you have a specialized homegrown code that takes the output of these molecular dynamics programs, but you are using machine learning as a front end as well.

Jim Brase: We use machine learning in two places. Even with these machines, and we are using our whole spectrum of systems on this effort, we still can't do enough molecular dynamics calculations, particularly the detailed molecular dynamics that we are talking about here. What does the new hardware allow us to do? It basically allows us to do a higher percentage of detailed molecular dynamics calculations, which give us better answers, as opposed to more approximate calculations. So you can decrease the granularity size, and we can compute whole molecular dynamics trajectories as opposed to approximate free energy calculations. It allows us to go deeper on the calculations and do more of them. So ultimately, we get better answers.

But even with these new machines, we still can't do enough. If you think about the design space on, say, a protein that is a few hundred amino acids in length, and at each of those positions you can put in 20 different amino acids, you are looking at on the order of 20^200 possible proteins to evaluate by brute force. You can't do that.

So we try to be smart about how we select where those simulations are done in that space, based on what we are seeing. And then we use the molecular dynamics to generate datasets that we then train machine learning models on so that we are basically doing very smart interpolation in those datasets. We are combining the best of both worlds and using the physics-based molecular dynamics to generate data that we use to train these machine learning algorithms, which allows us to then fill in a lot of the rest of the space because those can run very, very fast.
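In code, that surrogate-model loop reduces to: run the expensive simulation on the designs you can afford, fit a fast regressor to the results, and use it to screen a far larger candidate set. Everything below (the toy energy function, descriptors, and model choice) is an invented stand-in for the lab's actual physics codes and workflows.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

def expensive_md_energy(x):
    """Stand-in for a molecular dynamics binding-energy calculation."""
    return np.sin(3 * x[0]) + 0.5 * x[1] ** 2 + rng.normal(0, 0.05)

# Designs we could afford to simulate in full detail.
X_sim = rng.uniform(-1, 1, size=(200, 2))
y_sim = np.array([expensive_md_energy(x) for x in X_sim])

# Fast learned surrogate trained on the simulation results.
surrogate = GradientBoostingRegressor().fit(X_sim, y_sim)

# Screen far more candidate designs than we could ever simulate directly.
X_candidates = rng.uniform(-1, 1, size=(100000, 2))
scores = surrogate.predict(X_candidates)
shortlist = X_candidates[np.argsort(scores)[:5]]   # lowest predicted energies
print(shortlist)                                    # send these back to full MD
```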

TPM: You couldn't do all of that stuff ten years ago? And SARS did not create the same level of outbreak that SARS-CoV-2 has done.

Jim Brase: No, these are all fairly new ideas.

TPM: So, in a sense, we are lucky. We have the resources at a time when we need them most. Did you have the code all ready to go for this? Were you already working on this kind of stuff and then COVID-19 happened or did you guys just whip up these programs?

Jim Brase: No, no, no, no. We've been working on this kind of stuff for a few years.

TPM: Well, thank you. I'd like to personally thank you.

Jim Brase: It has been an interesting development. It has been both in the biology space and the physics space, and those two groups have set up a feedback loop back and forth. I have been running a consortium called Advanced Therapeutic Opportunities in Medicine, or ATOM for short, to do just this kind of stuff for the last four years. It started up as part of the Cancer Moonshot in 2016 and focused on accelerating cancer therapeutics using the same kinds of ideas, where we use machine learning models to predict properties, using mechanistic simulations like molecular dynamics combined with data, but then also using it the other way around. We also use machine learning to actually hypothesize new molecules: given a set of molecules that we have right now, whose computed properties aren't quite what we want, how do we tweak those molecules a little bit to adjust their properties in the directions that we want?

The problem with this approach is scale. Molecules are atoms that are bonded with each other. You could just take out an atom, add another atom, change a bond type, or something. The problem with that is that every time you do that randomly, you almost always get an illegal molecule. So we train these machine learning algorithms, these are generative models, to actually be able to generate legal molecules that are close to a set of molecules that we have but a little bit different, and with properties that are probably a little bit closer to what we want. And so that allows us to smoothly adjust the molecular designs to move towards the optimization targets that we want. If you think about optimization, what you want are things with smooth derivatives. And if you do this in sort of the discrete atom-bond space, you don't have smooth derivatives. But if you do it in what we call learned latent spaces that we get from generative models, then you can actually have a smooth response in terms of the molecular properties. And that's what we want for optimization.

The other part of the machine learning story here is these new types of generative models: variational autoencoders, generative adversarial models, the things you hear about that generate fake data and so on. We're actually using those very productively to imagine new types of molecules with the kinds of properties that we want for this. And so that's something we were absolutely doing before COVID-19 hit. We have taken projects like the ATOM cancer project and other work we've been doing with DARPA and other places focused on different diseases and refocused those on COVID-19.

One other thing I wanted to mention is that we haven't just been applying this to biology. A lot of these ideas are coming out of physics applications. One of our big things at Lawrence Livermore is laser fusion. We have 192 huge lasers at the National Ignition Facility to try to create fusion in a small hydrogen-deuterium target. There are a lot of design parameters that go into that. The targets are really complex. We are using the same approach. We're running mechanistic simulations of the performance of those targets, and we are then improving those with real data using machine learning. So now we have a hybrid model that has physics in it and machine learning data models, and we are using that to optimize the designs of the laser fusion target. That has led us to a whole new set of approaches to fusion energy.

Those same methods are the things we're also applying to molecular design for medicines. And the two actually go back and forth and sort of feed on and support each other. In the last few weeks, some of the teams that have been working on the physics applications have jumped over onto the biology side and are using some of the same sort of complex workflows that we're using on these big parallel machines, which they developed for physics, and applying those to some of the biology applications, helping to speed up the applications on this new hardware that's coming in. So it is a really nice synergy going back and forth.

TPM: I realize that machine learning software uses the GPUs for training and inference, but is the molecular dynamics software using the GPUs, too?

Jim Brase: All of the molecular dynamics software has been set up to use GPUs. The code actually maps pretty naturally onto the GPU.

TPM: Are you using the CUDA variants of the molecular dynamics software, and I presume that it is using the Radeon Open Compute, or ROCm, stack from AMD to translate that code so it can run on the Radeon Instinct accelerators?

Jim Brase: There has been some work to do, but it works. It's getting to be pretty solid now. That's one of the reasons we wanted to jump into the AMD technology pretty early, because, you know, any time you do first-in-kind machines it's not always completely smooth sailing all the way.

TPM: It's not like Lawrence Livermore has a history of using novel designs for supercomputers. [Laughter]

Jim Brase: We seldom work with machines that are not Serial 00001 or Serial 00002.

TPM: What's the machine learning stack you use? I presume it is TensorFlow.

Jim Brase: We use TensorFlow extensively. We use PyTorch extensively. We work with the DeepChem group at Stanford University that does an open chemistry package built on TensorFlow as well.

TPM: If you could fire up an exascale machine today, how much would it help in the fight against COVID-19?

Jim Brase: It would help a lot. There's so much to do.

I think we need to show the benefits of computing for drug design, and we are concretely doing that now. Four years ago, when we started up ATOM, everybody thought this was nuts: the general idea that we could lead with computing rather than experiment, and do the experiments to focus on validating the computational models rather than the other way around. Everybody thought we were nuts. As you know, with the growth of data, the growth of machine learning capabilities, more accessibility to sophisticated molecular dynamics, and so on, it's much more accepted that computing is a big part of this. But we still have a long way to go.

The fact is, machine learning is not magic. It's a fancy interpolator. You don't get anything new out of it. With the physics codes, you actually get something new out of it. So the physics codes are really the foundation of this. You supplement them with experimental data, because they're not necessarily right either. And then you use the machine learning on top of all that to fill in the gaps, because you haven't been able to sample that huge chemical and protein space adequately to really understand everything at either the data level or the mechanistic level.

So that's how I think of it. Data is truth, sort of, and what you also learn about data is that it is not always the same as you go through this. But data is the foundation. Mechanistic modeling allows us to fill in where we just can't measure enough data: it is too expensive, it takes too long, and so on. We fill in with mechanistic modeling, and then above that we fill in with machine learning. We have this stack of experimental truth, mechanistic simulation that incorporates all the physics and chemistry we can, and then machine learning to interpolate in those spaces to support the design operation.

For COVID-19, there are a lot of groups doing vaccine designs. Some of them are using traditional experimental approaches and they are making progress. Some of them are doing computational designs, and that includes the national labs. We've got 35 designs done and we are experimentally validating those now and seeing where we are with them. It will generally take two to three iterations of design, then experiment, then adjusting the designs back and forth. And we're in the first round of that right now.

One thing we're all doing, at least on the public side of this, is putting all this data out there openly. So the molecular designs that we've proposed are openly released. Then the validation data that we are getting on those will be openly released. This is so our group, working with other lab groups, university groups, and some of the companies doing this COVID-19 research, can contribute. We are hoping that by being able to look at all the data that all these groups are producing, we can learn faster how to narrow in on the vaccine designs and the antibody designs that will ultimately work.

Continued here:
One Supercomputers HPC And AI Battle Against The Coronavirus - The Next Platform

The impact of the coronavirus on the Machine Learning in Healthcare Cybersecurity Market Report 2020 – News Distinct

Global Machine Learning in Healthcare Cybersecurity Market Analysis 2020 with Top Companies, Production, Consumption, Price and Growth Rate

The Machine Learning in Healthcare Cybersecurity Market 2020 report includes the market strategy, market orientation, expert opinion and knowledgeable information. The Machine Learning in Healthcare Cybersecurity Industry Report is an in-depth study analyzing the current state of the Machine Learning in Healthcare Cybersecurity Market. It provides a brief overview of the market focusing on definitions, classifications, product specifications, manufacturing processes, cost structures, market segmentation, end-use applications and industry chain analysis. The study on Machine Learning in Healthcare Cybersecurity Market provides analysis of market covering the industry trends, recent developments in the market and competitive landscape.

Get a sample copy of the report at- https://www.reportsandmarkets.com/sample-request/global-machine-learning-in-healthcare-cybersecurity-market-report-2019?utm_source=newsdistinct&utm_medium=14

It takes into account the CAGR, value, volume, revenue, production, consumption, sales, manufacturing cost, prices, and other key factors related to the global Machine Learning in Healthcare Cybersecurity market. All findings and data on the global Machine Learning in Healthcare Cybersecurity market provided in the report are calculated, gathered, and verified using advanced and reliable primary and secondary research sources. The regional analysis offered in the report will help you to identify key opportunities of the global Machine Learning in Healthcare Cybersecurity market available in different regions and countries.

The Global Machine Learning in Healthcare Cybersecurity 2020 research provides a basic overview of the industry including definitions, classifications, applications and industry chain structure. The Global Machine Learning in Healthcare Cybersecurity analysis is provided for the international markets including development trends, competitive landscape analysis, and key regions development status.

Development policies and plans are discussed as well as manufacturing processes and cost structures are also analyzed. This report also states import/export consumption, supply and demand Figures, cost, price, revenue and gross margins.

In addition to this, regional analysis is conducted to identify the leading region and calculate its share in the global Machine Learning in Healthcare Cybersecurity. Various factors positively impacting the growth of the Machine Learning in Healthcare Cybersecurity in the leading region are also discussed in the report. The global Machine Learning in Healthcare Cybersecurity is also segmented on the basis of types, end users, geography and other segments.

Our sample has been updated to correspond to the new report showing the impact of COVID-19 on the industry.

Reasons for Buying this Report

The report can answer the following questions:

Make an enquiry before buying this Report @ https://www.reportsandmarkets.com/sample-request/global-machine-learning-in-healthcare-cybersecurity-market-report-2019?utm_source=newsdistinct&utm_medium=14

Table of Content

1 Industry Overview of Machine Learning in Healthcare Cybersecurity

2 Manufacturing Cost Structure Analysis

3 Development and Manufacturing Plants Analysis of Machine Learning in Healthcare Cybersecurity

4 Key Figures of Major Manufacturers

5 Machine Learning in Healthcare Cybersecurity Regional Market Analysis

6 Machine Learning in Healthcare Cybersecurity Segment Market Analysis (by Type)

7 Machine Learning in Healthcare Cybersecurity Segment Market Analysis (by Application)

8 Machine Learning in Healthcare Cybersecurity Major Manufacturers Analysis

9 Development Trend of Analysis of Machine Learning in Healthcare Cybersecurity Market

10 Marketing Channel

11 Market Dynamics

12 Conclusion

13 Appendix

About us

Market research is the new buzzword in the market, which helps in understanding the market potential of any product in the market. This helps in understanding the market players and the growth forecast of the products and so the company. This is where market research companies come into the picture. Reports And Markets is not just another company in this domain but is a part of a veteran group called Algoro Research Consultants Pvt. Ltd. It offers premium progressive statistical surveying, market research reports, analysis & forecast data for a wide range of sectors both for the government and private agencies all across the world.

Contact Us:

Sanjay Jain

Manager Partner Relations & International Marketing

http://www.reportsandmarkets.com

Ph: +1-352-353-0818 (US)

Read more here:
The impact of the coronavirus on the Machine Learning in Healthcare Cybersecurity Market Report 2020 - News Distinct

What Are DPUs And Why Do We Need Them – Analytics India Magazine

We have heard of CPUs and TPUs; now NVIDIA, with the help of its recent acquisition Mellanox, is bringing a new class of processors to power up deep learning applications: DPUs, or data processing units.

DPUs, or Data Processing Units, originally popularised by Mellanox, now wear a new look with NVIDIA, which acquired Mellanox earlier this year. DPUs are a new class of programmable processor consisting of flexible, programmable acceleration engines that improve application performance for AI and machine learning, security, telecommunications, storage, and other workloads.

The team at Mellanox has already deployed the first generation of BlueField DPUs in leading high-performance computing, deep learning, and cloud data centres to provide new levels of performance, scale, and efficiency with improved operational agility.

The improvement in performance comes from the presence of a high-performance, software-programmable, multi-core CPU and a network interface capable of parsing, processing, and efficiently transferring data at line rate to GPUs and CPUs.

According to NVIDIA, a DPU can be used as a stand-alone embedded processor. DPUs are usually incorporated into a SmartNIC, a network interface controller. SmartNICs are ideally suited for high-traffic web servers.

A DPU-based SmartNIC is a network interface card that offloads processing tasks the system CPU would normally handle. Using its own on-board processor, a DPU-based SmartNIC may be able to perform any combination of encryption/decryption, firewall, TCP/IP, and HTTP processing.
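As a purely conceptual illustration of that offload split (no real SmartNIC SDK or API is shown; the stage names and assignments below are assumptions made for the sketch), one can model which processing stages leave the host CPU:

```python
# Conceptual sketch only: models the offload split described above,
# not any actual SmartNIC programming interface.
HOST_CPU = "host-cpu"
DPU_NIC = "dpu-smartnic"

pipeline = {
    "tcp/ip termination":  DPU_NIC,   # handled at line rate on the NIC
    "tls encrypt/decrypt": DPU_NIC,   # on-board crypto engines
    "firewall rules":      DPU_NIC,   # packet filtering near the wire
    "http parsing":        DPU_NIC,   # offloaded where supported
    "application logic":   HOST_CPU,  # general-purpose work the CPU keeps
}

offloaded = [stage for stage, target in pipeline.items() if target == DPU_NIC]
print(f"{len(offloaded)} of {len(pipeline)} stages offloaded from the host CPU")
```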

The CPU is for general-purpose computing, the GPU is for accelerated computing and the DPU, which moves data around the data centre, does data processing.

These DPUs, known as BlueField, have a unique design that enables programmability at speeds of up to 200Gb/s. The BlueField DPU integrates NVIDIA Mellanox's best-in-class ConnectX network adapter, encompassing hardware accelerators with advanced software programmability to deliver diverse software-defined solutions.

Organisations that rely on cloud-based solutions, especially, can benefit immensely from DPUs. Here are a few instances where DPUs flourish:

In a bare-metal environment, workloads run directly on physical servers rather than inside virtual machines; a DPU can take over the networking, storage, and security services that a hypervisor or host OS would otherwise provide.

The shift towards microservices architecture has completely transformed the way enterprises ship applications at scale. Cloud-based applications generate a great deal of activity and data, even when processing a single application request. According to Mellanox, one key application of the DPU is securing cloud-native workloads.

For instance, Kubernetes security is an immense challenge comprising many highly interrelated parts. The data intensity makes it hard to implement zero-trust security solutions, which makes it challenging for security teams to protect customers' data and privacy.

As of late last year, the team at Mellanox stated that it was actively researching various platforms and integration schemes to leverage the cutting-edge acceleration engines in DPU-based SmartNICs for securing cloud-native workloads at 100Gb/s.

According to NVIDIA, a DPU combines an industry-standard, software-programmable multi-core CPU, a high-performance network interface, and a set of flexible, programmable acceleration engines.



Read more:
What Are DPUs And Why Do We Need Them - Analytics India Magazine

The startup making deep learning possible without specialized hardware – MIT Technology Review

GPUs became the hardware of choice for deep learning largely by coincidence. The chips were initially designed to quickly render graphics in applications such as video games. Unlike CPUs, which have four to eight complex cores for doing a variety of computation, GPUs have hundreds of simple cores that can perform only specific operations, but the cores can tackle their operations at the same time rather than one after another, shrinking the time it takes to complete an intensive computation.

It didn't take long for the AI research community to realize that this massive parallelization also makes GPUs great for deep learning. Like graphics rendering, deep learning involves simple mathematical calculations performed hundreds of thousands of times. In 2011, in a collaboration with chipmaker Nvidia, Google found that a computer vision model it had trained on 2,000 CPUs to distinguish cats from people could achieve the same performance when trained on only 12 GPUs. GPUs became the de facto chip for model training and inferencing, the computational process that happens when a trained model is used for the tasks it was trained for.
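As a rough illustration of that point (a minimal sketch assuming only NumPy, not a claim about any particular framework), the same simple arithmetic can be done one element at a time, the way a single scalar core proceeds, or over the whole array at once, the data-parallel style GPUs are built for:

```python
# Minimal sketch: element-by-element versus all-at-once arithmetic.
import numpy as np

x = np.random.rand(100_000).astype(np.float32)
w, b = 0.5, 0.1

# One after another, as a single scalar core would proceed.
out_serial = np.empty_like(x)
for i in range(x.size):
    out_serial[i] = w * x[i] + b

# All at once, the data-parallel style GPUs exploit across hundreds of cores.
out_parallel = w * x + b

assert np.allclose(out_serial, out_parallel)
```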

But GPUs also aren't perfect for deep learning. For one thing, they cannot function as a standalone chip. Because they are limited in the types of operations they can perform, they must be attached to CPUs for handling everything else. GPUs also have a limited amount of cache memory, the data storage area nearest a chip's processors. This means the bulk of the data is stored off-chip and must be retrieved when it is time for processing. The back-and-forth data flow ends up being a bottleneck for computation, capping the speed at which GPUs can run deep-learning algorithms.
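A minimal sketch of that bottleneck, assuming PyTorch and a CUDA-capable GPU are available (the tensor size is an arbitrary choice), is to time the host-to-device copy separately from the on-device computation:

```python
# Minimal sketch: measure off-chip data movement versus on-device compute.
import time
import torch

x = torch.randn(8192, 8192)                      # data starts in host (CPU) memory

torch.cuda.synchronize()                         # make sure the GPU is idle before timing
t0 = time.perf_counter()
x_gpu = x.to("cuda")                             # the off-chip transfer described above
torch.cuda.synchronize()
copy_s = time.perf_counter() - t0

t0 = time.perf_counter()
y = x_gpu @ x_gpu                                # the computation, once data is resident
torch.cuda.synchronize()
compute_s = time.perf_counter() - t0

print(f"host-to-device copy: {copy_s:.3f}s, on-device matmul: {compute_s:.3f}s")
```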


In recent years, dozens of companies have cropped up to design AI chips that circumvent these problems. The trouble is, the more specialized the hardware, the more expensive it becomes.

So Neural Magic intends to buck this trend. Instead of tinkering with the hardware, the company modified the software. It redesigned deep-learning algorithms to run more efficiently on a CPU by utilizing the chip's large available memory and complex cores. While the approach loses the speed achieved through a GPU's parallelization, it reportedly gains back about the same amount of time by eliminating the need to ferry data on and off the chip. The algorithms can run on CPUs at GPU speeds, the company says, but at a fraction of the cost. "It sounds like what they have done is figured out a way to take advantage of the memory of the CPU in a way that people haven't before," Thompson says.
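The article does not describe Neural Magic's implementation, so the sketch below (assuming NumPy and SciPy) is only a generic illustration of one ingredient CPU-oriented approaches often rely on: if most weights can be treated as zero, far fewer multiply-adds are needed and far less data has to move through memory.

```python
# Generic illustration of weight sparsity, not Neural Magic's actual software.
import numpy as np
from scipy import sparse

rng = np.random.default_rng(0)
dense_w = rng.standard_normal((4096, 4096)).astype(np.float32)

mask = rng.random((4096, 4096)) < 0.1            # keep roughly 10% of the weights
sparse_w = sparse.csr_matrix(np.where(mask, dense_w, 0.0))

x = rng.standard_normal((4096, 256)).astype(np.float32)

dense_out = dense_w @ x                          # every weight participates
sparse_out = sparse_w @ x                        # ~90% of the multiply-adds are skipped

print(f"stored weights: {sparse_w.nnz / dense_w.size:.1%} of the dense matrix")
```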

Neural Magic believes there may be a few reasons why no one took this approach previously. First, it's counterintuitive. The idea that deep learning needs specialized hardware is so entrenched that other approaches may easily be overlooked. Second, applying AI in industry is still relatively new, and companies are just beginning to look for easier ways to deploy deep-learning algorithms. But whether the demand is deep enough for Neural Magic to take off is still unclear. The firm has been beta-testing its product with around 10 companies, only a sliver of the broader AI industry.


Neural Magic currently offers its technique for inferencing tasks in computer vision. Clients must still train their models on specialized hardware but can then use Neural Magic's software to convert the trained model into a CPU-compatible format. One client, a big manufacturer of microscopy equipment, is now trialing this approach for adding on-device AI capabilities to its microscopes, says Shavit. Because the microscopes already come with a CPU, they won't need any additional hardware. By contrast, using a GPU-based deep-learning model would require the equipment to be bulkier and more power hungry.
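Neural Magic's own conversion tooling is not described in the article, so as a hedged stand-in the sketch below shows the generic pattern of exporting a trained PyTorch model to ONNX, a portable format that many CPU inference runtimes accept; the model, shapes, and file name are made up for illustration.

```python
# Generic export sketch; stands in for whatever format conversion a vendor tool performs.
import torch
import torch.nn as nn

# Placeholder for a model trained elsewhere, e.g. on GPU hardware.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(8 * 32 * 32, 10),
)
model.eval()

dummy_input = torch.randn(1, 3, 32, 32)          # fixes the graph's input shape
torch.onnx.export(model, dummy_input, "model.onnx",
                  input_names=["input"], output_names=["logits"])
```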

Another client wants to use Neural Magic to process security camera footage. That would enable it to monitor the traffic in and out of a building using computers already available on site; otherwise it might have to send the footage to the cloud, which could introduce privacy issues, or acquire special hardware for every building it monitors.

Shavit says inferencing is also only the beginning. Neural Magic plans to expand its offerings in the future to help companies train their AI models on CPUs as well. "We believe 10 to 20 years from now, CPUs will be the actual fabric for running machine-learning algorithms," he says.

Thompson isn't so sure. "The economics have really changed around chip production, and that is going to lead to a lot more specialization," he says. Additionally, while Neural Magic's technique gets more performance out of existing hardware, fundamental hardware advancements will still be the only way to continue driving computing forward. "This sounds like a really good way to improve performance in neural networks," he says. "But we want to improve not just neural networks but also computing overall."

Read the original here:
The startup making deep learning possible without specialized hardware - MIT Technology Review

Canaan’s Kendryte K210 and the Future of Machine Learning – CapitalWatch

Author: CapitalWatch Staff

Canaan Inc. (Nasdaq: CAN) became publicly traded in New York in late November. It raised $90 million in its IPO, which Canaan's founder, chairman, and chief executive officer, Nangeng Zhang, modestly called "a good start." Since that time, the company has met significant milestones in its mission to disrupt the supercomputing industry.

Operating since 2013, Hangzhou-based Canaan delivers supercomputing solutions tailored to client needs. The company focuses on the research and development of artificial intelligence (AI) technology, specifically AI chips, AI algorithms, AI architectures, system on a chip (SoC) integration, and chip integration. Canaan is also known as a top manufacturer of mining hardware in China, the global leader in digital currency mining.

Since its IPO, Canaan has made strides on new projects despite the hard-hitting cross-industry crisis Covid-19 has caused worldwide. In a recent announcement, Canaan said it has developed a SaaS product that its partners can use to operate a cloud mining platform. Cloud mining allows users to mine digital currency without having to buy and maintain mining hardware or spend on electricity, a trend that has been gaining popularity.

A Chip of the Future

Earlier this year, Canaan participated at the 2020 International Consumer Electronics Show in Las Vegas, the world's largest tech show, which attracts innovators from across the globe. Canaan impressed, showcasing its Kendryte K210, the world's first-ever RISC-V-based edge AI chip. The chip was released in September 2018 and has been in mass production ever since.

The K210 is Canaan's first chip, and it is designed to carry out machine learning. The primary functions of the K210 are machine vision and machine hearing: it includes a KPU for computing convolutional neural networks and an APU for processing microphone-array inputs. The KPU is a general-purpose neural network processor with built-in convolution, batch normalization, activation, and pooling operations. The chip can detect faces and objects in real time. Despite this computing power, the K210 consumes only 0.3W, while other typical devices consume 1W.
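To make that concrete, here is a minimal sketch of the kind of convolution, batch-normalization, activation, and pooling sequence such a neural network processor accelerates. PyTorch is used only as a familiar stand-in; the K210 itself is programmed through its own SDK and model formats, not PyTorch.

```python
# Illustrative only: the layer types listed above, expressed in PyTorch.
import torch
import torch.nn as nn

kpu_style_block = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),  # convolution
    nn.BatchNorm2d(16),                          # batch normalization
    nn.ReLU(),                                   # activation
    nn.MaxPool2d(2),                             # pooling
)

frame = torch.randn(1, 3, 224, 224)              # one camera frame (N, C, H, W)
features = kpu_style_block(frame)
print(features.shape)                            # torch.Size([1, 16, 112, 112])
```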

More Than Just Chipping Away at Sales

As of September 30, 2019, Canaan had shipped more than 53,000 AI chips and development kits to AI product developers since the chip's release.

Currently, the sales of the K210 are growing exponentially, according to CEO Zhang.

The company has moved quickly to commercialize its chips, developing modules, products, and back-end SaaS to offer customers a "full flow of AI solutions."

Based on the first generation of K210, Canaan has formed critical strategic partnerships.

For example, the company launched joint projects with a leading AI algorithm provider, a top agricultural science and technology enterprise, and a well-known global soft drink manufacturer to deliver smart solutions for various industrial markets.

The Booming Blockchain Industry

Currently, Canaan is working under the development strategy of "Blockchain + AI." The company has made several breakthroughs in the blockchain and AI industry, including algorithm development and optimization, standard unit design, low-voltage and high-efficiency operation, and high-performance system design and heat dissipation. The company has also accumulated extensive experience in ASIC chip manufacturing, laying the foundation for its future growth.

Canaan released first-generation products based on Samsung's 8nm and SMIC's 14nm technologies in Q4 last year. The former shipped in Q1 this year, while the latter will ship in Q2. In February, the company launched the second generation of the product, which is more efficient, more cost-effective, and offers better performance.

Currently, TSMC's 5nm technology is under development. This technology will further improve the company's machines' computing power and ensure Canaan's leading position in the blockchain hardware space.

"We are the leader in the industry," says Zhang.

Canaan's Covid-19 Strategy

During the Covid-19 outbreak, Canaan improved its existing face recognition access control system. The new software can detect and identify people wearing masks. At the same time, an intelligent attendance system has been integrated to assist human resource management.

Integrating mining, machine learning, and AI, the K210 chip has been used in the Avalon mining machine, where it can identify and monitor potential network viruses through intelligent algorithms. The company will explore more such integrations in the future.

Second-Generation Gem

In terms of AI, the company will launch its second-generation AI chip, the K510, this year. The design of its architecture has been "greatly" optimized, and its computing power is several times that of the K210. Later this year, Canaan will apply this technology in areas including smart energy consumption, smart industrial parks, smart driving, smart retail, and smart finance.

Canaan's Cash

In terms of operating costs and R&D, the company's operating costs last year dropped 13.3% year-on-year. In 2018 and 2019, Canaan recorded R&D expenses of 189.7 million yuan and 169 million yuan, respectively; 347 million yuan were used to incentivize core R&D personnel.

In addition, the company currently has more than 500 million yuan ($70.5 million) in cash and will continue to operate under the "blockchain + AI" strategy, with a continued focus on the commercialization of its AI technology.

A Fruitful Future

Canaan began as a manufacturer of Bitcoin mining machines, but it has become more than that. In the short term, the Bitcoin halving is approaching (estimated to occur on May 11, 2020 - CW); this should boost sales of the company's mining machines. In the long term, now a global leader in ASIC technology, Canaan could be in a unique position to meet supercomputing demand.

"Blockchain is a good start, but we'll go beyond that," says Zhang. "When a seed grows up to be a big tree, it will bear fruit."

So far, it has done just that. Just how high that "tree" can get remains to be seen, but one thing is certain: The Kendryte K210 chip will be the driving force fueling the company's growth.

More here:
Canaan's Kendryte K210 and the Future of Machine Learning - CapitalWatch