Artificial Intelligence: Apple's Second Revolutionary Offering

Posted: July 28, 2017 at 7:15 pm

In an earlier article on Augmented Reality, I noted that Apple (NASDAQ:AAPL) faces challenges to the growth of its iPhone business, as many worldwide markets have become saturated and the replacement rate among existing customers has dropped. I noted that Apple has weathered this change by continuing to charge premium prices for its products (against the predictions of many naysayers), and it can do this for two reasons:

1- Its design and build quality are unsurpassed, and

2- It's always on the cutting edge of new technology.

For these reasons, customers feel that there is value in the iconic product.

Number two leads the investor to the question: can Apple stay on that cutting edge?

While the earlier articles centered on augmented reality, this one will focus on Artificial Intelligence (AI) and Machine Learning (ML). This is an important topic for the investor, as it is a critical part of the answer to the question above.

Most analysts focus on the easily visible aspects of devices, ignoring the deeper innovations because they don't understand them. For example, when Apple stunned the tech world in 2013 by introducing the first 64-bit mobile system on a chip (processor), the A7, many pundits played down the importance of the move. They argued that it made little difference and listed a variety of reasons. Yet they ignored the truly important advantages, particularly the tremendously strengthened encryption features. These paved the way for the enhanced security features that include complete on-device data encryption, Touch ID and Apple Pay.

Apple's forays into AR and now ML are further examples of this. While AR captures the imagination of many people and its new interface has been widely covered, the less understood Machine Learning interface has been virtually ignored, despite the fact that going forward it will be a very important enabling technology. Product differentiation and performance are key to Apple maintaining its position, and thus key to the investor's understanding.

Machine Learning is a type of program that gives a response to an input without having been explicitly programmed with the relevant knowledge. Instead, it is trained by being presented with a set of inputs and the desired responses. From these, the program learns how to judge a new input.

This is different from earlier Knowledge-Based Systems, which were explicitly programmed. For example, a simple wine program I developed for a class consisted of a long list of rules, essentially of the form below (sketched in code after the list):

- IF (type = RED) AND (acidity = LOW) THEN respond with XXX

- IF (type = RED) AND (acidity = HIGH) THEN respond with ZZZ
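To make this concrete, here is a minimal sketch (in Swift) of that rule-based style. The type names are mine, and the XXX/ZZZ responses are left as the placeholders they were in the original rules:

    enum WineType { case red, white }
    enum Acidity { case low, high }

    // Every response is spelled out by hand; the program "knows" only
    // what the programmer explicitly encoded as rules.
    func respond(type: WineType, acidity: Acidity) -> String {
        switch (type, acidity) {
        case (.red, .low):  return "XXX"   // placeholder, as in the rules above
        case (.red, .high): return "ZZZ"
        default:            return "?"     // every other case needs its own rule
        }
    }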

In an ML system, these rules do not exist. Instead, a set of samples is presented and the system learns how to infer the correct responses.

There are many different configurations for such learning systems, many using the Neural Network concept. This is modeled on the interconnected network of the brain: each individual neuron (brain cell) receives connections from many other neurons and in turn connects to many others. As a person experiences new things, the connections between the excited cells are strengthened, or facilitated, so that a given network is more easily excited in the future when the same or similar input arrives.

Computer neural nets work analogously, though obviously digitally. The program defines a set of cells organized into a series of levels. Each cell is influenced by some subset of the others and in turn influences yet other cells, until a final level produces a result. The degree to which the value of one cell changes the value of another cell to which it is connected is specified by the weight of the connection. This is where the magic lies.

During training, when a pattern is presented to the network, the relevant connections are strengthened (and others possibly weakened). This is repeated for many inputs. Eventually, the system is deemed trained, and the set of connections is saved as a trained model for use in an application. (Some systems allow for continued training after deployment.)
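For the technically curious, here is a minimal Swift sketch of a single artificial neuron trained this way. It uses the classic perceptron update rule, which is far simpler than what production networks use, but it shows connections being strengthened and weakened during training:

    // One artificial neuron: weighted inputs feeding a step activation.
    struct Neuron {
        var weights: [Double]
        var bias: Double

        func fire(_ inputs: [Double]) -> Double {
            let sum = zip(weights, inputs).map { $0.0 * $0.1 }.reduce(bias, +)
            return sum > 0 ? 1 : 0
        }

        // Training nudges each connection's weight toward the desired output.
        mutating func train(_ inputs: [Double], target: Double, rate: Double = 0.1) {
            let error = target - fire(inputs)
            for i in weights.indices { weights[i] += rate * error * inputs[i] }
            bias += rate * error
        }
    }

    // Teach it to behave like a logical AND gate from samples alone.
    var neuron = Neuron(weights: [0, 0], bias: 0)
    let samples: [([Double], Double)] = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
    for _ in 0..<25 {
        for (x, y) in samples { neuron.train(x, target: y) }
    }
    print(neuron.fire([1, 1]))   // prints 1.0 once trained

No rule about AND gates appears anywhere in the code; the behavior emerges entirely from the weights learned from the samples.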

(For an interesting anecdote on how this works in the brain, see this story.)

Many people think of AI as some big thing on mainframes, such as IBM's (IBM) Watson, which became a Jeopardy! champion, or in the research labs at Google (GOOG) (NASDAQ:GOOGL) or Microsoft (MSFT). They think that this is for the big problems of industry.

Research at Google is at the forefront of innovation in Machine Intelligence, with active research exploring virtually all aspects of machine learning, including deep learning and more classical algorithms. Exploring theory as well as application, much of our work on language, speech, translation, visual processing, ranking and prediction relies on Machine Intelligence. In all of those tasks and many others, we gather large volumes of direct or indirect evidence of relationships of interest, applying learning algorithms to understand and generalize. (Google page)

But this is not the case. ML applications are running on your smartphone and home computer now. Text prediction on your keyboard, facial recognition in your photos (be it in your Photos app or in Facebook (FB)) and speech recognition such as Siri or Amazon's (AMZN) Echo all use ML systems to perform their tasks. Many of these tasks are actually sent off to servers in the cloud to do the heavy-lifting computing, because it is indeed heavy lifting; that is, it requires a great deal of compute power. NVIDIA (NVDA) is surging precisely because of its Tesla series of GPU products on the server end of this industry.

So, what has Apple done?

A few weeks ago, Apple (AAPL) held its Worldwide Developers Conference (WWDC), opening with the keynote address in which Tim Cook and friends introduced new features across the product line. While many focused on the iPad Pro, the new iOS and macOS features, or the HomePod speaker, for the long term the real news for the investor is the AR and ML toolkits introduced.

Investors may be wondering: what exactly is Core ML?

What Core ML does is simple: it allows app writers to incorporate an ML model into their app by simply dragging it into the program code window. It also provides a single, simple method to send target data into that model and retrieve an answer.

The purpose of a model is to categorize, or provide some other simple answer about, a set of data. The input might be a single piece of data, such as an image, or several, such as a stream of words.

The model is a different story altogether. This is the complicated part.

Apple provides access to a number of standard models. The programmer can simply select one of these and plop it into the program. If none fits, then the programmer, or an AI specialist, can turn to one of a number of available ML tools to specify a network and train it. Apple has provided tools to translate these trained models into the format that the Core ML process uses. (Apple has released this format as open source for other developers to use.)

The amazing thing is that one can pull a model into the program code and then write as little as three or four lines of new code to use it. That is, once you have the model, you can create a simple app that uses it literally in a matter of minutes. This is a dazzling accomplishment.

An interesting point is that the programmer's call to the model, sending in data and retrieving the response, is exactly the same no matter what the model is. Obviously, one needs to send in the correct type of data (image, sound file, text), but the manner of doing so is the same regardless of what type of data is assessed or what the inherent structure of the model itself is. This enormously simplifies programming. The presenters continually emphasized that developers should focus on the user experience, not on implementation details.
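As a hedged illustration of how little code this is, the following Swift sketch shows roughly what a Core ML classification call looks like through the Vision layer. The FlowerClassifier model name and someImage input are hypothetical; Xcode generates a Swift class from whatever .mlmodel file the developer drags in:

    import CoreML
    import Vision

    // "FlowerClassifier" is a hypothetical model dragged into the project;
    // Xcode auto-generates a class of the same name exposing an MLModel.
    let model = try VNCoreMLModel(for: FlowerClassifier().model)

    let request = VNCoreMLRequest(model: model) { request, _ in
        // For a classifier, results arrive as labels with confidences.
        if let best = (request.results as? [VNClassificationObservation])?.first {
            print("\(best.identifier): \(best.confidence)")
        }
    }

    // someImage is assumed to be a CGImage from the camera or photo library.
    try VNImageRequestHandler(cgImage: someImage, options: [:]).perform([request])

Swapping in a completely different model changes only the class name dragged into the project; the request-and-response pattern stays identical.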

One of the great things about Core ML is that apps perform all the calculations on the device; nothing is sent to a remote server. This provides the following benefits:

- Privacy: user data never leaves the device.
- Availability: the features work even with no network connection.
- Responsiveness: there is no round trip to a server, so answers come back immediately.
- Cost: the developer does not need to run and pay for server-side compute.

One area of interest (at least for the technophile) is the actual implementation and the benefits it brings.

Software on a computer (and a smartphone is a computer) is layered, where each layer presents a logical view of the world but really is no more than a bunch of code using the layer below it. Thus, a developer can call a routine to create a window (passing in a variety of parameters for the size, location, color, etc.), and this will perform the enormous number of lower-level operations required to open up a graphic display that we recognize as a window. In some cases, the upper layers of abstraction are the same across different devices, in spite of very different underlying implementations.
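A tiny Swift illustration of the point: the developer writes only the top-layer calls, while UIKit, Core Animation, Metal and the display driver do the real work underneath:

    import UIKit

    // One high-level call; the layers below handle compositing, rasterizing,
    // and driving the display hardware.
    let window = UIWindow(frame: UIScreen.main.bounds)
    window.backgroundColor = .white
    window.makeKeyAndVisible()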

Apple's stack diagram shows its implementation of Core ML and how it sits on top of other layers. In this case, there are domain-specific ML layers for vision and so on that sit on top of Core ML itself. But the important thing here is that Core ML in turn sits on top of Accelerate and Metal Performance Shaders.

Metal is Apple's interface for accelerating graphics performance, which it improves immensely. Shaders are the units that actually perform the calculations in a Graphics Processing Unit (see the GPU section of this post).

One might wonder why ML services would be built on top of graphics processors. As noted in the post on GPUs mentioned above, a graphic (photo, drawing, video frame) consists of thousands or millions of tiny picture elements, or pixels. Editing the frame consists of applying some mathematical operation to each of the pixels, sometimes depending on its neighbors. This means you want to perform the same operation on millions of different pieces of data. As I noted earlier, a neural network consists of many cells, each with many connections; one system boasts 650K neurons with 630M connections. Yet the actual adjustment of the weights of the connections is a simple arithmetic operation. So a GPU is actually spectacular at ML processing, performing the same calculation on hundreds or even thousands of cells in parallel. Apple's Metal technology lets ML programs access the GPU compute cells directly.
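To give a feel for this, here is a minimal sketch (with a simplified texture setup, and an image whose contents are assumed) that uses Metal Performance Shaders to fan one operation, a Gaussian blur, out across every pixel of a texture in parallel:

    import Metal
    import MetalPerformanceShaders

    let device = MTLCreateSystemDefaultDevice()!
    let queue = device.makeCommandQueue()!
    let commands = queue.makeCommandBuffer()!

    // Two GPU textures; in a real app the source would hold an actual image.
    let desc = MTLTextureDescriptor.texture2DDescriptor(
        pixelFormat: .rgba8Unorm, width: 1024, height: 1024, mipmapped: false)
    desc.usage = [.shaderRead, .shaderWrite]
    let source = device.makeTexture(descriptor: desc)!
    let output = device.makeTexture(descriptor: desc)!

    // One call fans the same arithmetic out across roughly a million pixels
    // in parallel on the GPU's shader units.
    let blur = MPSImageGaussianBlur(device: device, sigma: 4.0)
    blur.encode(commandBuffer: commands, sourceTexture: source, destinationTexture: output)

    commands.commit()
    commands.waitUntilCompleted()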

The important thing to understand here is that Apple has built the Core ML engine on top of these high-performance technologies. Thus, they come for free to the app developer. All the hard work of programming an ML engine has been done, fine-tuned, accelerated, and debugged. The importance of this is really hard to convey to a person who does not know the development process: it gives every app developer the benefit of literally scores of programmers working for several years to make their little app effective, correct, and robust.

Finally, there is one last card in Apple's hand, yet to be officially shown. Back in May, Bloomberg reported, citing reliable sources, that Apple is working on a dedicated ML chip, called the Neural Engine.

This makes a lot of sense. A standard GPU is great for doing ML computations, but in the end it was designed first to handle graphics. A dedicated chip would probably be quite similar in design, but totally tailored to ML tasks. My guess is that this Neural Engine will make its debut on the iPhone 8 that is expected to be released in the fall (alongside updated iPhone 7s/Plus models). It would be a tantalizing incentive for buyers, a major differentiator for the line. With time, it would become available on all new phones (perhaps not the low-end SE). With this chip, I believe Siri could move completely onto the device. It could also be used on Macs.

ML models require a tremendous amount of computation and, as such, consume a great deal of battery power. As new generations of chips have emerged with continually shrinking transistor sizes (thus increasing compute power and efficiency), it has become more realistic to run some models locally. Additionally, the GPUs that Apple builds into its A-series chips have improved at an extraordinary rate. Graphics performance in the new iPad Pro, with its A10X processor, is an astounding 500 times that of the original iPad. According to Ash Hewson of Serif Software, its performance is literally four times that of an Intel i7 quad-core desktop PC.

Still, on a portable device, every drop of battery power is precious. So if Apple can save power by designing its own specialty chips, it will be worth it. The company has the talent and the capacity.

And there is yet another motivation. There is still a lot of evidence that Apple is working on self-driving car technology. It would be just like Apple to want to own the process from hardware to software. With its own ML processor, it would be free from worries that some other company controls a key technology. (This is why Apple created the Safari browser.) Metal is a software/hardware interface specification; it relies implicitly on a hardware platform that conforms to its specifications. Having its own Neural Engine chip will assure this, even as Apple moves into self-driving cars.

As an aside, it is interesting to note that the Core ML libraries (including Metal 2) will run on the Mac as well as on iOS. Apple is gradually moving to unify the two platforms in many respects.

With the iPhone itself, one can try to predict sales and costs and come up with a guess as to revenue and profit for a given time frame. Both the ML and AR projects have little in the way of applications at the moment, so their impact on sales is rather ephemeral at this time. Still, this is an important investment in the future. I stated above that Core ML is an important enabling technology. The point is simple: with a huge lead in this arena, Apple's performance on ML tasks will far and away outstrip that of any competitor for years to come.

At first, the most visible results will be AR titles, since they tend to be very flashy. But AI titles will slowly begin to gain traction, and other platforms will be left in the dust in terms of performance. (Watch the Serif Affinity Photo demo in the WWDC keynote video, at time 1:40:10, to see just how astoundingly fast the iPad Pro is.)

With these tools, hardware and software alike, Apple will assure itself of being far and away the leader in basic platform technology. This will allow it to attract new customers and encourage upgrades. Exactly what the investor wants.

Disclosure: I am/we are long IBM, AAPL.

I wrote this article myself, and it expresses my own opinions. I am not receiving compensation for it (other than from Seeking Alpha). I have no business relationship with any company whose stock is mentioned in this article.
