HPE and 'The Machine': potentially the next big IT blockbuster, but one helluva gamble – Diginomica

Posted: June 1, 2017 at 10:39 pm

So HPE has now got as far as announcing its first prototype of 'The Machine', first talked about towards the back end of last year. The beast is real and, if the numbers surrounding it are to be believed (and who am I to argue?), it represents a significant step forward in the resources available and in performance.

For example, the prototype features 160 TBytes of memory spread across 40 separate nodes connected using photonics links. And as its architecture is designed squarely around in-memory processing models, that means it is all available, all of the time. According to the company, this allows the equivalent of simultaneous work on some 160 million books, or five times the number of books in the US Library of Congress.
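A quick back-of-the-envelope calculation (mine, not HPE's, and using decimal units throughout) shows how those headline figures fit together:

```python
# Back-of-the-envelope check of the prototype's headline numbers.
# The ~1 MByte-per-book figure that falls out is implied by the comparison,
# not a number HPE has published.
TB = 10**12                      # decimal terabyte, in bytes

total_memory = 160 * TB          # 160 TBytes of shared memory
nodes = 40                       # spread across 40 nodes
books = 160_000_000              # the "160 million books" comparison

print(total_memory / nodes / TB, "TBytes per node")     # 4.0 TBytes per node
print(total_memory / books / 10**6, "MBytes per book")  # 1.0 MByte per book
```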

But this is only a prototype and these numbers are, in the great scheme of what HPE envisages for The Machine, really only chicken feed. If its dreams come true, we are now staring at an architecture that can easily scale to an Exabyte-scale, single-memory system as it stands. Out into the future the company is already talking mind-boggling numbers: how about 4,096 Yottabytes (where a Yottabyte equals 10^24 bytes)? That, the company reckons, is the equivalent of 250,000 times the entire digital universe that exists today, in a box.
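For a sense of the gulf between the prototype and that ambition, the short calculation below (again using decimal prefixes) compares 160 TBytes with 4,096 Yottabytes:

```python
# Scale comparison between the 160 TByte prototype and the 4,096 Yottabyte ambition.
TB = 10**12                      # terabyte, in bytes
YB = 10**24                      # yottabyte, in bytes

prototype = 160 * TB
ambition = 4096 * YB

print(f"ambition: {ambition:.3e} bytes")                        # ~4.096e+27 bytes
print(f"ratio:    {ambition / prototype:.2e}x the prototype")   # ~2.56e+13 times larger
```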

This is a new class of memory technology based on large, persistent memory pools that can stretch right out to the edge.

That is the basic outline of it given by Andrew Wheeler, the Deputy Director of the HPE Labs team that has developed the architecture and the prototype. The interesting factor here is that HPE has set out to develop an inclusive architecture, rather than an exclusive buy-all-or-nothing approach. So when it comes to working out at the edge, the devices used can be whatever is extant and/or appropriate for the specific task in hand at that point.

The system is based on an enhanced version of Linux, so the ability to run Linux may even be the only requirement made on such devices. And while the prototype has been built on devices developed by Cavium and based on ARM architectures, this does not mean that everything out at the edge needs to be based on that same device.

The premise of our Intelligent Edge design is that users will want to do analytics processing as close as possible to where the data is generated. Take an application like video processing; users won't want to be pushing all that data to some central location for processing. That is just not sustainable or cost effective. The question then is just 'what is the processor relevant to getting the job done?'

So the idea is to do as much processing as close to the point of generation as possible. Ask the local device whether someone carrying a red backpack was spotted in a given time frame, rather than send all the data to a central location and then process it. It is only the results that are actually important and need uploading. This does create another problem that Wheeler's team has been doing a lot of work on: communicating between the core and the edge requires agents capable of ensuring that instructions are interpreted correctly, that relevant standards are adhered to, and that returned data is in a form that can be used immediately.
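As a purely illustrative sketch (the frame format, the red_backpack flag and the Detection type below are my inventions, not anything HPE has published), the 'red backpack' example boils down to running the question on the device and uploading only the answer:

```python
# Illustrative only: run the analytic where the video lives, ship back just the hits.
# The frame format and the on-device "red_backpack" flag are invented for this sketch;
# in practice that flag would come from whatever vision model runs on the edge device.
from dataclasses import dataclass
from datetime import datetime
from typing import Iterable, List


@dataclass
class Detection:
    camera_id: str
    timestamp: datetime


def query_edge_device(frames: Iterable[dict],
                      start: datetime,
                      end: datetime) -> List[Detection]:
    """Scan frames locally; only the matching detections ever leave the device."""
    matches: List[Detection] = []
    for frame in frames:                                   # raw video stays on the device
        ts = frame["timestamp"]
        if start <= ts <= end and frame.get("red_backpack"):
            matches.append(Detection(frame["camera_id"], ts))
    return matches                                         # small result set goes upstream
```

Only the detections travel back to the core; the hours of raw video never do, which is the economy Wheeler is describing.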

The primary goal, however, is to have an analytics space that is sufficiently large to hold both current and historical data at a scale that is currently not possible to achieve, and to get real-time results out of it. And because it is in-memory processing, all the latency introduced by taking data from disk to memory, memory to processor, processor to cache, back to processor (and repeat several times) and finally out to memory and then to disk simply disappears.
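To make that point concrete, here is a rough tally of the round trip using generic, textbook order-of-magnitude latencies (these are my approximations, not HPE measurements):

```python
# Generic order-of-magnitude latencies (approximate, not HPE figures).
NS = 1e-9
SSD_READ = 100_000 * NS    # ~100 microseconds per storage access
DRAM     = 100 * NS        # ~100 nanoseconds per memory access
CACHE    = 10 * NS         # ~10 nanoseconds per cache access

# Conventional path: disk -> memory -> processor/cache (repeated) -> memory -> disk
conventional = 2 * SSD_READ + 4 * DRAM + 4 * CACHE

# Memory-centric path: the working set is already resident in the memory pool
memory_centric = 4 * DRAM + 4 * CACHE

print(f"conventional:   {conventional / NS:>9,.0f} ns")    # dominated by the storage hops
print(f"memory-centric: {memory_centric / NS:>9,.0f} ns")
```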

The next steps toward a real product include building up the growing set of hardware and software technologies that can now be engineered as 'products' and fed into High Performance Computing road maps.

The second step: having moved from simulations to emulators running on SuperDomes, and on to where we are now with this prototype, we now need to select the partners and customers that we want to land actual workloads on to further increase our understanding. This will help us determine what will be the first real instantiation of what we would call 'The Machine'. I can tell you right now we have a pretty clear line of sight on how it can address problems in High Performance Computing and analytics work.

An obvious target here is SAP and its growing range of HANA-based applications. Wheeler agreed that HPE has a long history of running SAP applications, and estimated it currently runs some 70% of all HANA-based applications. He would confirm nothing, of course, but it seems unlikely that SAP, and some of its customers, will fail to make that list of test subjects.

There are still so many questions to be answered about 'The Machine', some of which may yet just kill it. For example, when asked about addressing Yottabytes of memory that is simultaneously processing in real time, Wheeler's response was a classic of the scientific milieu.

We have found some operating system issues in getting to the 160 TByte level. But we do have a conceptual handle on what is required to get to the Yottabyte level.

The big question, of course, is 'when', and while Wheeler was understandably reticent to give any indication, the signs are that the short version of the answer is 'not any time soon'. This, in turn, raises areas of speculation, some of a quite serious nature.

For example, while SAP has garnered some reasonable traction with its HANA in-memory processing technology, it is interesting that not too many others have really piled in behind it. This raises the question of whether the technology is really only good for certain types of brute analytic applications.

That would explain why others, even those playing in the analytics space, are none-too-fussed about following the SAP lead.

Or is it a case that there are times when technologies and use cases have to coincide? It is not uncommon for early iterations of technologies to appear and then fade away, because the tech itself is not quite ready, or the functional need has not yet developed amongst users. Later, however, the timing, the technology and the user need can all be right. Example? The mobile phone: it started as a housebrick you could make and receive telephone calls on. But when it gained a camera and an internet connection, and could slip into a pocket, it became an extension of the 'self'.

Where is 'The Machine' on this scale? As a prototype it is difficult to say, and it is even more difficult to suggest when might be a good time for HPE to be ready with a product. Some of the answer will not even be in HPE's hands, for it will depend upon how well the legacy technologies hold out. Current commodity processors are really only pumped-up versions of the Intel 4004 processor chip introduced in 1971, and they work within a basic systems architecture first described by John von Neumann back in 1945.

Fundamentally, current 'stuff', just about all of it, is definably old. But it works, and generally works well. Is it time to replace it? Quite possibly, and it is possible to see much of current tech development work as just trusses, Band-Aids and other surgical appliances designed to keep those aged architectures hanging together.

But it is also possible to see just what HPE has riding on the future success of this in-memory architecture. The company has divested itself of its big systems/SI capabilities, as well as much of its middleware/software activities. It seems determined to be a leading technology developer and provider, with a strong emphasis on hardware to boot. Yet that, inevitably, puts it up against leaner, faster, lower-cost competition that might not have the same depth of experience and expertise, but will have more daring and far less risk aversion than HPE.

It is reasonable to suppose, therefore, that 'The Machine' will not appear as a product before three years have passed, more likely five. A lot of tech water will have passed under the bridge in that time, and it is quite possible that one of the small, smart companies will come up with an analytical tech that sits between what is available now and what could be when HPE finally brings its offering forth. If that is good enough, it might be the death of 'The Machine', and even of HPE.

If not, maybe CIOs need to start thinking, fantasising, about what they might want to achieve if they could analyse anything against any number of other anythings, in real time. Give it five years and it might be available.

For reasons I cannot defend by any justification other than that there lies the direction in which my knee doth jerk, I think the 'The Machine' prototype marks the birth of the next big technology blockbuster. But I also think HPE now has a tiger by the tail, and with the departure of so many other businesses which were, while maybe not desperately profitable, potentially resilient alternatives for the company, that tiger may well bite. The company now seems increasingly exposed as a mainly hardware tech business playing high-roller poker with an unknown, high-risk tech development as its stake.
