The Present and Future of AI: A Discussion with HPC Visionary Dr. Eng Lim Goh – HPCwire

Posted: November 29, 2020 at 6:17 am

As HPE's chief technology officer for artificial intelligence, Dr. Eng Lim Goh devotes much of his time to talking and consulting with enterprise customers about how AI can benefit their business operations and products.

As the start of 2021 approaches, HPCwire sister publication EnterpriseAI spoke with Goh in a telephone interview to learn about his impressions and expectations for the still-developing technology as it continues to be used by HPE's customers.

Goh, who is widely known as one of the leading HPC visionaries today, has a deep professional background in AI and HPC. He was CTO for most of his 27 years at Silicon Graphics, joining HPE in 2016 when the company was acquired by HPE. He has co-invented blockchain-based swarm learning applications, overseen the deployment of AI for Formula 1 auto racing, and co-designed the systems architecture for simulating a biologically detailed mammalian brain. He has twice been named, in 2005 and 2015, to HPCwire's People to Watch list for his work. A Shell Cambridge University Scholar, he completed his PhD research and dissertation on parallel architectures and computer graphics, and holds a first-class honors degree in mechanical engineering from Birmingham University in the U.K.

This interview has been edited for clarity and brevity.

EnterpriseAI: Is the development of AI today where you thought it would be when it comes to enterprise use of the technology? Or do we still have a way to go before it becomes more important in enterprises?

Dr. Eng Lim Goh: You do see variation across companies and industries. Some are deploying AI in a very advanced way now, while others are moving from their proof of concept to production. I think it comes down to a number of factors, including which category they are in: are they coping with making decisions manually, or are they coping with writing rules into computer programs to help them automate some of the decision making? If they are coping, then there is less of an incentive to move to using machine learning and deep neural networks, other than the concern that competitors are doing so and will out-compete them.

There are some industries that are still making decisions manually or writing rules to automate some of that. There are others where the amount of data to be considered to make an even better decision would be insurmountable with manual decision making and manual analytics. If you had asked me a few years back where things would be, I would have been conservative on one hand and also very optimistic on the other, depending on companies and industries.

EnterpriseAI: Are we at the beginning of AI's capabilities for business, or are we reaching the realities of what it can and can't do? Has its maturity arrived?

Goh: For some users it is maturing, if you are focused on having the machine help you in decision support or, in some cases, take over some of the decision-making. That decision is very specific to an area, and you have to have enough data for it. I think things are getting very advanced now.

EnterpriseAI: What are AI's biggest technology needs to help it further solve business problems and help grow the use of AI in enterprises? Are there features and improvements that still must arrive to help deliver AI for industries, manufacturing and more?

Goh: At HPE, we spend a lot of our energy working with customers, deploying their machine learning, artificial intelligence and data analytics solutions. That's what we focus on, the use cases. Other, bigger internet companies focus more on the fundamentals of making AI more advanced. We spend more of our energy on the application of it. From the application point of view, some customer use cases are similar, but it's interesting that a lot of the time, the needs are in best practices.

On best practices: a lot of the time, for example, proofs of concept succeed but then fail in their deployment into production. A lot of the time, proofs of concept fail for reasons other than the concept itself being a failure. A field like engineering develops, over years and decades, into a discipline, like computer engineering or programming, with certain sets of best practices that people follow. The practice of artificial intelligence will develop the same way. That's part of the reason why we develop sets of best practices. First, to get from proof of concept to successful deployment, which is where we see a lot of our customers right now. We have one Fortune 500 customer, a large industrial customer, where the CTO/CIO invested in 50 proofs of concept for AI. We were called in to help, to provide guidance on how to pick from these proofs of concept.

A lot of the time they like to test whether, for a particular use case, it makes sense to apply machine learning in decision support. Then they will invest in a small team, give them funding and get them going. So you see companies doing proofs of concept, like a medium-sized company doing one or two proofs of concept. The key, when I'm brought in to do a workshop with them on transitioning from proof of concept to deployment, is to look at the best practices we've gathered over the use cases we've done over the years.

One lesson is not to say that the proof of concept is successful until you also prove that you can scale it. You have to address the scale question at the beginning. One example is that if you prove that 100 cameras work for facial recognition within certain performance thresholds, it doesn't mean the same concept will work for 100,000 cameras. You have to think through whether what you are implementing can actually scale. This is just one of the different best practices that we have seen over time.
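
To make that scale question concrete, here is a minimal back-of-the-envelope sketch of the kind of check a team might run before declaring a camera proof of concept successful. The per-camera bitrate, frame rate and per-server inference capacity below are hypothetical assumptions for illustration, not figures from the interview.

```python
import math

# Hypothetical scale check for a facial-recognition proof of concept.
# All figures (bitrate, frames sent to inference, server capacity) are
# illustrative assumptions, not measurements.

def scale_estimate(num_cameras,
                   mbps_per_camera=4.0,                  # assumed video bitrate
                   frames_per_sec_per_camera=5,          # assumed frames analyzed
                   inferences_per_sec_per_server=2000):  # assumed server capacity
    bandwidth_gbps = num_cameras * mbps_per_camera / 1000
    inference_load = num_cameras * frames_per_sec_per_camera
    servers_needed = math.ceil(inference_load / inferences_per_sec_per_server)
    return bandwidth_gbps, inference_load, servers_needed

for n in (100, 100_000):
    bw, load, servers = scale_estimate(n)
    print(f"{n:>7} cameras: {bw:7.1f} Gbps ingest, "
          f"{load:>9} inferences/s, ~{servers} inference servers")
```

Under these assumptions the 100-camera pilot fits on a single server, while 100,000 cameras imply hundreds of gigabits per second of ingest and a sizable inference cluster, which is exactly the kind of gap a proof of concept can hide.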

Another best practice is that AI, when deployed, must plug into the existing workflow in a seamless way, so the user doesn't even feel it. Also, you have to be very realistic. We have examples where they promise too much at the beginning, saying they will deploy on day one. No, you set aside enough time for tuning, because this is a very new capability for many customers and you need to give them time to interact with it. So don't promise that you'll deploy on day one. Once you implement in production, allow a few months to interact with the customer so they can find what their key performance indicators should be.

EnterpriseAI: Are we yet at a point where AI has become a commodity, or are we still seeing enterprise AI technology breakthroughs?

Goh: Both are right. For the specific AI where you have good data to feed machine learning models or deep neural network models, the accuracy is quite high, to the point that people, after using it for a while, trust it. And it's quite prevalent, but some people think that it is not prevalent enough to commoditize. AI skills are like programming skills a few decades ago: they were highly sought after because very few people knew what programming was or how to do it. But after a few decades of prevalence, you now have enough people to do programming. So perhaps AI has gone that way.

EnterpriseAI: Where do you see the biggest impacts of AI in business? Are there still many uses of AI that we haven't even dreamed up yet?

Goh: Anytime you're having someone make a decision, AI can be helpful and can be used as a decision support tool. Then there's of course the question about whether you let the machine make the decision for you. In some cases, yes, in a very specific way and if the impact of a wrong decision is less significant. Treat AI as a tool, like you would think of automation as a tool. It's just another way to automate. If you look back decades ago, machine learning was already being used; it was just not called machine learning. It was a technique used by people doing statistics and analytics. There definitely is that overlap, where statistics overlaps with machine learning, and then machine learning stretches out to deep neural networks, where we reach a point where this method can work because we essentially have enough data out there, and enough compute power out there to consume it, and therefore are able to get the neural network to tune itself to a point where you can actually have it make good decisions. Essentially, you are brute-forcing it with data. That's the overlap. I say we've been at it for a long time, right, we're just looking for new ways to automate.
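
As a small illustration of that overlap, the sketch below fits a logistic regression, a classic statistical model, by gradient descent on a loss function, which is exactly how a neural network is trained; add hidden layers to the same loop and you have a deep neural network. The data here is synthetic and purely illustrative.

```python
import numpy as np

# Synthetic, purely illustrative data: two features, one binary label.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)

# Logistic regression: a statistical model, but trained the machine-learning way,
# by gradient descent on the cross-entropy loss. It is also a neural network
# with no hidden layer; stacking hidden layers gives a deep neural network.
w, b, lr = np.zeros(2), 0.0, 0.1
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probabilities
    grad_w = X.T @ (p - y) / len(y)          # gradient of the loss w.r.t. weights
    grad_b = np.mean(p - y)                  # gradient w.r.t. bias
    w -= lr * grad_w
    b -= lr * grad_b

p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
print(f"learned weights {w}, training accuracy {np.mean((p > 0.5) == y):.2f}")
```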

EnterpriseAI: What interesting enterprise AI projects are you working on right now that you can share with us?

Goh: Two things are on the minds of most people now: COVID-19 vaccines, and back-to-work. These are two areas we have focused on over the last few months.

On the vaccine side, we are working with clinical trial and gene expression data, applying analytics to it. We realized that analytics, machine learning and deep neural networks can be quite useful in making predictions based just on gene expression data. Not just for clinical trials, but also to look ahead to the well-being of a person, by looking at just one sample. It requires highly skilled analytics, machine learning and deep neural network techniques to try to make predictions ahead of time, when you take a blood sample and measure the genes expressed in it.

The other area is back-to-work [after COVID-19 shutdowns around the nation and world]. It's likely that the workplace has changed now. We call it the new intelligent hybrid workplace. By hybrid we mean a portion is continuing to be remote, while a portion of factory, manufacturing plant or office employees will return to their workplaces. But even on their return, depending on companies, communities, industries and countries, there'll be different requirements and needs.

EnterpriseAI: And AI can help with these kinds of things that we are still dealing with under COVID-19?

Goh: Yes. In certain jurisdictions, for example, if someone is ill with the coronavirus in a factory or an office, you are required to do specialized cleaning in the area around that high-risk person. If you do not have a tool to assist you, there are companies that clean their entire factory because they're not quite sure where that person has been. An office may have cleaned an entire floor hoping that the person didn't go to other floors. We built an in-building tracing system with our Aruba technology, using Bluetooth Low Energy tags talking to WiFi routers and access points. When you identify the particular quarter-sized Bluetooth tag that an employee carries, a floorplan immediately shows up with hotspots and warm spots indicating where to send the cleaning services. You're very targeted with your cleaning. The names of the users of those tags are highly restricted for privacy.
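
A highly simplified sketch of how such tag sightings might be turned into a hotspot list follows; the data format, zone names and dwell-time thresholds are hypothetical illustrations, not details of HPE's Aruba system.

```python
from collections import defaultdict

# Hypothetical sightings of one high-risk person's tag, reported by access
# points as (zone, minutes of dwell time). Format and values are assumptions.
sightings = [
    ("floor2-kitchen", 25),
    ("floor2-meeting-room-A", 40),
    ("floor2-desk-cluster-3", 180),
    ("lobby", 5),
]

dwell = defaultdict(float)
for zone, minutes in sightings:
    dwell[zone] += minutes

def classify(minutes, hot=30, warm=10):
    """Map dwell time to a cleaning priority; thresholds are illustrative."""
    if minutes >= hot:
        return "hotspot"
    if minutes >= warm:
        return "warm spot"
    return "low risk"

for zone, minutes in sorted(dwell.items(), key=lambda kv: -kv[1]):
    print(f"{zone:>25}: {minutes:5.0f} min  ->  {classify(minutes)}")
```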

EnterpriseAI: Let's dive into the ethics of AI, which is a growing discussion. Do you have concerns about the ethics and policies of using AI in business?

Goh: Like many things in science and engineering, this is as much a social question as it is a technical one. I get asked this a lot by CEOs. Many times, from boards of directors and CEOs, this is the first question, because it affects employees. It affects the community they serve and it affects their business. It's as much a societal question as it is a technical one; that's what I always tell them.

And because of this, that's the reason you don't hear people giving you hard and fast rules on this issue. There needs to be a constant dialogue. It will vary by community and by industry; you have a dialogue and then converge on a consensus. I always tell them, focus on understanding the differences between how a machine makes decisions and how a human makes decisions. Whenever we make a decision, there is an immediate link to the emotional side, and to the generalization capability. We apply judgment.

EnterpriseAI: What do you see as the evolving relationship between HPC and AI?

Goh: Interestingly, the relationship has been there for some time; it's just that we didn't call it AI. Let's take hurricane prediction, for example. This is one of the stalwart applications of high performance computing. You put your physics and physics simulations on a supercomputer. Next, you measure where the hurricane is forming in the ocean. You then make sure you run your simulation ahead of time, faster than the hurricane that is coming at you. That's one of the major applications of HPC: building your model out of physics, and then running the simulation from the starting conditions that you've measured out in the ocean.

Machine learning and AI are now used to look at the simulation early on and predict the likelihood of failure. You are using history. People in weather forecasting, or climate and weather forecasting, will already tell you that they're using this technique of historical data to make predictions. Today we are just formalizing this for other industries.

EnterpriseAI: What do you think of the emerging AI hardware landscape today, with established chip makers and some 80 startups working on AI chips and platforms for training and inference?

Goh: Through history, it's been the same thing. In the end, there will probably be tens of these chip companies. They came up with different techniques. We're back to the Thinking Machines, the vector machines, the RISC processors and so on. There's a proliferation of ideas of how to do this. Eventually, a few of them will stand out, and there will be a clear demarcation, I believe, between training and inference. Inference needs lower and lower energy, to the point that the vision should be that IoT devices have some inference capability. That means you need to sip energy at a very low level. We're talking about an IoT tag, a Bluetooth Low Energy tag, with a coin battery that should last two years. Today the tag that sends out and receives the information has very little decision-making capability, let alone inference-level decision-making. In the future you want that to be an intelligent tag, too. There will be a clear demarcation between inference and training.

EnterpriseAI: In the future, where do you see AI capabilities being brought into traditional CPUs? Will they remain separate or could we see chips combining?

Goh: I think it could go one way, or it could totally go the other way and everything gets integrated. If you look at historical trends, in the old days, when we built the first high-performance computers, we had a chip for the CPU, another chip on the board called the FPU, the floating point unit, and a board for graphics. Over time the FPU got integrated into the CPU, and now every CPU has an FPU in it for floating point calculations. Then there were networking chips that were on the outside. Now we are starting to see networking chips being incorporated into the CPU. But GPUs got so much more powerful in a very specific way.

The big question is, will the CPU go into the GPU, or will the GPU go into the CPU? I think it will depend on a chip company's power and vision. But I believe integration, one way or the other, the CPU going into the GPU or the GPU going into the CPU, will be the case.

EnterpriseAI: What else should I be asking you about the future of AI as we look toward 2021?

Goh: I want to emphasize that many CEOs are keen on starting with AI. They are in phase one, where it is important to understand that data is the key to training machines. And as such, data quality needs to be there. Quantity is important, but quality needs to be there: the trust in it, the data bias.

We focus on the fact that 80% of the time should be spent on the data even before you start on the AI project. Once you put in that effort, your analytics engine can make better use of it. If you are in phase one, that's what I would recommend. If you are at the proof of concept stage, then spend time in a workshop to discuss best practices with those who have implemented AI quite a bit. And if you're at the advanced stage, if you know what you're doing, especially if you're successful, do take note that after a while with a good deployment, the accuracy of the prediction drops, so you have to continually retrain your machines. It is the practice that I am most focused on.
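
A minimal sketch of that retraining discipline: track prediction accuracy over a rolling window of confirmed outcomes and flag the model for retraining once accuracy drifts below a threshold. The window size, threshold and the retraining hook here are illustrative assumptions, not recommended values or a specific product feature.

```python
from collections import deque

class DriftMonitor:
    """Flags a deployed model for retraining when rolling accuracy drops.
    Window size and accuracy threshold are illustrative assumptions."""

    def __init__(self, window=500, min_accuracy=0.90):
        self.outcomes = deque(maxlen=window)
        self.min_accuracy = min_accuracy

    def record(self, prediction, actual):
        """Call once the true outcome for a prediction becomes known."""
        self.outcomes.append(prediction == actual)

    def needs_retraining(self):
        if len(self.outcomes) < self.outcomes.maxlen:
            return False  # not enough confirmed outcomes yet
        accuracy = sum(self.outcomes) / len(self.outcomes)
        return accuracy < self.min_accuracy

# Usage sketch: as each prediction is later confirmed or corrected,
# feed the result back and check whether retraining should be scheduled.
monitor = DriftMonitor()
# monitor.record(predicted_label, true_label)
# if monitor.needs_retraining():
#     schedule_retraining()   # hypothetical hook into the training pipeline
```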

This article first appeared on sister website EnterpriseAI.news.
