Getting Industrial About The Hybrid Computing And AI Revolution

Posted: July 23, 2021 at 4:14 am

For oil and gas companies looking at drilling wells in a new field, the issue becomes one of return versus cost. The goal is simple enough: install the fewest wells that will draw the most oil or gas from the underground reservoirs for the longest amount of time. The more wells installed, the higher the cost and the larger the impact on the environment.

However, finding the right well placements quickly becomes a highly complex math problem. Too few wells sited in the wrong places leave a lot of resources in the ground. Too many wells placed too close together can not only sharply increase the cost but also cause wells to pump from the same area.

Shahram Farhadi knows how complex the challenge is. Farhadi is the chief technology officer for industrial AI at Beyond Limits, a startup spun off by Caltech and NASA's Jet Propulsion Laboratory to commercialize technologies built for space exploration in industrial settings. The company, founded in 2014, aims to apply cognitive AI, machine learning, and deep learning techniques in industries such as oil and gas, manufacturing and industrial Internet of Things (IoT), power and natural resources, healthcare, and other evolving markets, many of which have already been using HPC environments to run their most complicated programs.

Placing wells within a reservoir is one of those problems that involves a sequential decision-making process that changes and grows with each decision made. Farhadi notes that in chess, there are almost 5 million possible positions after the first five moves. For the game of Go, that number is on the order of 10^12. When optimizing well placement in even a small reservoir, from where and when to drill to how many producer and injector wells to use, there can be as many as 10^20 possible combinations after just five sequential, non-commutative choices of vertical drilling locations.
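
To get a feel for that growth, consider a back-of-the-envelope calculation: picking five drilling sites in sequence from a grid of candidate locations, where order matters, multiplies the options together. The grid size below is our own illustrative assumption, not a figure from Beyond Limits, but it lands on the same order of magnitude:

```python
from math import perm  # Python 3.8+

# Hypothetical example: a 100 x 100 grid gives 10,000 candidate
# drilling sites. Placing five wells in sequence, where order
# matters (the choices are non-commutative), yields the number of
# ordered arrangements P(10000, 5).
candidates = 100 * 100
wells = 5

sequences = perm(candidates, wells)  # 10000 * 9999 * ... * 9996
print(f"{sequences:.3e}")            # ~9.994e+19, on the order of 10^20
```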

The combination of advanced AI frameworks with HPC can greatly reduce the challenge.

"Anything the AI can learn, such as basic rules for how far the wells should be separated, and apply to the problem will help decrease the number of computations, to hammer them down to something that is more tangible," Farhadi tells The Next Platform.

Where to place wells has been a challenge for oil and gas companies for years. Over that time they have developed seismic imaging capabilities and simulation models, run on HPC systems, that describe the reservoirs beneath the ground. They also use optimizers to run variations of the model to determine how many wells of which kinds should be placed where. "There have been at least two generations of engineers who worked to perfect these equations and their nuances, tuning and learning from the data," Farhadi says.

The problem has been that they have attacked these computations with a combination of brute force and optimizations such as particle swarm and genetic algorithms atop computationally expensive reservoir simulators, making an already complex problem even more challenging. That's where Beyond Limits' advanced AI frameworks come in.

"The industry is really equipped with really good simulations, and the opportunity of a high-performance AI could be, how about we use the simulations to generate the data and then learn from that generated data?" he says. "In that sense, you are going some good miles. Other industries are also doing this now, like with the auto industry, this is happening more or less. But from the energy industry standpoint, these simulations are fairly rich."

Beyond Limits is applying techniques such as deep reinforcement learning (DRL), using a framework to train a reinforcement learning agent to make optimal sequential recommendations for placing wells. The framework pairs reservoir simulations with novel deep convolutional neural networks. The agent takes in the data and learns from successive iterations of the simulator, allowing it to reduce the number of possible combinations of moves after each decision is made. By remembering what it learned from previous iterations, the system can more quickly whittle the choices down to the single best answer.
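
Beyond Limits has not published its training code, so the sketch below is only a generic reinforcement learning loop against a toy stand-in simulator, meant to illustrate the shape of the train-against-a-simulator approach. A tabular value store stands in for the deep convolutional network the company describes, and every name and number here is hypothetical:

```python
import random
from collections import defaultdict

# Toy stand-in for a reservoir simulator: scores a set of well sites.
# A real simulator evolves fluid flow over time; this one just rewards
# spreading wells apart (a crude proxy for not draining the same area)
# minus a per-well cost. Entirely hypothetical.
def simulate_npv(sites):
    spacing = sum(abs(a - b) for a in sites for b in sites if a < b)
    return spacing - 5.0 * len(sites)

N_SITES, N_WELLS = 20, 3      # candidate locations and wells to place
EPSILON, ALPHA = 0.1, 0.2     # exploration rate, learning rate
EPISODES = 5000

# Value estimates for (wells-placed-so-far, candidate-next-site) pairs;
# a tabular store standing in for the deep network.
Q = defaultdict(float)

for _ in range(EPISODES):
    placed = ()
    for _ in range(N_WELLS):
        free = [s for s in range(N_SITES) if s not in placed]
        if random.random() < EPSILON:
            action = random.choice(free)                      # explore
        else:
            action = max(free, key=lambda s: Q[(placed, s)])  # exploit
        placed += (action,)
    reward = simulate_npv(placed)              # one simulator call per episode
    for i in range(N_WELLS):                   # credit each sequential decision
        key = (placed[:i], placed[i])
        Q[key] += ALPHA * (reward - Q[key])

# Greedy rollout of the learned values.
placed = ()
for _ in range(N_WELLS):
    free = [s for s in range(N_SITES) if s not in placed]
    placed += (max(free, key=lambda s: Q[(placed, s)]),)
print("greedy placement:", placed, "NPV proxy:", simulate_npv(placed))
```

The key property this toy loop shares with the real system is that each episode costs one simulator run, so the learning signal accumulated across episodes is what prunes the combinatorial search.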

"One area that we looked at specifically is the simulation of subsurface movement of fluids," Farhadi says. "Think of a body of rock that is found somewhere that has oil in it. It also has water that has come to it, and as you take out this hydrocarbon, this whole dynamic changes. Things will kick in. You might have water breaking through, but it's quite a delicate process that is happening down there. A lot of time goes into building this image because you have limited information. But let's say you have built the image and you have a simulator now that, if you tell this simulator, 'I want to place a well here [and] a well here,' the simulator can evolve this in time and give you the flow rates and say, 'If you do this, this is what you're going to get.' Now if I operate this asset, the question for me is just exactly that: How many wells do I put in this? What kind of wells do I want to put, vertical [and] horizontal? Do I want to inject water from the beginning? Do I want to inject gas? This is basically the expertise of reservoir engineering. It's playing the game of how to optimally extract this natural resource from these assets, and the assets are usually billions of dollars of value. This is a very, very precious asset for any company that is producing oil and gas. The question is, how do you extract the max out of it now?"

The goal is to reach a high net present value (NPV) score, essentially the amount of money made from the oil or gas captured (and sold) after costs are figured in. The fewest wells needed to extract the most resources will mean more profit.
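
For readers unfamiliar with the metric, NPV discounts each period's cash flow back to the present. A minimal sketch, with made-up cash flows and a made-up discount rate rather than anything from Beyond Limits:

```python
# Net present value: the discounted sum of per-period cash flows,
# NPV = sum over t of (revenue_t - cost_t) / (1 + r)^t.
def npv(cash_flows, rate):
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical well: an $8M upfront drilling cost, then declining
# production revenue over five years, discounted at 10 percent.
flows = [-8.0, 4.0, 3.5, 3.0, 2.5, 2.0]   # $ millions, year 0 onward
print(f"NPV: ${npv(flows, 0.10):.2f}M")    # roughly $3.73M
```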

"The NPV initially does some iteration, but after about 150,000 times of interacting with the simulator, it can get to something like $40 million of NPV," he says. "The key thing here is the fact that this simulation on its own can be expensive to run, so you optimize it, be smart and use it efficiently."
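
One simple way to "be smart and use it efficiently," in Farhadi's phrase, is to avoid re-running the simulator on configurations the agent has already evaluated. A minimal memoization sketch, with a placeholder standing in for the expensive simulator:

```python
from functools import lru_cache

def run_reservoir_sim(sites):
    # Placeholder for the expensive reservoir simulation; a real run
    # can take minutes to hours per configuration.
    return sum(sites) * 0.1  # stand-in NPV value

@lru_cache(maxsize=None)
def npv_of(sites):
    # sites is a tuple of well locations (hashable), so a repeated
    # query for a configuration already tried is served from the
    # cache instead of re-running the simulator.
    return run_reservoir_sim(sites)

print(npv_of((3, 1, 7)))     # simulator actually runs
print(npv_of((3, 1, 7)))     # cache hit: no second simulator call
print(npv_of.cache_info())   # hits=1, misses=1
```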

That included creating a system that would allow Beyond Limits to scale the model most efficiently to wherever the oil and gas companies needed it. The company tested it on three systems, two of them CPU-only and one a hybrid running both CPUs and GPUs: an on-premises 20-core CPU system built on Intel Core i9-7900X chips, a cloud-based 96-core CPU system with the same processors, and the hybrid setup, with 20 CPU cores and two Nvidia Ampere A100 GPU accelerators on a p4d.24xlarge Amazon Web Services instance.

The company also took it a step further by including a 36-hour run on a p4d.24xlarge AWS instance using a setup with 90 CPU cores and eight A100 GPUs.

The benchmarked metrics were the instantaneous rate of reinforcement learning calculation, the number of episodes and forward action-explorations over the course of the reinforcement learning run, and the value of the best solution found in terms of NPV.

What Beyond Limits found was that the hybrid setup outperformed both CPU-only systems. In terms of processing speed, the hybrid setup delivered a peak of 184.3 percent of the 96-core system and 1,169.5 percent of the 20-core system. To reach the same number of actions explored at the end of 120,000 seconds, the CPU-GPU hybrid showed an improvement in elapsed time of 245.4 percent over the 20 CPU cores and 152.9 percent over the 96 CPU cores. Regarding NPV, the hybrid instance delivered a boost of about 109 percent compared to the 20-core CPU setup for vertical wells.

Scale and efficiency are key when trying to reach optimal NPV, because not only do choices such as the number and types of wells add to the costs, but so do the computational demands.

"This problem is very, very complicated in terms of the number of possible combinations, so the more hardware you throw at it, the higher you get, and obviously there are physical limits to that," Farhadi says. "The GPU becomes a real value-add because you can now achieve NPVs that are higher. Just because you were able to have higher grades, you would be able to have more FLOPs or you could compute more. You have a higher chance of finding better configurations. The idea here was to show that there is this technology that can help with highly combinatorial simulation-based optimizations, called reinforcement learning, and we have benchmarked it on simple, smaller reservoir models. But if you were to take it to the actual field models with this number of cells, it's going to be, on its own, like a massive high-performance training system."

Beyond Limits is also building advanced AI systems for other industries. One example is a system designed to help plan refinery operations. Another helps chemists more quickly and efficiently build formulas for engine oil and other lubricants, Farhadi says.

"For the practices that you have relied on a human expert to come up with a framework and [to] solve a problem, it is important for them that whatever system you build is honoring that and can digest that," Farhadi says. "It's not only data, it's also that knowledge that's human. How do we incorporate and then bring this together? For example, how do you take the knowledge that your engineer learned from the data, or how do you use the physics as a constraint for your AI? It's an interesting field. Even in the frontiers of deep learning [and] machine learning, this is now being looked at. Instead of just looking at the pixels, now let's see if we can have more robust representations of hierarchical understandings of the objects that come our way. We really started this way earlier than 2014, because one big motivation was that the industries we went to required it. That was what they had and they needed to augment it, maybe with digital assistants. It has data elements to it, but they were not quite competent."
