Today, BP’s upstream facilities have an uptime average of 96%. While admirable, the oil company thinks it can do better.
To move the needle, BP is adopting a new analytics system, developed in partnership with Baker Hughes, that is now fully operational on all four of its offshore platforms in the Gulf of Mexico. The companies call it a plant operations advisor, and it uses a combination of sensor analytics and digital twin technology to drive improvements in productivity and visibility into the state of each platform.
The project began in 2016 and required the two companies to work closely to connect all the mission-critical pieces of equipment and then craft the physics-based blueprints that make up a digital twin.
Ahmed Hashmi, head of upstream technology at BP, said as the system is used more widely, the company expects to see hundreds of millions of dollars in savings as a result. And in addition to lifting the bottom line, the introduction of the operations advisor also marks a shift in how offshore facilities will be run going forward.
“Once we have this deployed around the world, and then have enough data to train predictive models on top of it, that’s the beginning of what will be the future of remote operations where you pull people from the field,” Hashmi said.
However, he noted that the focus of this effort is not really on pushing equipment automation to the field. Rather, it is about moving the decision making and analysis usually done offshore into offices onshore.
What the operations advisor will automate is the detection of costly equipment failures or malfunctions by catching their precursors, perhaps days in advance. The red flags of impending faults are then raised to process engineers who can act to prevent downtime or safety risks. These engineers will also have much of their maintenance and operational reporting done for them by the system, enabling them to get other parts of their jobs done faster.
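As a loose illustration of that idea (not BP's or Baker Hughes' actual models, which are proprietary), a minimal precursor detector might flag any reading that drifts sharply away from a sensor's recent baseline:

```python
from collections import deque
import statistics

def precursor_alerts(readings, window=60, z_threshold=3.0, min_history=10):
    """Flag readings that deviate strongly from a rolling baseline.

    A toy sketch of precursor detection: flag any value more than
    z_threshold standard deviations from the recent mean, the kind of
    early warning an engineer could act on before a failure develops.
    All parameters here are illustrative assumptions.
    """
    history = deque(maxlen=window)
    alerts = []
    for t, value in enumerate(readings):
        if len(history) >= min_history:
            mean = statistics.fmean(history)
            stdev = statistics.pstdev(history)
            if stdev > 0 and abs(value - mean) / stdev > z_threshold:
                alerts.append((t, value))
        history.append(value)
    return alerts

# A signal that oscillates mildly around 50, then spikes to 80:
signal = [50.0, 50.1, 49.9] * 30 + [80.0]
alerts = precursor_alerts(signal)  # flags only the spike at index 90
```

In a real deployment the baselines would presumably come from the physics-based digital twin models rather than a simple rolling window, but the flag-then-escalate pattern is the same.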
Not much of this is new to anyone familiar with predictive analytics, but what is notable is the scale of this project between BP and Baker Hughes, a subsidiary of GE.
Millions of Points of Data
To drive out the last remaining inefficiencies of BP’s Gulf of Mexico assets, the operations advisor system must monitor more than 155 million data points each day. The data come from 60,000 sensors on 1,200 pieces of equipment running on the company’s platforms, which include Thunder Horse, Na Kika, Mad Dog, and Atlantis.
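As a back-of-envelope check derived from those figures (neither company states these per-sensor rates), the volume works out to roughly one reading per sensor every half minute:

```python
# Rough arithmetic on the article's stated figures; the per-sensor
# rates below are inferred, not quoted by BP or Baker Hughes.
points_per_day = 155_000_000   # data points monitored per day
sensors = 60_000               # sensors across the four platforms

per_sensor = points_per_day / sensors   # ~2,583 points per sensor per day
seconds_between = 86_400 / per_sensor   # ~33 seconds between readings
```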
For BP, the Gulf of Mexico was the logical starting point. Hashmi pointed out that the company wanted to test the new system in an area where it already had some advanced technology, and having the assets managed by engineering teams in Houston facilitated a deeper collaboration with Baker Hughes.
“Now what we have is a template that we can take to the rest of the world and deploy fairly easily,” Hashmi said, adding that BP is in the midst of installing the system at its facilities in Angola and Oman, and will be using it in the North Sea sometime next year. Ultimately, the expectation is that this system will be running on 30 of the company’s upstream facilities.
News of the analytics installation was shared during the first-ever SPE ENGenious Symposium in Aberdeen where many of the industry’s top digital leaders have gathered for three days of emerging technology discussions. Hashmi is a co-chairman of the meeting, which was launched to help the oil and gas industry realize its digital transformation.
Sorting Out the Sensors
BP’s embrace of the operations advisor, which is built on top of GE’s Predix asset management platform, also reflects the rise of the industrial internet of things (IoT). That trend is almost always tied to cloud computing, which in turn increasingly involves some variant of data analytics or machine learning running over all the data being collected.
In the offshore scenario, one of the most important keys to making this all work is constantly checking the sources of the data, the equipment sensors, for accuracy. The oil business is laden with sensors, but their biggest shortcoming is that they often require recalibration; without it, they record null values or issue readings so wrong they are not even physically possible. Hashmi characterized this data quality issue as “the Achilles heel of most digital projects.”
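A minimal sketch of that kind of data-quality screening, with purely illustrative names, ranges, and thresholds, might look like this:

```python
def needs_recalibration(readings, lo, hi, bad_fraction=0.05):
    """Flag a sensor whose recent readings are too often null or
    physically impossible (outside the [lo, hi] range that physics
    allows). A hypothetical rule of thumb, not BP's actual logic.
    """
    bad = sum(1 for v in readings if v is None or not (lo <= v <= hi))
    return bad / len(readings) > bad_fraction

# A pressure sensor (valid range 0-500 psi) that drops out once and
# once reads an impossible negative value, versus a healthy one:
drifting = [None, -12.0] + [210.0] * 18
healthy = [210.0] * 20
```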
Without a system like the operations advisor, it would be difficult for an industrial firm with thousands of sensors to know which ones are bad. “You can’t trend it, you can’t compare things, and sometimes you’re flying blind,” said Binu Mathew, a senior vice president and global head of digital products for Baker Hughes.
The operations advisor fixes these issues, he added, and gives operators a higher degree of awareness of the inner workings of the machines involved in the production process. One way it does this is by using analytics to fill in data gaps with probable values, or to highlight why interrelated sensors are not all in line with one another.
“And if something is an aberration compared to everything else, you probably have something going on there,” Mathew said. “It might be something that’s actually an issue, or it might be a sensor issue—but either way it’s something you need to look into and this gives you that kind of visibility.”
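The two behaviors Mathew describes, filling gaps with probable values and flagging aberrations among interrelated sensors, can be sketched as follows. The function name, the median-based consensus, and the tolerance factor are all assumptions for illustration:

```python
import statistics

def reconcile(group, k=5.0):
    """Cross-check a group of interrelated sensor readings.

    Gaps (None) are filled with the consensus (median) of the sensors
    that did report, and any sensor far from that consensus is flagged
    for an engineer to investigate. Spread is estimated with the
    median absolute deviation, which a single bad sensor cannot skew;
    the tolerance factor k is an assumed value.
    """
    reported = [v for v in group.values() if v is not None]
    consensus = statistics.median(reported)
    mad = statistics.median([abs(v - consensus) for v in reported]) or 1.0
    filled, flagged = {}, []
    for name, v in group.items():
        if v is None:
            filled[name] = consensus   # probable value for the gap
        else:
            filled[name] = v
            if abs(v - consensus) > k * mad:
                flagged.append(name)   # aberration worth a closer look
    return filled, flagged

# Four sensors on related equipment: "c" dropped out, and "d"
# disagrees sharply with its peers:
filled, flagged = reconcile({"a": 100.0, "b": 101.0, "c": None, "d": 250.0})
```

Whether "d" here reflects a real process problem or a failing sensor is exactly the judgment call the flag hands to the engineer.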
Another advantage of this system and its use of digital twin technology is that it visualizes and analyzes the interactions between machines. Hashmi noted that in the offshore sector, one equipment failure is almost always tied to something that happened in another machine.
Traditionally, the industry has relied upon human experts to untangle such interrelated machine failures. The lessons learned in those exercises get burned into people’s brains over years of experience, but such tribal knowledge is hard to scale across a global corporation. “Now, it’s burned into the system and the system can take that and scale it across the entire fleet,” added Mathew.