R&D Technical Section Q&A: Why Your Next Big Innovation Might Depend on AI/ML
Gaurav Agrawal, Zikri Bayraktar
In this article, Gaurav Agrawal, a member of the SPE Research and Development Technical Section (RDTS), explores the ever-expanding topic of machine learning (ML) and artificial intelligence (AI) with Zikri Bayraktar, SPE, from SLB’s Software Technology and Innovation Center (STIC) in Menlo Park, California.
This series highlights innovative ideas and analysis shaping the future of energy, with a focus on emerging technologies and their roadmaps, potential, and impact. With these conversations, we hope to inspire dialogue and accelerate progress across new energy frontiers.
Zikri Bayraktar is a senior machine learning engineer at the SLB Software Technology and Innovation Center (STIC). Before joining STIC, he spent 11 years at the Schlumberger-Doll Research Center as the AI research lead, where he led projects focused on automated reservoir steering, geology, carbon capture, and intelligent automation. Earlier in his career, he worked in IBM’s semiconductor research and development division. Bayraktar holds a PhD in electrical engineering and computational science from Penn State University and an MSc in management from the University of Illinois at Urbana-Champaign. He has coauthored 14 journal articles, 42 conference papers, and a book chapter, and he holds six patents. He is a senior member of IEEE and a member of SPE and SPWLA.
RDTS: AI and ML are often used interchangeably. Do they overlap?
Bayraktar:
Though they overlap to a certain extent, they are in fact distinct. AI broadly focuses on systems that emulate the human decision-making process to solve problems, including rule-based systems, data-based algorithms in ML, robotics, and others.
ML can be considered a subset of AI, comprising algorithms that learn from data, find patterns, improve outcomes, or automate certain tasks without explicit instructions. Both fields make extensive use of available data, may consume large computational resources, and may still produce stochastic outcomes.
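To make that idea concrete, here is a minimal sketch, assuming Python with NumPy and scikit-learn and a purely synthetic dataset, in which a model infers a decision rule from labeled examples rather than being given the rule explicitly:

```python
# A minimal sketch of "learning from data": the classifier is never told the
# rule; it infers one from labeled examples. Data and rule are synthetic,
# purely for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Two features per sample; the hidden rule is "label 1 if x0 + x1 > 1".
X = rng.random((200, 2))
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)

model = LogisticRegression()
model.fit(X, y)  # the model learns a decision boundary from the examples

print(model.predict([[0.9, 0.8]]))  # -> [1], consistent with the hidden rule
print(model.predict([[0.1, 0.2]]))  # -> [0]
```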
Three publications from around the 1950s were significant: the “Turing Test,” the “Three Laws of Robotics,” and the “Perceptron,” which established the roots of neural networks (NN) as abstractions of biological neurons. Decades later, multilayer perceptrons trained with backpropagation provided the foundations for the transformers that gave us the translators, chatbots, and AI/ML solutions we enjoy today!
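As a simple illustration of that lineage, the sketch below implements a single Rosenblatt-style perceptron from scratch with NumPy, learning the logical AND function via the classic perceptron update rule; modern deep networks stack many such units and train them with backpropagation. The example is illustrative only, not a production implementation.

```python
# A bare-bones Rosenblatt-style perceptron learning the logical AND function.
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])  # inputs
y = np.array([0, 0, 0, 1])                       # AND targets

w = np.zeros(2)  # weights
b = 0.0          # bias
lr = 0.1         # learning rate

for _ in range(20):  # a few passes over the data are enough here
    for xi, target in zip(X, y):
        pred = 1 if xi @ w + b > 0 else 0
        error = target - pred
        w += lr * error * xi  # classic perceptron update rule
        b += lr * error

print([1 if xi @ w + b > 0 else 0 for xi in X])  # -> [0, 0, 0, 1]
```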
RDTS: Could you share examples of AI/ML-enabled solutions that were not possible some years back?
Bayraktar:
I live in the San Francisco Bay area, where there are an increasing number of self-driving cars. Stepping into one of these cars and not seeing a driver is unnerving at first. But after a few minutes of smooth driving in complex urban traffic, you ease up and marvel at the technology. Partly driven by advanced ML algorithms and AI systems, which can seamlessly fuse data from various sensors and make decisions, self-driving cars are no longer a fantasy.
Similarly, personal assistants that interact through voice, cameras, and text are now part of our daily lives: They can control home appliances, turn utilities on and off, warn about security issues, and even feed our pets. They are driven by high-accuracy sensor data, sophisticated large language models (LLMs), and AI agents.