First Break - Volume 21, Issue 8, 2003
4D elastic inversion helps locate in-fill wells at Oseberg field
H. Rutledal, Norsk Hydro E&P, and J. Helgesen and H. Buran, both with CGG Norge, describe how 4D elastic inversion for the Oseberg field was combined with modern 3D visualisation techniques to support the reservoir history matching process and detect areas of undrained reserves. Several successful in-fill wells have been planned and drilled based on the integrated analysis of inversion results, well data and reservoir models. Oseberg is a major oil and gas field operated by Norsk Hydro in the North Sea, some 140 km off the coast of Norway. The reservoir comprises sandstones of the Middle Jurassic Brent Group in three eastward-dipping, tilted fault blocks. Production at the Oseberg field started in 1988, reached plateau in 1992, and began to decline in 1996. Currently, close to 85% of the initial oil reserves have been produced. The challenge facing the operator Norsk Hydro is to maximise the economic value of the field by increasing oil recovery, and additional in-fill drilling, guided by 4D seismic interpretation, was identified as a way of achieving this. A monitor survey was acquired in 1999 and combined with a base survey from 1992; the two vintages were processed using a 4D sequence to maximise repeatability. To exploit the full potential of the 4D data, an elastic inversion of the 1999 vintage was carried out for reservoir characterisation purposes, using a 3D layer-based inversion (StrataVista). After successful testing, this elastic inversion was extended to the 1992 data for mapping of production effects.
China looms as major energy consumer, according to BP annual review
World oil supply is becoming more diverse, with world oil production capacity comfortably exceeding world oil demand. That was how BP chief economist Peter Davies summarised key findings from the BP Statistical Review of World Energy 2003, the 52nd edition of the annual publication released in June. But perhaps the most startling development last year was the massive growth in energy consumption in China. China accounted for 68.5% of the increase in global primary energy consumption in 2002 and has become a major energy consumer and importer. Consumption of coal, which accounts for 66% of Chinese energy use, grew a massive 27.9%. Oil consumption increased 5.8%, or 332 000 b/d, accounting for all of the world's oil consumption growth in 2002, and China replaced Japan as the world's second largest oil consumer. Davies said producers were able to meet the needs of oil consumers during the Iraq war and during unplanned supply disruptions in Venezuela and Nigeria. ‘Consuming nations were not required to tap their emergency reserves. This is good news for those concerned about energy security, but it should not lead to complacency.’ OPEC, while using spare capacity of almost 4 million b/d to keep the market supplied during the war, cut its average daily output by 1.87 million b/d in response to weak global oil demand and to a 1.45 million b/d increase in non-OPEC production. OPEC production has declined in three of the last four years. The story was one of supply momentum that looks set to continue, according to Davies. He said Russian oil production was up 25% in three years, and Russia had been joined by a new group of oil producing basins, across several continents and regions, that have begun to grow rapidly. Production from Russia, the Caspian, the deepwater Atlantic Basin and Canada is up 3.3 million b/d (26.5%) in three years and has the potential to increase by another 5 million b/d by 2007.
Natural gas is the world's preferred non-transport fuel. Outside the Former Soviet Union (FSU) gas consumption has grown 3.4% a year over the past decade and its share of total energy consumption is now roughly equal to coal at 24%. US gas consumption grew 3.9% in 2002 as North American gas production fell 1.8%. Imported LNG is filling part of the gap. Producers are now considering options for delivering new sources of pipeline gas and LNG to this growing gas market. Commercial (non-hydro) renewable energies are growing rapidly, but their contribution to total world electricity generation remains small (1.7% in 2000 versus 1% in 1990). The review records that world consumption of primary energy increased by 2.6% in 2002, well ahead of the 10-year growth trend of 1.4% per annum. Reported growth in energy demand of almost 20% in China was behind much of this relative strength. Energy consumption in the world, excluding China, grew by less than 1% during the year, reflecting a second year of below-trend economic growth.
Next generation computing challenges for pre-stack depth migration and reservoir simulation
Bill Bartling, senior director, energy and sciences, Silicon Graphics, provides an analysis of the shortcomings of today’s computer systems and discusses where the future lies for compute-intensive operations in the E&P business. Oil and gas computational challenges, especially in seismic processing and reservoir simulation, have consistently outstripped the capabilities of even the most powerful computers. Continuing that trend, these same computational problems are now growing at a rate faster than Moore’s Law. In truth, the very large problems we are trying to solve today have always been around. Adapting to our surroundings, we have scaled them back to match the limited abilities of our machines, hoping that engineering breakthroughs would eventually compensate for those limitations. Many breakthroughs have indeed come to pass, especially in microprocessor speed, power, pricing and form factor. But these revolutions exposed new limitations that in many cases cancelled out some or all of the benefits. Scientists and engineers have rushed to consume the promises of blazingly fast gigahertz CPUs at highly competitive prices, only to find that I/O, bus speed, storage systems and code design prevented them from realizing their full potential. Idle cycles became the theme of the day. The goal in working with the complexity of seismic and reservoir simulation data is throughput: speed is merely a way to get there. The concept of using speed to deliver breakthrough throughput, and thus significantly faster computational times for much larger data models, continues to drive research in both computer and software design. The answer must lie in balanced systems, with each component optimized to do its part in delivering the revolution. The past few years have ushered in a new computational paradigm: the cluster.
This model is based on one of the most tantalizing promises of the Internet – a globally inter-connected computational grid of inexpensive systems that, together, can potentially combine to create the most powerful supercomputer ever built. The SETI grid (http://setiathome.ssl.berkeley.edu/) is an excellent example of such a system but, of course, with interconnects running at 28.8 kb/s in too many places, SETI@home highlights the importance of appropriate and adequate backplane bandwidth. At the other end of the spectrum are supercomputers, which, in truth, are highly compact computational grids with extraordinarily fast interconnects. In spite of their slower clock speed microprocessors, from their introduction they outran the fastest PCs thanks to efficient, stable and proven parallel operating systems and related optimizations that focused on the throughput of many joined processors, not just individual processor speed. Of course, the price point that comes with this sort of machine, while once easily justified, has come under increasing scrutiny from data processors, IT executives and P/L managers. Key components of the success of supercomputers, in addition to their interconnect speeds, have been effective I/O and data delivery systems to keep the processors fed and working, the lack of which has been a striking shortcoming of PC architectures applied to computationally intensive tasks. Direct-attached storage with fibre-channel connections has been the preferred solution to keep data flowing to the processors. New storage and delivery systems overturn that paradigm, offering breakthroughs in data delivery via Storage Area Networks.
Value of high speed multi-volume visualization and seismic data processing
In recent publications of First Break (March and May 2003) a number of special topics were covered addressing the power and flexibility of multi-volume visualization in the E&P data interpretation and integration phases. This article addresses how high-end visualization capabilities facilitate the workflow in data processing quality control (QC), employing a proprietary in-house system. This system delivers high performance, stereographic multi-volume visualization in a fully flexible environment from the large immersive theatre to the desktop.
How modern technology can meet needs of modern learning in geoscience
Jonny Hesthammer, formerly with Statoil and now a professor in the Department of Earth Sciences, University of Bergen, is a leading advocate of modern learning techniques for geoscience education. In this article he describes how new technologies can be adapted to meet the needs of both students and those already employed in industry. Modern technology can change the way we learn. Used correctly, it can enhance learning; used wrongly, it may reduce the learning effect. This article describes modern learning in light of (a) problem-based learning, (b) organizing and administrating content, (c) interactive multimedia learning experiences, and (d) geosimulators (advanced flight simulators). Two geological field courses are used to illustrate the concept. The field courses, in structural geology and sequence stratigraphy, were run in southeastern Utah and eastern Colorado. One course was held for geoscience students at M.Sc. level, while the other had participants from the petroleum industry. Both consisted of a pre-field course and the field course itself. During the courses, participants worked in groups, solving problems related to general topic disciplines and petroleum industry value chain processes. The participants used geosimulators and e-learning modules (interactive, multimedia learning experiences) to acquire the necessary knowledge. The article presents results of a major collaboration effort between Statoil and all Norwegian universities, called the Collaboration Agreement. The collaboration started in 1998, and Statoil annually spends approximately $3 million to strengthen learning both within the company and at the universities.
Why the US oil industry owes so much to a navy captain from Croatia
In two previous articles published in First Break (August 2002 and February 2003), Franco di Cesare and Francesco Guidi contributed to our knowledge of key historic figures in the Italian oil industry. They have now turned their attention to Captain Anthony Francis Lucas, the enterprising mechanical and mining engineer of Croatian origin who was to inspire the growth of the oil industry with a famous Texas oil discovery, as well as sow the seeds for the development of the discipline of petroleum engineering.
A fast sorting algorithm for large volume datasets
Y. Luo, M. Huwadi, K.P. Gunaratnam and M.N. Alfaraj introduce a new and efficient sorting technique, referred to as BT-Sort after the Bayer balanced tree (B-tree) method. It is fast and parsimonious in its use of memory and scratch space, and is thus very attractive for sorting very large datasets. One property of BT-Sort is that it minimizes the random I/O needed to access the scratch disk, regardless of the size of the dataset being sorted. Furthermore, it uses sequential I/O to and from scratch disks very efficiently, and does not require large memory. With such features, sorting 1.2 TB of input shot gathers into ordered common-midpoint gathers with BT-Sort would require only three days using four computing nodes (each with, say, 32 GB of memory and 375 MHz RISC processors), whereas the same exercise would typically take six to eight weeks using conventional sorting techniques. From a seismic operational point of view, the rapid increase in seismic trace volumes may force changes to some of the algorithms used in processing. Taking Saudi Aramco as an example, the current level of seismic-data acquisition there stands at 10 seismic crews, eight of which are 3D-seismic. The average count on these crews is about 2500 channels per crew. With an average daily production rate of some 2000 vibration points, the corresponding volume of seismic data flowing into processing is some 40 million traces per day. Consequently, some of the processing algorithms are being constantly updated to handle these massively increasing seismic volumes. Frequent sorting of seismic data into various domains (e.g. common-midpoint, common-receiver, etc.) has traditionally been avoided, mainly because of the lack of efficient sorting algorithms. In the extreme case, sorting very large datasets with only limited computer resources (a few GB of memory) is hugely inefficient, as the operation could take months to complete.
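BT-Sort itself is proprietary and its B-tree internals are not given here, but the key property the abstract describes — replacing random scratch-disk access with sequential runs — is shared by the classic external merge sort. The following is a minimal illustrative sketch of that general pattern (all names are the author's own, not from the BT-Sort paper): sorted runs are written to scratch files sequentially, then streamed back through a k-way merge.

```python
# Illustrative sketch of external sorting with sequential scratch I/O.
# This is NOT the proprietary BT-Sort algorithm, only the generic
# run-and-merge pattern that shares its sequential-I/O property.
import heapq
import os
import tempfile


def external_sort(records, key, max_in_memory):
    """Sort an iterable of string records too large to hold in memory.

    Phase 1: fill a bounded buffer, sort it, write it out as a run
             (one sequential write per run).
    Phase 2: k-way merge of all runs (one sequential read per run).
    """
    run_files = []
    buffer = []
    for rec in records:
        buffer.append(rec)
        if len(buffer) >= max_in_memory:
            run_files.append(_write_run(sorted(buffer, key=key)))
            buffer = []
    if buffer:
        run_files.append(_write_run(sorted(buffer, key=key)))

    # heapq.merge streams the runs lazily, so memory stays bounded.
    streams = [_read_run(path) for path in run_files]
    for rec in heapq.merge(*streams, key=key):
        yield rec
    for path in run_files:
        os.remove(path)


def _write_run(sorted_records):
    """Write one sorted run to a scratch file; return its path."""
    fd, path = tempfile.mkstemp()
    with os.fdopen(fd, "w") as f:
        for rec in sorted_records:
            f.write(rec + "\n")
    return path


def _read_run(path):
    """Stream one sorted run back, record by record."""
    with open(path) as f:
        for line in f:
            yield line.rstrip("\n")
```

The design point this illustrates is the one the abstract emphasises: both phases touch the scratch disk strictly sequentially, so the cost of disk access stays predictable no matter how large the dataset is relative to memory.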
As a compromise, geophysicists have at times opted to reformulate existing processing algorithms so that seismic data could be kept in a certain preferred order throughout the processing sequence. One example is the reformulation of the dip-moveout operator by Biondi and Ronen (1987) so that it can be applied directly to recorded shot profiles instead of common-offset gathers when data cannot be sorted to common offsets (for cost or other practical reasons), thereby avoiding the cost-prohibitive sorting process. Another, even worse, shortcoming lies in the inability to apply some domain-specific processing algorithms, such as those for noise suppression and Radon multiple removal, which require implementation in the common-receiver and common-midpoint (CMP) domains, respectively. In the case of applying the parabolic Radon transform (PRT) to normal-moveout-corrected CMP gathers, for example, geophysicists have attempted to speed up the process by optimizing the PRT operator (Kelamis and Chiburis 1992; Beylkin and Vassiliou 1998); this multiple-elimination process in its entirety can benefit even further if data are sorted from shot to CMP gathers in a much faster way, such as that proposed here. Unfortunately, without a fast sort, the seismic data may suffer unnecessary noise contamination if the expensive (slow) sort is to be avoided (Al Dossary et al. 1998). When using conventional (quick or heap) sorting algorithms, one may in theory estimate the cost, in terms of the number of operations to be performed, as N log2 N (Press et al. 1988), where N is the number of traces. On this basis, sorting 500 million traces, for example, would require about 15 billion operations. In practice, however, such an implementation is more complicated than simply coping with a formidable number of operations. Specifically, the entire dataset would have to reside physically in memory (internal sorting) on a stand-alone system, which most computer-hardware configurations cannot support.
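The N log2 N figure quoted above is easy to verify with a back-of-envelope calculation (the function name below is mine, not from the paper); 500 million traces give roughly 14.4 billion comparisons, which the abstract rounds to about 15 billion.

```python
# Back-of-envelope check of the N * log2(N) comparison-sort estimate.
import math


def comparison_sort_ops(n):
    """Textbook operation-count estimate for a comparison sort."""
    return n * math.log2(n)


ops = comparison_sort_ops(500_000_000)  # 500 million traces
print(f"~{ops / 1e9:.1f} billion operations")  # prints "~14.4 billion operations"
```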