70th EAGE Conference and Exhibition - Workshops and Fieldtrips
- Conference date: 09 Jun 2008 - 12 Jun 2008
- Location: Rome, Italy
- ISBN: 978-94-6282-104-0
- Published: 09 June 2008
-
Integrated geophysics in challenging areas: from joint planning to joint interpretation
Authors P. Dell’Aversana, I. Giori, M. Gilardi, G. Spadini and R. Puricelli
In 2007 Eni acquired a multidisciplinary geophysical survey in a desert area located in southwestern
Libya, combining seismic and non-seismic geophysical methods.
The investigated area is located in SW Libya and represents the erosion remnant of a
Palaeozoic intra-cratonic basin located on the Saharan Platform of North Africa. The
sedimentary fill is mainly Paleozoic to Mesozoic in age and reaches a thickness of about 4000
m in the basin centre.
The project was aimed at reducing the exploration risk using different geophysical disciplines.
Non-seismic data were acquired along 2D seismic lines that had been shot previously.
Densely spaced gravity stations were located along these seismic lines (called “central lines”).
The acquisition grid was progressively coarsened along lines parallel to the central lines, in
order to be integrated with the regional gravity data already available in the area.
Densely spaced magnetotelluric (MT) data were acquired along the same seismic/gravity lines for a total of 50 km. The distance between successive MT stations was set at 100-200 m, depending on operational difficulties.
High ground resistance was the biggest factor affecting data quality in the investigated area. It was reduced by injecting salty water into deep holes where the electrodes were positioned. In total, 254 MT stations were acquired with a satisfactory signal-to-noise ratio. Reliable inversion results (Figures 1 and 2) were obtained and checked against the resistivity logs available in the area.
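As a rough reminder of why MT soundings can image a basin of this depth (the relation below is a standard textbook approximation, not a result quoted by the authors), the skin depth of the MT signal in a half-space of resistivity ρ at frequency f is approximately

\[ \delta \approx 503\,\sqrt{\rho / f} \ \text{m}, \]

so for sedimentary resistivities of a few tens of ohm-metres, frequencies below about 1 Hz comfortably reach the ~4000 m basin fill.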
-
Mapping geothermal reservoirs using broadband 2-D MT and gravity data
Authors K.-M. Strack, N. Allegar, G. Yu, H. Tulinius, L. Adam, Á. Gunnarsson, L. F. He and Z. X. He
Geothermal energy is playing a larger role as an alternative energy source for both electricity
generation and for space heating. Our recent magnetotelluric (MT) and gravity surveys in
Iceland and Hungary have both characterized known geothermal reservoirs and identified new
drilling opportunities. The success of these surveys has resulted in additional 2D and 3D MT
and gravity data acquisition and the onset of a drilling program to evaluate the identified
geothermal potential.
Higher temperatures and salinity of the pore water, as well as the concomitant increased rock
alteration associated with geothermal areas, often contribute to a decrease in the bulk
resistivity in a rock mass. The zones of low resistivity that are associated with geothermal
reservoirs can be detected by electromagnetic techniques such as the MT method.
We used MT/AMT measurements to acquire natural time varying electrical and magnetic
fields at frequencies from 10,000 Hz down to 0.001 Hz. The EM field propagates into the Earth as
coupled electrical and magnetic fields and these fields are commonly represented in the
frequency domain as a four element impedance tensor. The characteristics of the MT
resistivity curves are analyzed to extract structural information (associated with resistivity
contrast) that is used to determine high-permeability zones and up flow zones of hydrothermal
systems (Malin, Onacha, and Shalev, 2004).
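For reference (standard MT theory rather than anything specific to these surveys), the four-element impedance tensor Z relates the horizontal electric and magnetic field components in the frequency domain, and an apparent resistivity can be derived from each element:

\[ \begin{pmatrix} E_x \\ E_y \end{pmatrix} = \begin{pmatrix} Z_{xx} & Z_{xy} \\ Z_{yx} & Z_{yy} \end{pmatrix} \begin{pmatrix} H_x \\ H_y \end{pmatrix}, \qquad \rho_{a,ij} = \frac{1}{\omega\mu_0}\,\lvert Z_{ij} \rvert^2 . \]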
To complement the MT data, gravity surveys were acquired along the MT survey lines to
assist in detecting fault systems below the surface. Fault system information can be used to
analyze and to understand groundwater channels and water flow directions. At the same time,
gravity data may be used to interpret the subsurface and to aid in locating prospective heat
sources. Integrating the MT and gravity data reduces the intrinsic ambiguity of either dataset
and produces a more robust interpretation.
-
Feasibility study of electromagnetic, gravimetric and aeromagnetic methods in sub-basaltic settings
Authors R. F. Reynisson, S. Fanavoll, G. Waag and J. Ebbing
The objective of this study was to investigate the feasibility of using gravity, magnetic,
magnetotellurics (MMT) and electromagnetic (EM) data in the sub-basalt imaging problem.
The Møre volcanic margin is a part of the mid-Norwegian margin and consists of a central area with a deep, NE-SW-trending Cretaceous basin. The basin is flanked to the west by the Møre Marginal High, which is characterized by thick, Early Eocene basalt flows overlying an unknown substrate. Numerous previous experiments have demonstrated that standard seismic acquisition and processing techniques are not capable of characterizing the volcanics or imaging beneath them.
-
CSEM data in integrated data analysis
By K. N. Madsen
Offshore hydrocarbon exploration utilizes various remote sensing techniques in order to
increase the knowledge about the subsurface before drilling decisions are made. Seismic
exploration is by far the most common tool and uses acoustic and elastic waves to map
boundaries between subsurface layers with contrasting P- and S-wave velocities. Seismic data
can provide relatively high-resolution information about geological structures and possible
hydrocarbon traps.
-
Effects of Fluid Saturation on Seismic AVA and CSEM Response in the Norwegian Sea
Authors L. MacGregor, J. Walls, R. Shu, N. Derzhi and P. Harris
One of the most promising trends in geophysical exploration today is the integration of 3D
seismic and controlled source electromagnetic (CSEM) surveys. The combination of these
data sets offers explorationists a powerful tool for risk reduction. Seismic data provides the
detailed depth and structure information, while CSEM provides electrical resistivity for fluid
discrimination. The combination of the two can greatly improve our ability to invert for
porosity, lithology, and hydrocarbon saturation. However, the key to inversion is forward
modeling. In this paper we will show how well logs and rock physics can be used to provide
the input for modeling both seismic amplitude versus offset (AVO) and CSEM radial
amplitude versus source-receiver spacing.
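As a sketch of the rock-physics link from fluid saturation to resistivity that such forward modeling typically relies on (Archie's law is used here as an illustrative assumption; the abstract does not spell out the authors' actual rock-physics workflow, and all parameter values below are arbitrary defaults), a minimal Python helper might look like:

import numpy as np

def archie_resistivity(phi, sw, rw=0.03, a=1.0, m=2.0, n=2.0):
    """Bulk resistivity (ohm-m) from Archie's law.

    phi : porosity (fraction), sw : water saturation (fraction),
    rw  : brine resistivity (ohm-m); a, m, n are Archie parameters.
    All defaults are illustrative, not calibrated to any field.
    """
    return a * rw / (phi**m * sw**n)

# Example: a 25% porosity sand at 20% water saturation (hydrocarbon-bearing)
# versus the same sand fully brine-saturated.
print(archie_resistivity(0.25, 0.2))   # ~12 ohm-m (resistive)
print(archie_resistivity(0.25, 1.0))   # ~0.5 ohm-m (conductive, wet)

The same porosity and saturation values would feed a Gassmann-type substitution on the seismic side, which is how the resistivity and AVO responses end up being modeled from a common set of reservoir properties.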
-
Onboard prestack WEM on WAZ data – A new breakthrough
Authors J. Kapoor, D. Wilson and T. Dittrich
The past couple of years have seen a tremendous increase in the acquisition of wide
azimuth (WAZ) surveys that provide improved subsalt imaging. These surveys can
acquire data three times faster than conventional narrow azimuth (NAZ) surveys and at
the same time increase the fold of the data by a factor of five. Therefore we are looking at
processing fifteen times more data while the need for fast delivery of imaged data is
accelerating due to lease expiry, lease sales, drilling decisions and other financial
conditions.
-
In Field Processing - A must for high productivity vibroseis acquisition
Authors J. Meunier, T. Bianchi and J. J. Postel
Various techniques have been proposed to increase vibroseis productivity. The most popular of them are Slip Sweep, introduced by PDO, and HFVS, introduced by ExxonMobil.
Reducing the extra noise that these techniques generate requires extra processing. Their operations are also more complex and include the collection of source data that can be much heavier to handle. A way to prevent these extra tasks from adding to the cycle time between acquisition and interpretation is to perform them in the field as soon as possible after the data have been acquired.
-
Improving Image by Anisotropic Migration - Mississippi Canyon, Gulf of Mexico
Seismic anisotropy refers to seismic waves traveling with different velocities at different propagation angles, usually in consolidated, shale-prone areas such as the Gulf of Mexico and West Africa. A single imaging velocity for any given subsurface location (regardless
of propagation angles) has been commonly used in the industry and is called Isotropic
Migration. Images obtained with Isotropic Migration are often mis-positioned, resulting
in extra cycle time needed to calibrate and correct isotropic images for prospect
evaluation and well planning.
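For context (a standard weak-anisotropy parameterization, not a formula quoted in this abstract), the angle dependence that anisotropic migration accounts for is often written with Thomsen's parameters ε and δ for the P-wave phase velocity:

\[ V_P(\theta) \approx V_{P0}\left(1 + \delta \sin^2\theta \cos^2\theta + \varepsilon \sin^4\theta\right), \]

where θ is the phase angle from the symmetry axis and V_{P0} the vertical P-wave velocity; isotropic migration corresponds to ε = δ = 0.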
-
Non-linear 3D tomographic inversion of residual moveout in depth Kirchhoff migrated CIG
Authors R. Baina, F. Adler, A. Soudani and J.-B. Richard
It is well known that depth imaging brings a viable solution to complex problems and helps interpreters to quantify and to understand the architecture of the reservoir under study. This recognition leads to pressure to depth-image larger and larger areas and to shorten the delivery delay. However, the success of a depth imaging project requires testing of different methodologies and relies on several trials and errors during depth velocity model building. This means the use of a costly iterative sequential scheme of full PSDM followed by linearized tomographic inversion.
We present here a new method for depth velocity model building which we believe will help us achieve a fast turnaround of depth imaging projects and give us full flexibility for testing and adjusting our model parameterisation and inversion settings.
-
Rapid structural framework generation for reservoir property propagation and modeling in complex structures
In the last few years, computer-assisted tools for horizon picking have improved their speed, usability and quality of picking, so that interpreters' productivity for this task has increased by large amounts. Some have quoted productivity gains of three months to one
week. In addition these tools often produce auxiliary outputs of pick quality or stratigraphic
attributes. More recently, attention has been focused on helping improve the interpreter's productivity when performing fault interpretation and framework building. For these tasks
software developments have focused on increased use of coherency and volume curvature.
These attributes are then used to inform automatic fault extraction before detailed
interpretation or fault tracking while interpreting so that fault surfaces are generated with
significantly less manual picking. The result of reducing the effort needed to produce fault
and horizon surfaces has been a huge increase in the number of these surfaces interpreted
within a seismic survey. Semi-automated Structural Framework building then enables these
surfaces and surfaces developed from sub-seismic well correlation to be incorporated into a
very detailed earth model that can be easily populated with reservoir parameters of pressure,
porosity, saturation and permeability. Such models are suitable for use in reservoir simulation
or detailed well planning. The ability to perform all these activities on a single data
representation with computer assistance at each step has removed many of the bottlenecks in
seismic data interpretation and the integration of well and seismic data interpretation.
-
Automatic interpretation of salt bodies by iterative image segmentation
Authors A. Halpert, J. Lomask, B. Clapp and B. Biondi
Velocity model building is the most human-intensive component of the depth-imaging
process, and it is often the bottleneck when trying to reduce the cycle time of large seismic
imaging projects. For near-salt or sub-salt imaging the interpretation of the salt-body
geometries can be extremely time consuming. Current automatic methods based on horizon
tracking are prone to errors, in particular when the salt boundaries are poorly imaged. Lomask
et al. (2007) have proposed an automatic method to interpret salt boundaries that segments the
image cube by solving a global optimization problem, and thus it is more robust than local
methods based on horizon tracking. We apply the image-segmentation method to the iterative
velocity-model building process. We show how it can be applied to a conventional sediment
and salt flooding procedure and we discuss how to use the boundaries picked at the previous
iterations as a constraint to the iterative solution and thus make the method more reliable.
-
Depth on Demand – Fast Beam Migration and Integrated Visualization for Rapid Velocity Depth Model Building
Authors K. Schleicher, J. Lima, T. Bird and P. Wijnen
Here, we will present a set of tools that we find are very well suited to address this challenge. Through our beam migration approach, we are able to image large datasets rapidly and accurately, with a large migration aperture and a full dip range. The beam migration process is separated into two main components: first, a dipscan process that is performed once for the whole survey and whose output is then stored on disk for later use; second, a fast migration step that is carried out ‘on-demand’ or through multiple model iterations. With modern computer hardware this imaging step can be performed in almost real time. We also present beam-based imaging techniques that allow more rapid model building in salt regions through the use of velocity-discriminating filtering in the imaging process.
-
Reducing time to interpretation decisions with RTM
In this paper, we show some history as to why mixed migration algorithms have been used in
the model building portion of the imaging sequence. We go on to show that by using a single
algorithm in both the model building and final imaging phases of a project, the cycle time can
be collapsed. In this paper, a two way wave equation algorithm (RTM) is proposed as a tool
to help significantly reduce total project turnaround.
-
Looking ahead: cycle time reduction for GOM subsalt imaging projects
There has been a constant drive to reduce cycle time for depth imaging projects in the
Gulf of Mexico. In recent years, facing impending lease expirations, upcoming lease
sales, and tight drilling schedules, oil companies have continued to compress the cycle
time of Gulf of Mexico depth imaging projects. Seismic companies have increasingly
higher stakes in cycle time reduction, as well. This is a result of the surge of regional
non-exclusive wide-azimuth data acquisition in the Gulf of Mexico's deep waters. To
quickly recoup their companies' investments of hundreds of millions of dollars, processing geophysicists in seismic companies are increasingly under pressure to process large amounts of data in record times. This is in addition to simultaneous demands for geophysicists to apply the most current technologies, such as 3D SRME and RTM.
-
Rapid turnaround for processing and analysis of frequent time-lapse surveys
Authors J. P. van Gestel, R. Clarke and O. I. Barkved
We present a case study of fast turn-around processing and analysis of time-lapse seismic data
acquired using the Life of Field Seismic (LoFS) permanent receiver array. We combine a
small computer cluster, a broadband connection to the offshore facilities and script-based processing software into an automated workflow from acquisition to interpretation. This results in delivery of time-lapse processed volumes to the interpreters within a week after the last shot. By automating the generation of time-lapse difference volumes and extractions, we
deliver fast-track interpretation of the main time-lapse effects within hours of data arrival in
Stavanger. These main time-lapse effects are captured in graphic files, which are all
automatically linked in HTML documentation. This automation and standardization has allowed the main workload of the geophysicist to move from data manipulation to interpretation and integration of the collected data.
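As a minimal sketch of the kind of scripted reporting described here (the directory layout, file names and HTML structure below are illustrative assumptions, not the authors' actual LoFS workflow), a small Python step that collects time-lapse difference images and links them into one HTML page could look like:

import glob
import html
from pathlib import Path

def build_report(image_dir="difference_maps", out_file="timelapse_report.html"):
    """Link all PNG difference maps found in image_dir into one HTML page."""
    images = sorted(glob.glob(str(Path(image_dir) / "*.png")))
    rows = "\n".join(
        f'<h2>{html.escape(Path(p).stem)}</h2>\n<img src="{html.escape(p)}" width="800">'
        for p in images
    )
    Path(out_file).write_text(
        f"<html><body><h1>Time-lapse fast-track report</h1>\n{rows}\n</body></html>"
    )

build_report()  # run after each survey's difference volumes have been plotted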
-
Cycle-Time Challenges for Geophysics in the Evaluation of a Tight Gas Pilot Project – Example from South Sulige Permit, Sulige Field, Ordos Basin, China
By J. Suiter
Historically, well targeting in tight gas reservoir plays has focused on ‘sweet spots’ (e.g. high net-to-gross, high Sg, natural fractures, etc.) which aim to highlight areas of increased productivity. On Sulige, reservoir and
productivity predictions have rarely been straightforward
(no natural fracturing, complex diagenetic history) and the
already-producing wells by PetroChina on North Sulige
more often than not need mechanical stimulation (i.e.
hydraulic fracturing) to maintain prolonged production in
the low permeability, fluvial sandstones.
-
Rethinking the Seismic Value Chain, excellence in terms of cycle time and output quality
By G. Berkhout
Excellence in the seismic value chain will depend on the quality of the involved
specialized tools and skills (abilities) as well as the capability of organisations to
combine these abilities to achieve maximum value in terms of speed and quality.
-
How use of Visualization and Virtual Reality evolved into a major business impact
Norsk Hydro started their Virtual Reality research activity in 1997, developing an in-house
application in collaboration with Christian Michelsen Research. The application was sold and
commercialized by Schlumberger, known as Inside Reality, but development has separated
the two applications. In HydroVR, important application tools were developed in close collaboration with the users, securing a tight linkage to the daily workflows. The development and use of HydroVR has been regarded as a success, but it has been used mainly by Production and Research.
-
The Evolution of Visualization
By D. Fanguy
The subject matter of this workshop is Visualization; is there anything left to do? The short
answer is yes, but we need to understand and review the evolution of Visualization to truly
answer and come to this conclusion. This paper is a brief summary of the past twenty years
and how we came to build Visualization Centers in the Oil & Gas industry. The term
Visualization replaced Virtual Reality a few years ago and now more and more of us use the
term Collaboration. So no matter which word you use, we all know that over time someone will change it and a new term will become the "buzz word" of the day. However, the important message is not how we describe these centers but how we use them. The ever-changing technologies we utilize in these centers have in fact changed the way we use the centers. This will be one point explained in more detail throughout this paper, along with why we cannot be sure exactly what we need to do in the future. So, as stated above, the answer to the question "is there anything left to do?" is always going to be "yes", but how we use it will evolve as well.
-
Covisualization: Lowering barriers-to-entry for multidisciplinary 4D data-based decision making
Authors R. Mayoraz, G. Brew and A. Paradis
Time-lapse seismic acquisition and other temporal monitoring initiatives are becoming
increasingly commonplace. These data, and much additional time-variant information, can be
critical to the decision-making process for reservoir development of mature assets. One of the
best ways of rapidly reconnoitering and analyzing these data is through comprehensive
visualization solutions.
However, current workflows are not conducive to simultaneous visualization of data from
disparate software applications. Time-lapse seismic data, reservoir simulation output, and
production histories are produced in a wide variety of software packages with different
formats and spatial arrangements. Visualization in the same virtual space and time can be an
extremely difficult proposition.
The solution we present makes the import and integration of these data, from many different
sources, an easy and potentially automatic process. By lowering the barriers-to-entry to
visually integrate all the data needed in the reservoir development process, better, faster, and
more accurate decisions can be made.
-
Basin Scale Exploration – A New Frontier for Desktop Visualization
Authors N. Purday, D. M. Roberts and M. Cole
Visualization technology has changed dramatically over the past few years, moving
from a high end tool used only in visualization rooms by major oil companies to a
desktop solution applied to a significant range of exploration and field development
opportunities. Visualization has become an enabling technology to both speed seismic
interpretation and integrate geologic and geophysical data much more effectively than
ever before. The rapid changes in adoption are being driven by the requirement for a
significant reduction in exploration risk, cost reduction and cycle time improvement.
This paper will focus on thinking beyond current volume interpretation workflows to
demonstrate how visualization can play a vital role in Basin Scale exploration and
interpretation.
-
Sismage DisplayWall: a new environment for complex Seismic Data Interpretation.
By N. Keskes
For more than ten years, the “DisplayWall” technology has been growing very fast and has become widely used in various domains: geosciences, astronomy, medicine, etc. The academic and
industrial research around this topic is also very active.
The main advantage of this technology is the ability to display multiple contents and very large, high-resolution windows.
-
Utilizing the Benefits of Virtual Environments
Authors T. Holtkämper, A. Dressler and M. Bogen
Over the last ten years many oil & gas companies have installed Virtual Environments in
order to optimize, complement, or replace steps in their E&P workflow. The success of the
deployed Virtual Reality technology varies from company to company, and does not always fulfill expectations.
The experience gained in the VRGeo Consortium also indicates that the potential of VR technology is not yet used to its full extent. Only if VR technology can clearly show its benefits over existing technology will users be willing to adopt it and integrate it into their daily workflow. In this respect, we describe by means of illustrative examples what can be done to better utilize the benefits of Virtual Environments.
-
Sub-Surface Visualisation in BP Technology – People – Business Drivers
Authors K. Hansch and J. Thomson
BP has been an early adopter of the use of visualisation in sub-surface workflows. Over the
past 15 years the implementation of visualisation technology and the development and sharing
of skills and knowledge have led to the current routine use of visualisation in our daily work.
Visualisation is no longer seen as a discipline in itself but as a tool to help in the delivery of
our business objectives. Dedicated effort was required to reach the current level of user
engagement. A large number of lessons have been learnt during the implementation of
visualisation into the sub-surface workflows.
-
Digital Oilfields: Real-time Decisions
The state of the art in visualization today is embodied in the real-time aspect of group
decision making. Real-time group decision making environments are used in many
applications in industries including engineering analysis and design in manufacturing,
government command and control for disaster response, drug design in pharmaceutical
companies, and immersive environments in academic research. However, in no industry is
the applicability and return on investment more clear than in the exploration and production
portion of the oil and gas industry.
-
The Value of Visualization in Exploration and Production: Anecdotal Evidence and Quantitative Data
Authors G. A. Dorn, G. S. Pech, K. Gruchalla and J. Marbach
Since the introduction of large-scale visualization systems in the energy industry over a
decade ago, discussion has focused on the relative benefits, if any, of conducting typical
exploration and development tasks in a large visualization environment vs. a desktop display.
The only industry specific information with regard to benefits has been anecdotal in nature.
Five human performance studies have been conducted over the last five years to quantify the
amount of benefit achieved in a set of directional well path planning problems using large-scale visualization. These studies have demonstrated that significant, quantifiable
improvements in efficiency and accuracy of results are achieved using large-scale
visualization environments for design engineering tasks, such as well path planning.
-
Enterprise-class Virtual Environments Interoperability for the Upstream Industry: Promise or Peril
By E. J. Dodd
The exploration and production (E&P) enterprise is undergoing a tectonic shift within its
information and communication technology (ICT) ecosystem. E&P companies need to better
utilize their existing assets, increase their resource portfolios and bridge the looming
knowledge gaps. In particular, the E&P Industry needs to take an active role in defining and
supporting open and free standards that integrate disparate data sources into enterprise-class,
secure Virtual Worlds. This paper briefly discusses emerging ICT research and innovation around the transformation of Virtual Worlds into Virtual Environments. There is an evolution from today’s
classical interaction and display paradigm to the fully-integrated global enterprise using the
3-D Internet.
-
NVIDIA Advanced Visualization Solutions for the Oil & Gas Market
The world is estimated to hold about 940 billion barrels of undiscovered oil and natural gas
resources, much of it in remote and difficult to reach places, such as deep water, deserts, and
arctic environments. Oil & gas companies are looking for technologies to help increase
accuracy of exploration and production, while reducing risks and costs. Efficient and fast data
interpretation amongst teams of specialists is key. Data have to be visualized with a maximum
of details and be shared within a small group, a large audience or with remote colleagues.
A standard 19" LCD monitor can display up to 1.9M pixels at 1600 x 1200 resolution, but the image is too small to be shared, too small to interpret efficiently, and it takes far too long to zoom in and out.
A 30" LCD can display up to 2.9M pixels at a 2560 x 1600 resolution; that is great for two or three engineers working together. But aiming for a 15 frames per second animation, for a fluid seismic interpretation or reservoir simulation display, means that the system has to handle 43.5M pixels per second. Can it be done on a standard workstation? Can you even ask for a larger display size and a faster frame rate, for increased efficiency, from a single system?
-
HoloVizio: The Next Generation of 3D Oil & Gas Visualization
Authors T. Balogh and P. T. Kovács
We present the HoloVizio system design and give an overview of Holografika’s approach to 3D displaying. The patented HoloVizio technology uses a specially arranged array of
optical modules and a holographic screen. Each point of the holographic screen emits light
beams of different color and intensity to various directions. With proper software control,
light beams leaving the pixels propagate in multiple directions, as if they were emitted from
the points of 3D objects at fixed spatial locations. We show that direction-selective light emission is a general requirement for every 3D system, and we discuss the advantages of light-field reconstruction over the multiview approach. We describe the 10 Mpixel desktop display and the 50 Mpixel large-scale system that enables collaborative work in real 3D, surpassing the limitations known from stereoscopic systems. We cover the real-time control issues in high pixel-count systems with the HoloVizio software environment and describe concrete developments targeting 3D oil & gas visualization applications.
-
Improved asset management with network-centric visualization
By Y. Nir
Today’s oil and gas industry is faced with a growing need to not only find and explore new
energy reserves, but also to manage existing assets more efficiently and in an integrated way.
This involves multiple teams and people with various backgrounds and skills working
together to share information, collaborate and make faster and better decisions. The key issue is usually not the collection of data but the ever-increasing amount of available data. This presents a considerable challenge to companies striving to exploit this
information for competitive advantage.
This paper explores how recent breakthroughs in professional visualization can help improve
the collaboration process by visualizing and sharing various types of data, and how this
ultimately leads to better asset management.
-
Regional Geologic Visual Integration Offshore Brazil
Authors K. P. Boyd and D. M. Roberts
Recent advances in the power of 64-bit PC hardware, together with the tremendous power of modern visualization software applications, now make it possible to visualize datasets at basin scale and even continent scale.
-
Open Inventor and Avizo: commercial cross-discipline visualization tools
By F. Gamba
Mercury Computer Systems provides commercial visualization solutions for a wide selection
of markets such as Medical, Aerospace, Oil and Gas, Fluid-Dynamics and Materials Science.
In order to provide cross-discipline effective solutions, Mercury leverages its know-how in
core graphics technology.
Modern GPUs supply the computational power to make a step forward in 3D volume
rendering quality and interactivity. After several years of academic research, new techniques become available in commercial solutions such as Open Inventor by Mercury and Avizo.
Extended usage of computational clusters with thousands of nodes allows the generation of massive datasets that exceed the hardware resources of the single workstation used for visualization. To overcome these limitations, smart management of GPU/CPU memory and CPU/GPU computational power becomes crucial. A Large Data Management (LDM) engine allows the user to efficiently visualize hundreds of GB of seismic data and reservoir models with hundreds of millions of cells. Mercury recently introduced a new modular visualization
framework, called Avizo, which brings all the Open Inventor technology to the end-user level.
Avizo is the best tool to experience cross-discipline visualization within the Oil and Gas
workflow.
-
The Future Of Visualisation - Steering Through The Fog
By H. Lauferts
The vision of future visualisation is an environment where technology provides the means to
experience data, subsurface models and production facilities as if they were part of the real
world.
Visualisation should utilize our natural senses as in the real world: visibility in three
dimensions and a virtual reality that can be felt and touched.
We recognize the technology elements that lead the way to this vision: touch interfaces,
gesture control, holography and auto stereoscopy to name a few. These tools help us steer
through the fog and may be part of the solution to reach our vision. We do not know how
much time it will take to reach this vision, but we are convinced that given our current
business challenges we can’t wait for the developments to be made for us. We have to
promote, sponsor, encourage and actively steer our stakeholders on the road to future
visualisation.
-
The Human Factor in Interpretation and Visualisation
By R. Gras
E&P software has rapidly evolved from the early interpretation workstations of the 1980s and 1990s to
a sophisticated, integrated and unified system for interpretation, visualisation and increasingly
virtual or augmented reality. However, relatively less effort has been spent in investigating
how people, either teams or individuals, interact with these technologies towards realising the
value for their organisations. In recent years much emphasis has been placed towards
enabling team-based collaborative work processes within visualisation centers (Collaborative
Visualisation Environments, henceforward abbreviated as CVEs), but a simple truth remains
that specific core tasks in E&P are performed primarily by individuals. At risk of
generalisation, whereas Development and Production are predominantly team-based work
processes, Exploration on the other hand is a task that relies heavily on an individual’s skills
for discovery. The E&P enterprise that recognizes the individual’s relevant skills and provides
an environment for both the individual as well as teams to perform optimally gains a
significant competitive advantage. Additional efforts are needed to specify and provide
visualisation environments for individuals or small teams geared towards Exploration.
-
High-Order Acoustic Scheme for Wave Propagation Modeling
Authors I. Tarrass, A. C. Bon and P. Thore
We present a high-order finite-difference numerical scheme to simulate the acoustic wave equation. The scheme uses two coupled equations in pressure and displacement. It is 8th order in space and 2n-th order in time (n ≥ 1). The parallelism of the code is based on a message-passing implementation to handle one simulation, and on a master-slave paradigm to simulate a large acquisition campaign. The code has been tested on different architectures and presents a high level of portability.
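To make the high-order stencil idea concrete, here is a minimal 1D sketch (8th-order central differences in space, second-order leapfrog in time, constant density and velocity); it is an illustration only, not the authors' coupled pressure-displacement scheme or their MPI implementation, and all grid parameters are arbitrary demo values:

import numpy as np

# 8th-order central-difference coefficients for the second spatial derivative
C = np.array([-205/72, 8/5, -1/5, 8/315, -1/560])

def step(p, p_old, vel, dt, dx):
    """Advance the pressure field by one time step (interior points only)."""
    lap = C[0] * p
    for k in range(1, 5):
        lap[k:-k] += C[k] * (p[2*k:] + p[:-2*k])
    lap /= dx**2
    return 2.0 * p - p_old + (vel * dt)**2 * lap

# Tiny demo: a Gaussian pulse propagating in a homogeneous 1D medium
nx, dx, dt, nt = 401, 5.0, 0.0005, 400
vel = 2000.0
x = np.arange(nx) * dx
p = np.exp(-0.0005 * (x - x[nx // 2])**2)
p_old = p.copy()
for _ in range(nt):
    p, p_old = step(p, p_old, vel, dt, dx), p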
-
GEOCUBIT, an HPC parallel mesher for Spectral-Element Method seismic wave simulation
Authors E. Casarotti, M. Stupazzini, S. J. Lee, D. Komatitsch, A. Piersanti and J. Tromp
Wave propagation phenomena can nowadays be studied thanks to many powerful numerical
techniques. Spurred by the computational power made available by parallel computers,
geoscientists and engineers can now accurately compute synthetic seismograms in realistic
3D Earth models. In this field, the Spectral Element Method (SEM) has convincingly
demonstrated the ability to handle high-resolution simulations at global (e.g., Komatitsch et al., 2005), regional (e.g., Komatitsch et al., 2004; Lee et al., in press) and local scales (e.g., Stupazzini, 2004).
The SEM is a generalization of the Finite Element Method (FEM) based on the use of
high-order piecewise polynomial functions. In the coming Petaflops era, the SEM should
become a standard tool for the study of seismic wave propagation, both for forward and
inverse problems. The more the power provided by computer clusters, the higher the
resolution that is available for the simulations. Consequently, the definition of a good
geological model and the creation of an all-hexahedral unstructured mesh are critical.
-
Massively parallel computations for the solution of the 3D-Helmholtz equation in the frequency domain.
Authors H. Calandra, I. Duff, S. Gratton, X. Pinel and X. Vasseur
The topic of our work is the solution of the three-dimensional Helmholtz equation in the frequency domain on massively parallel computers, modeled by the partial differential equation Δu + (2πf)²/c² u = g, with some absorbing boundary conditions, where u is the pressure of the wave, f its frequency, c the propagation velocity of the subsurface and g is a Dirac function that represents the wave source in the frequency domain. This equation is involved in an inverse problem modelling
wave propagation within the Earth. The solution of this inverse problem enables geophysicists to deduce the structure of the subsoil from experimental data. An explicit solution method (time domain) is often considered because it keeps the memory requirement acceptable. But working in the time domain supposes that stability conditions on the discretization scheme hold both in time and space, which often leads to very small time steps (i.e. long simulation times) for real problems. One of the great advantages of the frequency-domain formulation is that the stability conditions of the discretization scheme rely only on the frequency. The frequency formulation is, however, much greedier in memory than the time-domain one, because standard discretization methods such as finite-difference and finite-element methods lead to linear systems of size depending linearly on the frequency. Nevertheless, according to recent trends concerning massively parallel architectures, solving the implicit Helmholtz equation now seems feasible, because large distributed memories and efficient interconnects are becoming available. We shall show that linear systems of size more than one billion can be solved by present supercomputers in a few minutes.
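As a toy illustration of the frequency-domain formulation (a minimal sketch only: 2D instead of 3D, second-order finite differences, a constant velocity, and a small complex shift standing in for the absorbing boundary conditions; none of this is the authors' massively parallel solver), the linear system can be assembled and solved with a sparse direct factorization as follows:

import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n, h = 101, 10.0            # grid points per side, grid spacing (m) - illustrative values
f, c = 5.0, 2000.0          # frequency (Hz) and constant velocity (m/s) - illustrative values
k2 = (2.0 * np.pi * f / c) ** 2

# 1D second-derivative operator and 2D Laplacian via Kronecker products
D = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) / h**2
I = sp.identity(n)
# The small imaginary part is a crude damping term standing in for absorbing boundaries.
A = sp.kron(I, D) + sp.kron(D, I) + (1.0 + 0.05j) * k2 * sp.identity(n * n)

# Dirac-like point source in the middle of the grid
g = np.zeros(n * n, dtype=complex)
g[(n // 2) * n + n // 2] = 1.0 / h**2

u = spla.spsolve(A.tocsc(), g)                   # sparse direct solve (LU factorization)
print(np.abs(u.reshape(n, n)[n // 2, n // 4]))   # field amplitude away from the source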
-
The role of High-performance computing in seismic imaging and interpretation
Authors B. Biondi, B. Clapp and A. Valenciano
Progress in seismic-imaging technology is driven by advancements in data acquisition and
high-performance computing. Wide-azimuth acquisition geometries of both marine and land
data are dramatically changing the data we image. The commoditization of multi-core
processors and the availability of ultra-fast hardware accelerators (FPGAs, GPUs, Cells, …)
will change the way that we image and interpret those new data sets. These hardware
improvements will enable the application of imaging operators that are more accurate in both
the modeling of the underlying physical phenomena (e.g. wave propagation vs. ray-tracing)
and the approximation of the actual inversion of the recorded data. The future availability of workstations with multi-core CPUs will enable the exploitation of expensive numerical algorithms to support interpretation. This should lead to dramatic improvements in structural and stratigraphic interpretation in areas where the complexity of the velocity model requires a tight loop between interpretation and processing.
-
Pushing limits of the 3D acoustic waveform inversion in the frequency domain
Authors H. Ben-Hadj-Ali, F. Sourbier, V. Etienne, S. Operto and J. Virieux
We present a 3-D acoustic full waveform tomography (FWT) based on a forward problem suited for
multisource simulations. This forward problem, based on the wave equation, is solved in the frequency domain using a direct solver technique, leading to a very large core-memory requirement. The imaging problem (Tarantola, 1987) is built through a local minimization of the misfit function between recorded and synthetic data. The frequency-domain (FD) formulation of FWT was originally developed for 2D cross-hole acquisition surveys which involve wide-aperture propagations (Pratt and Worthington, 1990). Only a few discrete frequencies are required to develop a reliable image of the medium thanks to the wavenumber redundancy provided by multifold wide-aperture geometries. The lowest frequency and the starting model both play a critical role in the convergence of the minimization. Since full wave propagation modeling is a critical issue in FWT methods, a 3D optimal finite-difference stencil has been designed by Operto et al. (2007) that leads to 4 grid points per wavelength for accurate modelling, reducing the memory requirement when solving the large sparse linear system for each frequency we consider. Although present hardware configurations limit the domain dimensions, it remains unclear where the different bottlenecks of the approach lie, such as the degrading conditioning of the impedance matrix or the poor scalability when we increase the number of nodes. In this presentation, we shall provide some insights on the feasibility and relevance of 3D frequency-domain FWT for building high-resolution velocity models of isotropic acoustic media, with one application related to the SEG/EAGE Overthrust model, and we shall provide an analysis for isotropic elastic media.
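For reference, the misfit function minimized in such waveform tomography is usually the least-squares data residual (a standard formulation; the authors' exact definition is not given in this abstract):

\[ C(\mathbf{m}) = \tfrac{1}{2}\,\lVert \mathbf{d}_{\mathrm{obs}} - \mathbf{d}_{\mathrm{syn}}(\mathbf{m}) \rVert_2^{2}, \]

where m is the velocity model, d_obs the recorded data and d_syn(m) the synthetic data computed with the frequency-domain forward problem.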
-
Combining direct and iterative solvers for improving efficiency of solving wave equations when considering multi-source problems
Authors F. Sourbier, A. Haidar, L. Giraud, R. Brossier, S. Operto and J. Virieux
Frequency-domain full-waveform inversion (FWI) has been extensively developed during the last decade to build high-resolution velocity models (Pratt, 2004). One advantage of the frequency domain is that inversion of a few frequencies is enough to build velocity models from wide-aperture acquisitions. Multi-source frequency-domain wave modeling requires the resolution of a large sparse system of linear equations with multiple right-hand sides (RHS). In 3D geometries or for very large 2D problems, the memory requirements of state-of-the-art direct solvers preclude applications involving hundreds of millions of unknowns. In order to overcome this limitation, we investigate a domain decomposition method based on the Schur complement approach for 2D/3D frequency-domain acoustic wave modeling. The method relies on a hybrid direct-iterative solver. A direct solver is applied to the sparse impedance matrices assembled on each subdomain, hence reducing the memory requirement of the overall simulation. An iterative solver based on a preconditioned Krylov method is used to solve for the interface nodes between adjacent domains. A possible drawback of the hybrid approach is that the time complexity of the iterative part increases linearly with the number of RHS if a single-RHS Krylov subspace method is applied sequentially to each RHS. We mention that block-Krylov techniques or deflation techniques can be used in that case to partially overcome this effect. In the following, we introduce the domain decomposition method before illustrating its features with 2D and 3D simulations.
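In a textbook Schur-complement formulation (written here for orientation; the notation is not necessarily the authors'), ordering the unknowns into subdomain-interior (I) and interface (Γ) blocks gives

\[ \begin{pmatrix} A_{II} & A_{I\Gamma} \\ A_{\Gamma I} & A_{\Gamma\Gamma} \end{pmatrix} \begin{pmatrix} u_I \\ u_\Gamma \end{pmatrix} = \begin{pmatrix} b_I \\ b_\Gamma \end{pmatrix}, \qquad S = A_{\Gamma\Gamma} - A_{\Gamma I} A_{II}^{-1} A_{I\Gamma} . \]

The reduced interface system S u_Γ = b_Γ - A_{ΓI} A_{II}^{-1} b_I is what the preconditioned Krylov method solves, while A_{II}, block-diagonal over subdomains, is factored once by the direct solver and reused for every right-hand side.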
-
Seismic Imaging and the Road to Petascale Capacity: RTM and the Cell /B.E. Processor
Authors F. Ortigosa, J. M. Cela, M. Araya-Polo and R. de la Cruz
[...], on the other hand, is around the corner. Seismic Imaging is definitively a field in our industry where petascale capacity is needed. The question is not when this capacity will be widely available, but how. There are
several hardware processors and devices as candidates for the brain of the new generation of petascale supercomputers. The only thing they have in common is that all of them are difficult to program, and the programming will be different from the programming of today’s x86 processor generation. We believe that among all the hardware options, the Cell/B.E. processor has several characteristics that make it ideal for widely available petascale capacity. Besides the difficulty of programming the Cell, we present a benchmark for RTM between Cell and PowerPC processors. We show that using an early generation of the Cell, and a difficult kernel of a compute-intensive algorithm, we may expect almost one order of magnitude of performance increase.
-
Grid and Cloud Computing: Opportunities and challenges for e-Science
By F. Gagliardi
A new science paradigm has emerged in the last few years, referred to as electronic Science (e-Science). It extensively uses simulation techniques based on software modeling which run on
distributed computing infrastructures. In addition, it makes use of huge amounts of distributed and
shared data captured by instruments or sensors and/or stored in databases, analyzed to provide new
results for science. This distributed HPC and data environment allows sharing the acquired
knowledge, accessing remote resources and enabling worldwide scientific collaboration.
-
The Implications of Multicore Processors for High Performance Computing
With some of the largest supercomputing clusters on the planet, the seismic imaging industry
has a tremendous appetite for computing resources. Increasing demand for energy and
societal pressures for greener energy will only accelerate this growing need. It is therefore
clear that seismic imaging companies will be early adopters for the newest generation of
supercomputers that enable petascale computing and beyond – machines that are perhaps as
close as only months away. However, as this transition to petascale sweeps through the
industry, the industry will also be one of the first groups to come face-to-face with the
significant changes that will be required in order to go beyond petascale. The industry will
have to actively develop plans for how their business models and standard operations will change in order for practitioners to achieve the performance they require for their ever-growing technical challenges. The next few years will present very important challenges and
opportunities for high performance computing in research, industry and business. Petascale
computing and, eventually, “exascale” computing will bring the promise of capability to
deliver full solutions to some of the most challenging and complex issues facing the industry.
However, for well documented technology reasons, these new computing systems
architectures will be radically different in design from traditional high performance
computing platforms. For example, in response to growing technological obstacles, the
processor industry is moving down the multicore path. This development is driving a sea
change in the computer industry for which a new "Moore's Law" may be arising - dictating a
doubling of the number of cores per unit time. As more cores are squeezed on to a chip, the
old programming approaches will not be adequate to achieve the performance required by this
industry; and naive assumptions of linear scaling of performance with the number of cores
will be very wrong. Recent experience with multicore has identified key challenges which
will have to be overcome in order to realize the potential of the next generation of
supercomputing. These challenges include fundamental algorithm design; integration of novel
architectures with more traditional computational systems; management of the unprecedented
amounts of data which are now a key component in all high performance computing
activities; and the development, improvement and validation of new applications solutions
which address the full complexity of the problems which these novel architectures will make
tractable. This presentation will discuss how the industry will be impacted by these changes
and what practitioners can do to achieve the full potential of multicore-based petascale and
exascale supercomputers.
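One standard way to state the warning about naive linear-scaling assumptions is Amdahl's law (quoted here for context; it is not cited in the abstract):

\[ S(N) = \frac{1}{(1 - p) + p/N}, \]

where p is the parallelizable fraction of the run time and N the number of cores; even with p = 0.95, the speedup can never exceed 20x regardless of how many cores are added.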
-
Seismic Wave Propagation on GPUs
Authors A. Loddoch and W. R. Volz
In this work, we present the adaptation of a 3D Finite Difference acoustic wave propagation
code to GPU architecture using the NVIDIA CUDA framework. We demonstrate the general
concept of transforming a CPU-based code to fit the requirements of GPU computing. The
individual steps of this process are illustrated along with the necessary modifications to the
original program code and algorithm.
Problems that typically arise when porting a code to graphics cards, such as numerical stability due to reduced accuracy and memory availability/handling, are discussed, as well as the ability to use multiple GPUs simultaneously.
The described technique results in a GPU-based code that provides speedups of one order of
magnitude in execution time compared to the original CPU version.
-
Experiences with Seismic Imaging on GPUs
Authors S. Morton, T. Cullison and P. Micikevicius
For years graphics co-processing units (GPUs) have been growing in performance capabilities
faster than CPUs, and have recently surpassed them. Simultaneously the hardware and
corresponding APIs have grown more generally applicable to a wide range of computational
tasks. This has fostered the use of GPUs for general programming.
In this abstract we will describe our experiences monitoring, testing and ultimately using
GPUs to run various seismic imaging kernels. The performance improvement of GPUs over
CPUs varies with algorithm but is high enough that we've purchased a substantial GPU-based cluster. In our workshop talk, we will discuss our production experiences running code
on this system.
-
NVIDIA Tesla, a way to dramatically speed up seismic processing and reservoir simulation applications
Massive, fine-grained parallel computing capabilities will be needed to help researchers
effectively use petascale computing environments. In particular, petascale computing will
gain performance speed from the parallel processing capabilities of graphics processing units
(GPU). The concept behind the general-purpose GPU (GPGPU) is simple: Use the massively
parallel architecture of the graphics processor for general-purpose computing tasks. Because
of that parallelism, ordinary calculations can be dramatically sped up.
GPGPU is being used as a high-performance coprocessor for oil and gas exploration and other
applications—and it's much cheaper than a supercomputer. Scientists and researchers benefit
from the power of the massively parallel computing architecture. This availability of
supercomputing will unlock the answers to previously unsolvable problems in systems
ranging from a workstation to server clusters.
Using a GPU as a calculation unit may appear complex. It is not about dividing the task into a handful of threads, as with a multicore CPU, but rather about dividing it into thousands of threads.
In other words, trying to use the GPU is pointless if the task is not massively parallel, and for this reason it can be compared to a supercomputer rather than a multi-core CPU. An application to be run on a supercomputer is necessarily divided into an enormous number of threads, and a GPU can thus be seen as an economical version of one, without its complex structure.
NVIDIA CUDA is a software layer intended for stream computing, together with an extension of the C programming language, which allows certain functions to be marked for processing by the GPU instead of the CPU. These functions are compiled by a CUDA-specific compiler so that they can be executed by the GPU’s numerous calculation units. Thus, the GPU is seen as a massively parallel co-processor that is well adapted to highly parallel algorithms such as those used in seismic processing and reservoir simulation.
The NVIDIA Tesla product line is dedicated to HPC. The Tesla Computing System is a slim 1U form factor which easily scales to solve the most complex, data-intensive HPC problems. The Tesla Computing System is equipped with four new-generation NVIDIA GPU boards, IEEE 754-compliant double-precision floating point, and a total of 16 GB of video memory. The rack is used in tandem with multi-core CPU systems to create a flexible computing solution that fits seamlessly into existing IT infrastructure.
-
Hardware Hybrid Computing solutions
Parallel HPC applications benefit from multi-core CPU technology and have been able to multiply the computation density by a factor of 2 to 4, and later by 8. This improvement is not enough compared to the computation requirements of today’s applications. This is why people have been looking for new hardware and specialized processors which could give applications gains from 20 up to 100 times.
Specialized processors like GPUs have improved performance at a greater pace than Moore's law predicts. They started 10 years ago with a technology using 350 nm, 5 million transistors at 75 MHz, and now use 55 nm, 700 million transistors at 800 MHz, being able to deliver 512 GFlops, or more than 3.5 GFlops/Watt.
This leads to improvement factors of 1.7x/year in transistor count, 1.3x/year in clock speed, 2.0x/year in processing units and 1.3x/year in memory bandwidth. Using such powerful dedicated processors together with multi-core CPUs in a highly parallel environment highlights the need to use this heterogeneous, hybrid computing environment in the most efficient way.
This hardware environment exists and can be used today. The first challenge is on the software development side. Development tools need to integrate heterogeneous programming as well as multi-core support from the core of their language, being able to support code generation on different processor types as well as handling asynchronous behaviors. This requires compilers and libraries that support this and are designed or extended for it. Obviously, those tools need to support multiple hardware platforms in order to lead to some standards.
The second key challenge is the evolution of the buses and bandwidth linking together the different cores of CPUs and GPUs, and the way they talk to each other. Fusion projects will address those evolutions in the future by defining new architectures around those processors to improve the data flow between them, which will be the key to using all the power available. Crossbar memory controllers will allow GPUs to talk to each other very quickly without breaking parallelism. The HyperTransport bus will improve communication between GPUs and CPUs. Finally, multi-core GPUs and CPUs on the same die will increase compute density even more.
Different benchmarks and application codes have already been used to demonstrate the benefits of such an architecture. We will present SGEMM results as well as different algorithms. The results will highlight the fact that performance is currently affected by copying data in and out of the GPU, and that finer tuning allows huge jumps in performance. We will also show that restructuring algorithms originally implemented for the CPU to fit the GPU architecture adds even more performance gains.
-
Characterizing controls of geothermal systems through integrated geologic and geophysical studies: Developing recipes for successful exploration of conventional and unconventional geothermal systems
Authors J. Faulds, M. F. Coolbaugh, G. S. Vice and V. Bouchot
Although conventional geothermal systems have been successfully exploited for electrical
production and district heating in many parts of the world, exploration and development of
new systems is commonly stymied by the risk of unsuccessful drilling. Problems include
drilling of hot, relatively dry wells with low flow rates, decreasing temperatures with depth as
wells penetrate relatively thin and shallow geothermal aquifers (overturn), and wells with
reasonable flow rates but relatively low temperatures. Due to the high cost of drilling, such
problems can effectively preclude geothermal exploration. Proposals to generate enhanced
geothermal systems (EGS) by artificially stimulating hot dry wells, commonly through
mechanical hydro-fracturing of rocks, have therefore gained in popularity.
-
Lithosphere tectonics and thermo-mechanical properties: an integrated modeling approach for EGS exploration in Europe
By F. Beekman
For geothermal exploration and production of enhanced geothermal systems (EGS),
knowledge of the thermo-mechanical signature of the lithosphere and crust is important to
obtain critical constraints for the crustal stress field and basement temperatures. The stress
and temperature fields in Europe are subject to strong spatial variations which can be linked to polyphase extensional and compressional reactivation of the lithosphere, in different modes
of deformation. The development of innovative combinations of numerical and analogue
modeling techniques is key to thoroughly understand the spatial and temporal variations in
crustal stress and temperature. In this paper we present an overview of our advances in developing and applying analogue and numerical thermo-mechanical models to quantitatively assess the interplay of lithosphere dynamics and basin (de)formation. Field
studies of kinematic indicators and numerical modeling of present-day and paleo-stress fields
in selected areas have yielded new constraints on the causes and the expression of
intraplate stress fields in the lithosphere, driving basin (de)formation. The actual basin
response to intraplate stress is strongly affected by the rheological structure of the underlying
lithosphere, the basin geometry, fault dynamics and interplay with surface processes.
Integrated basin studies show that rheological layering and strength of the lithosphere plays
an important role in the spatial and temporal distribution of stress-induced vertical motions,
varying from subtle faulting to basin reactivation and large wavelength patterns of
lithospheric folding, demonstrating that sedimentary basins are sensitive recorders to the
intraplate stress field. The long lasting memory of the lithosphere, in terms of lithospheric
scale weak zones, appears to play a far more important role in basin formation and
reactivation than hitherto assumed. A better understanding of the 3-D linkage between basin
formation and basin reactivation is, therefore, an essential step in research that aims at
linking lithospheric forcing and upper mantle dynamics to crustal vertical motions and stress,
and their effect on sedimentary systems and heat flow. Vertical motions in basins can
become strongly enhanced, through coupled processes of surface erosion/sedimentation
and lower crustal flow. Furthermore patterns of active thermal attenuation by mantle plumes
can cause a significant spatial and modal redistribution of intraplate deformation and stress,
as a result of changing patterns in lithospheric strength and rheological layering. Novel
insights from numerical and analogue modeling aid in quantitative assessment of basin and
basement histories and shed new light on tectonic interpretation, providing helpful
constraints for geothermal exploration and production, including understanding and
predicting crustal stress and basin and basement heat flow.
-
Geophysical exploration methods at European sites
By D. Bruhn
Most geophysical exploration methods have been developed for the oil and gas industry, and
ever more sophisticated tools and refinements in the different approaches are designed to
solve specific problems associated with the detection and characterisation of hydrocarbon
reservoirs. The exploration of geothermal resources has profited greatly from these developments; however, the methods cannot always be directly transferred from oil and gas to hot water and/or steam. First of all, the physical properties of H2O differ from those of
hydrocarbons, resulting in differing responses of physical measurement methods. Secondly,
geothermal reservoirs can be found in highly varying geological environments, mostly
associated with volcanism, where hydrocarbons are usually not present. Thirdly, the
economically most interesting geothermal reservoirs are much hotter than any oil or gas
reservoir. At the moderate temperatures comparable to those of hydrocarbons many of the
advanced exploration methods are simply cost-prohibitive, as the economic potential of a
medium-enthalpy geothermal reservoir is much lower than for an oil or gas well. For these
reasons, some of the existing geophysical methods have to be adapted to meet the needs of
geothermal exploration or different methods have to be developed and applied.