70th EAGE Conference and Exhibition - Workshops and Fieldtrips
- Conference date: 09 Jun 2008 - 12 Jun 2008
- Location: Rome, Italy
- ISBN: 978-94-6282-104-0
- Published: 09 June 2008
-
Integrated geophysics in challenging areas: from joint planning to joint interpretation
Authors P. Dell’Aversana, I. Giori, M. Gilardi, G. Spadini and R. Puricelli
In 2007 Eni acquired a multidisciplinary geophysical survey in a desert area located in southwestern Libya, combining seismic and non-seismic geophysical methods.
The investigated area is located in SW Libya and represents the erosional remnant of a Palaeozoic intra-cratonic basin on the Saharan Platform of North Africa. The sedimentary fill is mainly Palaeozoic to Mesozoic in age and reaches a thickness of about 4000 m in the basin centre.
The project was aimed at reducing exploration risk by using different geophysical disciplines. Non-seismic data were acquired along previously shot 2D seismic lines.
Densely spaced gravity stations were located along these seismic lines (called “central lines”). The acquisition grid was progressively coarsened along lines parallel to the central lines, so that it could be integrated with the regional gravity data already available in the area.
Densely spaced magnetotelluric data were acquired along the same seismic/gravity lines, for a total of 50 km. The distance between successive MT stations was set at 100-200 m, depending on operational difficulties.
High ground resistance was the main factor affecting data quality in the investigated area. It was reduced by injecting salty water into deep holes where the electrodes were positioned. In total, 254 MT stations were acquired with a satisfactory signal-to-noise ratio. Reliable inversion results (Figures 1 and 2) were obtained and checked against the resistivity logs available in the area.
-
Mapping geothermal reservoirs using broadband 2-D MT and gravity data
Authors K.-M. Strack, N. Allegar, G. Yu, H. Tulinius, L. Adam, Á. Gunnarsson, L. F. He and Z. X. He
Geothermal energy is playing a larger role as an alternative energy source for both electricity
generation and for space heating. Our recent magnetotelluric (MT) and gravity surveys in
Iceland and Hungary have both characterized known geothermal reservoirs and identified new
drilling opportunities. The success of these surveys has resulted in additional 2D and 3D MT
and gravity data acquisition and the onset of a drilling program to evaluate the identified
geothermal potential.
Higher temperatures and salinity of the pore water, as well as the concomitant increased rock
alteration associated with geothermal areas, often contribute to a decrease in the bulk
resistivity in a rock mass. The zones of low resistivity that are associated with geothermal
reservoirs can be detected by electromagnetic techniques such as the MT method.
We used MT/AMT measurements to acquire natural time-varying electrical and magnetic fields at frequencies from 10,000 Hz down to 0.001 Hz. The EM field propagates into the Earth as coupled electrical and magnetic fields, and these fields are commonly represented in the frequency domain as a four-element impedance tensor. The characteristics of the MT resistivity curves are analyzed to extract structural information (associated with resistivity contrasts) that is used to determine high-permeability zones and upflow zones of hydrothermal systems (Malin, Onacha, and Shalev, 2004).
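For reference, the four-element impedance tensor mentioned above relates the horizontal electric and magnetic field components in the frequency domain; a standard textbook statement (not quoted from this abstract) is

$$
\begin{pmatrix} E_x \\ E_y \end{pmatrix} =
\begin{pmatrix} Z_{xx} & Z_{xy} \\ Z_{yx} & Z_{yy} \end{pmatrix}
\begin{pmatrix} H_x \\ H_y \end{pmatrix},
\qquad
\rho_{a,ij}(\omega) = \frac{1}{\omega\mu_0}\,\lvert Z_{ij}(\omega)\rvert^{2},
$$

where the apparent resistivities $\rho_{a,ij}$ derived from the tensor elements are what the MT resistivity curves display.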
To complement the MT data, gravity surveys were acquired along the MT survey lines to
assist in detecting fault systems below the surface. Fault system information can be used to
analyze and to understand groundwater channels and water flow directions. At the same time,
gravity data may be used to interpret the subsurface and to aid in locating prospective heat
sources. Integrating the MT and gravity data reduces the intrinsic ambiguity of either dataset
and produces a more robust interpretation.
-
Feasibility study of electromagnetic, gravimetric and aeromagnetic methods in sub-basaltic settings
Authors R. F. Reynisson, S. Fanavoll, G. Waag and J. Ebbing
The objective of this study was to investigate the feasibility of using gravity, magnetic, magnetotelluric (MMT) and electromagnetic (EM) data in the sub-basalt imaging problem.
The Møre volcanic margin is part of the mid-Norwegian margin and consists of a central area of deep, NE-SW-trending Cretaceous basin. The basin is flanked to the west by the Møre Marginal High, which is characterized by thick Early Eocene basalt flows overlying an unknown substrate. Numerous previous experiments have demonstrated that standard seismic acquisition and processing techniques are not capable of characterizing the volcanics or imaging beneath them.
-
CSEM data in integrated data analysis
By K. N. Madsen
Offshore hydrocarbon exploration utilizes various remote sensing techniques in order to
increase the knowledge about the subsurface before drilling decisions are made. Seismic
exploration is by far the most common tool and uses acoustic and elastic waves to map
boundaries between subsurface layers with contrasting P- and S-wave velocities. Seismic data
can provide relatively high-resolution information about geological structures and possible
hydrocarbon traps.
-
Effects of Fluid Saturation on Seismic AVA and CSEM Response in the Norwegian Sea
Authors L. MacGregor, J. Walls, R. Shu, N. Derzhi and P. Harris
One of the most promising trends in geophysical exploration today is the integration of 3D
seismic and controlled source electromagnetic (CSEM) surveys. The combination of these
data sets offers explorationists a powerful tool for risk reduction. Seismic data provides the
detailed depth and structure information, while CSEM provides electrical resistivity for fluid
discrimination. The combination of the two can greatly improve our ability to invert for
porosity, lithology, and hydrocarbon saturation. However, the key to inversion is forward
modeling. In this paper we will show how well logs and rock physics can be used to provide
the input for modeling both seismic amplitude versus offset (AVO) and CSEM radial
amplitude versus source-receiver spacing.
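As a minimal illustration of the rock-physics link between fluid saturation and the bulk resistivity that CSEM responds to, the sketch below uses Archie's law; the parameter values are generic textbook assumptions, not taken from the paper's well logs.

```python
# Illustrative sketch: Archie's law maps porosity and water saturation to
# bulk resistivity, the property CSEM senses. Parameters (rw, a, m, n) are
# generic textbook assumptions, not values from the paper.
def archie_resistivity(phi, sw, rw=0.3, a=1.0, m=2.0, n=2.0):
    """Bulk resistivity (ohm-m) from Archie's law: Rt = a*rw / (phi^m * sw^n)."""
    return a * rw / (phi ** m * sw ** n)

phi = 0.25                              # porosity (fraction)
for sw in (1.0, 0.5, 0.2):              # water saturation: brine -> hydrocarbon
    print(f"Sw = {sw:.1f}: Rt = {archie_resistivity(phi, sw):6.1f} ohm-m")
```

Resistivity rises sharply as hydrocarbon saturation increases (Sw drops), which is the contrast CSEM exploits for fluid discrimination.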
-
Onboard prestack WEM on WAZ data – A new breakthrough
Authors J. Kapoor, D. Wilson and T. Dittrich
The past couple of years have seen a tremendous increase in the acquisition of wide
azimuth (WAZ) surveys that provide improved subsalt imaging. These surveys can
acquire data three times faster than conventional narrow azimuth (NAZ) surveys and at
the same time increase the fold of the data by a factor of five. Therefore we are looking at
processing fifteen times more data while the need for fast delivery of imaged data is
accelerating due to lease expiry, lease sales, drilling decisions and other financial
conditions.
-
In Field Processing - A must for high productivity vibroseis acquisition
Authors J. Meunier, T. Bianchi and J. J. Postel
Various techniques have been proposed to increase vibroseis productivity. The most popular of them are Slip Sweep, introduced by PDO, and HFVS, introduced by ExxonMobil. Reducing the extra noise that these techniques generate requires extra processing, and their more complex operations include the collection of source data that can be much heavier to handle. A way to prevent these extra tasks from adding to the cycle time between acquisition and interpretation is to perform them in the field as soon as possible after the data have been acquired.
-
Improving Image by Anisotropic Migration - Mississippi Canyon, Gulf of Mexico
Seismic anisotropy refers to seismic waves traveling with different velocities at different propagation angles, usually in consolidated, shale-prone areas such as the Gulf of Mexico and West Africa. A single imaging velocity for any given subsurface location (regardless of propagation angle) has commonly been used in the industry and is called Isotropic Migration. Images obtained with Isotropic Migration are often mis-positioned, resulting in extra cycle time needed to calibrate and correct isotropic images for prospect evaluation and well planning.
-
Non-linear 3D tomographic inversion of residual moveout in depth Kirchhoff-migrated CIGs
Authors R. Baina, F. Adler, A. Soudani and J.-B. Richard
It is well known that depth imaging brings viable solutions to complex problems and helps interpreters to quantify and understand the architecture of the reservoir under study. This recognition leads to pressure to depth-image larger and larger areas and to shorten delivery times. However, the success of a depth imaging project requires testing of different methodologies and relies on several trials and errors during depth velocity model building. This means the use of a costly iterative sequential scheme of full PSDM followed by linearized tomographic inversion.
We present here a new method for depth velocity model building which we believe will help us achieve fast turnaround of depth imaging projects and give us full flexibility for testing and adjusting our model parameterisation and inversion settings.
-
Rapid structural framework generation for reservoir property propagation and modeling in complex structures
In the last few years, computer-assisted tools for horizon picking have improved in speed, usability and picking quality, so that interpreters' productivity for this task has increased enormously; some have quoted productivity gains of three months to one week. In addition, these tools often produce auxiliary outputs of pick quality or stratigraphic attributes. More recently, attention has focused on helping improve the interpreter's productivity when performing fault interpretation and framework building. For these tasks
software developments have focused on increased use of coherency and volume curvature.
These attributes are then used to inform automatic fault extraction before detailed
interpretation or fault tracking while interpreting so that fault surfaces are generated with
significantly less manual picking. The result of reducing the effort needed to produce fault
and horizon surfaces has been a huge increase in the number of these surfaces interpreted
within a seismic survey. Semi-automated Structural Framework building then enables these
surfaces and surfaces developed from sub-seismic well correlation to be incorporated into a
very detailed earth model that can be easily populated with reservoir parameters of pressure,
porosity, saturation and permeability. Such models are suitable for use in reservoir simulation
or detailed well planning. The ability to perform all these activities on a single data
representation with computer assistance at each step has removed many of the bottlenecks in
seismic data interpretation and the integration of well and seismic data interpretation.
-
Automatic interpretation of salt bodies by iterative image segmentation
Authors A. Halpert, J. Lomask, B. Clapp and B. Biondi
Velocity model building is the most human-intensive component of the depth-imaging
process, and it is often the bottleneck when trying to reduce the cycle time of large seismic
imaging projects. For near-salt or sub-salt imaging the interpretation of the salt-body
geometries can be extremely time consuming. Current automatic methods based on horizon
tracking are prone to errors, in particular when the salt boundaries are poorly imaged. Lomask
et al. (2007) have proposed an automatic method to interpret salt boundaries that segments the
image cube by solving a global optimization problem, and thus it is more robust than local
methods based on horizon tracking. We apply the image-segmentation method to the iterative
velocity-model building process. We show how it can be applied to a conventional sediment
and salt flooding procedure and we discuss how to use the boundaries picked at the previous
iterations as a constraint to the iterative solution and thus make the method more reliable.
-
Depth on Demand – Fast Beam Migration and Integrated Visualization for Rapid Velocity Depth Model Building
Authors K. Schleicher, J. Lima, T. Bird and P. Wijnen
Here we present a set of tools that we find very well suited to this challenge. Our beam migration approach allows us to image large datasets rapidly and accurately, with a large migration aperture and a full dip range. The beam migration process is separated into two main components: a dipscan process, performed once for the whole survey, whose output is stored on disk for later use, and a fast migration step carried out ‘on demand’ or through multiple model iterations. With modern computer hardware this imaging step can be performed in almost real time. We also present beam-based imaging techniques that allow more rapid model building in salt regions through velocity-discriminating filtering in the imaging process.
-
Reducing time to interpretation decisions with RTM
In this paper, we show some history as to why mixed migration algorithms have been used in
the model building portion of the imaging sequence. We go on to show that by using a single
algorithm in both the model building and final imaging phases of a project, the cycle time can
be collapsed. In this paper, a two way wave equation algorithm (RTM) is proposed as a tool
to help significantly reduce total project turnaround.
-
Looking ahead: cycle time reduction for GOM subsalt imaging projects
There has been a constant drive to reduce cycle time for depth imaging projects in the Gulf of Mexico. In recent years, facing impending lease expirations, upcoming lease sales, and tight drilling schedules, oil companies have continued to compress the cycle time of Gulf of Mexico depth imaging projects. Seismic companies have increasingly higher stakes in cycle time reduction as well, as a result of the surge of regional non-exclusive wide-azimuth data acquisition in the Gulf of Mexico's deep waters. To quickly recoup their companies' investments of hundreds of millions of dollars, processing geophysicists in seismic companies are under increasing pressure to process large amounts of data in record times. This is in addition to simultaneous demands to apply the most current technologies, such as 3D SRME and RTM.
-
Rapid turnaround for processing and analysis of frequent time-lapse surveys
Authors J. P. van Gestel, R. Clarke and O. I. Barkved
We present a case study of fast-turnaround processing and analysis of time-lapse seismic data acquired using the Life of Field Seismic (LoFS) permanent receiver array. We combine a small computer cluster, a broadband connection to the offshore facilities and script-based processing software into an automated workflow from acquisition to interpretation. This results in delivery of time-lapse processed volumes to the interpreters within a week after the last shot. By automating the generation of time-lapse difference volumes and extractions, we deliver fast-track interpretation of the main time-lapse effects within hours of data arrival in Stavanger. These main time-lapse effects are captured in graphics files, which are all automatically linked in HTML documentation. This automation and standardization has allowed the main workload of the geophysicist to move from data manipulation to interpretation and integration of the collected data.
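A minimal sketch of the kind of scripted differencing step such an automated workflow chains together is shown below; the file names, array layout and quicklook format are illustrative assumptions, not details of the authors' system.

```python
# Minimal sketch of an automated time-lapse differencing step. File names,
# array layout and plotting choices are illustrative assumptions.
import numpy as np
import matplotlib.pyplot as plt

def difference_volume(base_path, monitor_path):
    """Load two co-located seismic volumes and return monitor - base."""
    base = np.load(base_path)            # shape: (inline, xline, time)
    monitor = np.load(monitor_path)
    assert base.shape == monitor.shape, "surveys must be co-registered"
    return monitor - base

def save_quicklook(diff, out_png, inline=0):
    """Write one inline section of the difference volume to a PNG for HTML reports."""
    plt.imshow(diff[inline].T, cmap="seismic", aspect="auto")
    plt.colorbar(label="amplitude difference")
    plt.savefig(out_png, dpi=150)
    plt.close()

if __name__ == "__main__":
    diff = difference_volume("base.npy", "monitor.npy")
    save_quicklook(diff, "diff_inline_0.png")  # image then linked from HTML
```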
-
Cycle-Time Challenges for Geophysics in the Evaluation of a Tight Gas Pilot Project – Example from South Sulige Permit, Sulige Field, Ordos Basin, China
By J. Suiter
Historically, well targeting in tight gas reservoir plays has focused on “sweet spots” (e.g. high net-to-gross, high Sg, natural fractures, etc.) that highlight areas of increased productivity. On Sulige, reservoir and productivity predictions have rarely been straightforward (no natural fracturing, complex diagenetic history), and the wells already producing on North Sulige, drilled by PetroChina, more often than not need mechanical stimulation (i.e. hydraulic fracturing) to maintain prolonged production in the low-permeability fluvial sandstones.
-
Rethinking the Seismic Value Chain, excellence in terms of cycle time and output quality
By G. Berkhout
Excellence in the seismic value chain will depend on the quality of the involved
specialized tools and skills (abilities) as well as the capability of organisations to
combine these abilities to achieve maximum value in terms of speed and quality.
-
How the use of Visualization and Virtual Reality evolved into a major business impact
Norsk Hydro started its Virtual Reality research activity in 1997, developing an in-house application in collaboration with Christian Michelsen Research. The application was sold and commercialized by Schlumberger as Inside Reality, but development of the two applications has since diverged. In HydroVR, important application tools were developed in close collaboration with the users, securing a tight linkage to daily workflows. The development and use of HydroVR has been regarded as a success, but it has mainly been used by Production and Research.
-
The Evolution of Visualization
By D. Fanguy
The subject matter of this workshop is Visualization; is there anything left to do? The short answer is yes, but we need to review the evolution of Visualization to truly reach this conclusion. This paper is a brief summary of the past twenty years and how we came to build Visualization Centers in the Oil & Gas industry. The term Visualization replaced Virtual Reality a few years ago, and now more and more of us use the term Collaboration. Whatever word we use, we all know that over time someone will change it to the “buzz word” of the day. The important message, however, is not how we describe these centers but how we use them. The ever-changing technologies we deploy in these centers have in fact changed the way we use them. This is one point explained in more detail throughout this paper, along with why we cannot be sure exactly what we will need to do in the future. So, as stated above, the answer to the question “is there anything left to do?” is always going to be “yes”, but how we use visualization will evolve as well.
-
Covisualization: Lowering barriers-to-entry for multidisciplinary 4D data-based decision making
Authors R. Mayoraz, G. Brew and A. Paradis
Time-lapse seismic acquisition and other temporal monitoring initiatives are becoming
increasingly commonplace. These data, and much additional time-variant information, can be
critical to the decision-making process for reservoir development of mature assets. One of the
best ways of rapidly reconnoitering and analyzing these data is through comprehensive
visualization solutions.
However, current workflows are not conducive to simultaneous visualization of data from
disparate software applications. Time-lapse seismic data, reservoir simulation output, and
production histories are produced in a wide variety of software packages with different
formats and spatial arrangements. Visualization in the same virtual space and time can be an
extremely difficult proposition.
The solution we present makes the import and integration of these data, from many different
sources, an easy and potentially automatic process. By lowering the barriers-to-entry to
visually integrate all the data needed in the reservoir development process, better, faster, and
more accurate decisions can be made.
-
Basin Scale Exploration – A New Frontier for Desktop Visualization
Authors N. Purday, D. M. Roberts and M. Cole
Visualization technology has changed dramatically over the past few years, moving
from a high end tool used only in visualization rooms by major oil companies to a
desktop solution applied to a significant range of exploration and field development
opportunities. Visualization has become an enabling technology to both speed seismic
interpretation and integrate geologic and geophysical data much more effectively than
ever before. The rapid changes in adoption are being driven by the requirement for a
significant reduction in exploration risk, cost reduction and cycle time improvement.
This paper will focus on thinking beyond current volume interpretation workflows to
demonstrate how visualization can play a vital role in Basin Scale exploration and
interpretation.
-
Sismage DisplayWall: a new environment for complex Seismic Data Interpretation
By N. Keskes
For more than ten years, “DisplayWall” technology has been growing very fast and has become widely used in various domains: geosciences, astronomy, medicine, etc. The academic and industrial research around this topic is also very active.
The main advantage of this technology is the ability to display multiple contents in huge, high-resolution windows.
-
Utilizing the Benefits of Virtual Environments
Authors T. Holtkämper, A. Dressler and M. Bogen
Over the last ten years, many oil & gas companies have installed Virtual Environments in order to optimize, complement, or replace steps in their E&P workflow. The success of the deployed Virtual Reality technology varies from company to company, and does not always fulfill expectations.
The experience gained in the VRGeo Consortium also indicates that the potential of VR technology is not yet used to its full extent. Only if VR technology can clearly show its benefits over existing technology will users be willing to adopt it and integrate it into their daily workflow. In this respect, we describe by means of illustrative examples what can be done to better utilize the benefits of Virtual Environments.
-
Sub-Surface Visualisation in BP Technology – People – Business Drivers
Authors K. Hansch and J. Thomson
BP has been an early adopter of the use of visualisation in sub-surface workflows. Over the
past 15 years the implementation of visualisation technology and the development and sharing
of skills and knowledge have led to the current routine use of visualisation in our daily work.
Visualisation is no longer seen as a discipline in itself but as a tool to help in the delivery of
our business objectives. Dedicated effort was required to reach the current level of user
engagement. A large number of lessons have been learnt during the implementation of
visualisation into the sub-surface workflows.
-
Digital Oilfields: Real-time Decisions
The state of the art in visualization today is embodied in the real-time aspect of group
decision making. Real-time group decision making environments are used in many
applications in industries including engineering analysis and design in manufacturing,
government command and control for disaster response, drug design in pharmaceutical
companies, and immersive environments in academic research. However, in no industry is
the applicability and return on investment more clear than in the exploration and production
portion of the oil and gas industry.
-
The Value of Visualization in Exploration and Production: Anecdotal Evidence and Quantitative Data
Authors G. A. Dorn, G. S. Pech, K. Gruchalla and J. Marbach
Since the introduction of large-scale visualization systems in the energy industry over a
decade ago, discussion has focused on the relative benefits, if any, of conducting typical
exploration and development tasks in a large visualization environment vs. a desktop display.
The only industry specific information with regard to benefits has been anecdotal in nature.
Five human performance studies have been conducted over the last five years to quantify the
amount of benefit achieved in a set of directional well path planning problems using large-scale
visualization. These studies have demonstrated that significant, quantifiable
improvements in efficiency and accuracy of results are achieved using large-scale
visualization environments for design engineering tasks, such as well path planning.
-
Enterprise-class Virtual Environments Interoperability for the Upstream Industry―Promise or Peril
By E. J. Dodd
The exploration and production (E&P) enterprise is undergoing a tectonic shift within its information and communication technology (ICT) ecosystem. E&P companies need to better utilize their existing assets, increase their resource portfolios and bridge the looming knowledge gaps. In particular, the E&P industry needs to take an active role in defining and supporting open and free standards that integrate disparate data sources into enterprise-class, secure Virtual Worlds. This paper briefly discusses emerging ICT research and innovation around the evolution of Virtual Worlds into Virtual Environments: an evolution from today's classical interaction and display paradigm to the fully integrated global enterprise using the 3-D Internet.
-
NVIDIA Advanced Visualization Solutions for Oil & Gas market
The world is estimated to hold about 940 billion barrels of undiscovered oil and natural gas
resources, much of it in remote and difficult to reach places, such as deep water, deserts, and
arctic environments. Oil & gas companies are looking for technologies to help increase
accuracy of exploration and production, while reducing risks and costs. Efficient and fast data
interpretation amongst teams of specialists is key. Data have to be visualized in maximum detail and be shared within a small group, with a large audience or with remote colleagues.
A standard 19" LCD monitor can display up to 1.9M pixels at 1600 x 1200 resolution, but the image is too small to be shared, too small to interpret efficiently, and it takes far too long to zoom in and out.
A 30" LCD can display up to 2.9M pixels at 2560 x 1600 resolution, which is great for two or three engineers working together. But targeting a 15 frames per second animation, for fluid seismic interpretation or reservoir simulation display, means that the system has to handle 43.5M pixels per second. Can it be done on a standard workstation? Can you even drive a larger display at a faster frame rate, for increased efficiency, from a single system?
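The 43.5M pixels per second figure follows directly from the stated pixel count and frame rate; a quick arithmetic check using the abstract's own numbers:

```python
pixels_19in = 1600 * 1200       # 1,920,000 pixels, the "1.9M" quoted above
pixels_30in = 2.9e6             # pixel count for the 30" panel as stated above
fps = 15                        # target animation rate (frames per second)

throughput = pixels_30in * fps  # pixels the system must deliver each second
print(f"{pixels_19in / 1e6:.1f}M pixels on the 19-inch panel")
print(f"{throughput / 1e6:.1f}M pixels/s required at {fps} fps")  # -> 43.5M
```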
-
HoloVizio: The Next Generation of 3D Oil & Gas Visualization
Authors T. Balogh and P. T. Kovács
We present the HoloVizio system design and give an overview of Holografika's approach to 3D displaying. The patented HoloVizio technology uses a specially arranged array of optical modules and a holographic screen. Each point of the holographic screen emits light beams of different color and intensity in various directions. With proper software control, light beams leaving the pixels propagate in multiple directions, as if they were emitted from points of 3D objects at fixed spatial locations. We show that direction-selective light emission is a general requirement for every 3D system, and we show the advantages of light-field reconstruction over the multiview approach. We describe the 10 Mpixel desktop display and the 50 Mpixel large-scale system, which enables collaborative work in real 3D, surpassing the limitations known from stereoscopic systems. We cover the real-time control issues of high pixel-count systems with the HoloVizio software environment and describe concrete developments targeting 3D oil & gas visualization applications.
-
Improved asset management with network-centric visualization
By Y. Nir
Today’s oil and gas industry is faced with a growing need to not only find and explore new
energy reserves, but also to manage existing assets more efficiently and in an integrated way.
This involves multiple teams and people with various backgrounds and skills working
together to share information, collaborate and make faster and better decisions. The key issue
in this is usually not the collection of data but the amount of available data that is ever
increasing. This presents a considerable challenge to companies striving to exploit this
information for competitive advantage.
This paper explores how recent breakthroughs in professional visualization can help improve
the collaboration process by visualizing and sharing various types of data, and how this
ultimately leads to better asset management.
-
Regional Geologic Visual Integration Offshore Brazil
Authors K. P. Boyd and D. M. Roberts
Recent advances in the power of 64-bit PC hardware, together with the tremendous power of modern visualization software applications, now make it possible to visualize datasets at basin scale and even continent scale.
-
Open Inventor and Avizo: commercial cross-discipline visualization tools
By F. Gamba
Mercury Computer Systems provides commercial visualization solutions for a wide selection of markets such as Medical, Aerospace, Oil and Gas, Fluid Dynamics and Materials Science. In order to provide effective cross-discipline solutions, Mercury leverages its know-how in core graphics technology.
Modern GPUs supply the computational power to take a step forward in 3D volume rendering quality and interactivity. After several years of academic research, new techniques are becoming available in commercial solutions such as Open Inventor by Mercury and Avizo. Extended usage of computational clusters with thousands of nodes allows the generation of massive datasets that exceed the hardware resources of the single workstation used for visualization. To overcome these limitations, smart management of GPU/CPU memory and CPU/GPU computational power becomes crucial. A Large Data Management (LDM) engine allows the user to efficiently visualize hundreds of GB of seismic data and reservoir models with hundreds of millions of cells. Mercury recently introduced a new modular visualization framework, called Avizo, which brings all the Open Inventor technology to the end-user level. Avizo is the best tool to experience cross-discipline visualization within the Oil and Gas workflow.
-
The Future Of Visualisation - Steering Through The Fog
By H. Lauferts
The vision of future visualisation is an environment where technology provides the means to
experience data, subsurface models and production facilities as if they were part of the real
world.
Visualisation should utilize our natural senses as in the real world: visibility in three
dimensions and a virtual reality that can be felt and touched.
We recognize the technology elements that lead the way to this vision: touch interfaces,
gesture control, holography and autostereoscopy, to name a few. These tools help us steer
through the fog and may be part of the solution to reach our vision. We do not know how
much time it will take to reach this vision, but we are convinced that given our current
business challenges we can’t wait for the developments to be made for us. We have to
promote, sponsor, encourage and actively steer our stakeholders on the road to future
visualisation.
-
The Human Factor in Interpretation and Visualisation
By R. Gras
E&P software has rapidly evolved from the early interpretation workstations of the 1980s-90s to a sophisticated, integrated and unified system for interpretation, visualisation and, increasingly, virtual or augmented reality. However, relatively little effort has been spent investigating
how people, either teams or individuals, interact with these technologies towards realising the
value for their organisations. In recent years much emphasis has been placed towards
enabling team-based collaborative work processes within visualisation centers (Collaborative
Visualisation Environments, henceforward abbreviated as CVEs), but a simple truth remains
that specific core tasks in E&P are performed primarily by individuals. At risk of
generalisation, whereas Development and Production are predominantly team-based work
processes, Exploration on the other hand is a task that relies heavily on an individual’s skills
for discovery. The E&P enterprise that recognizes the individual’s relevant skills and provides
an environment for both the individual as well as teams to perform optimally gains a
significant competitive advantage. Additional efforts are needed to specify and provide
visualisation environments for individuals or small teams geared towards Exploration.
-
High-Order Acoustic Scheme for Wave Propagation Modeling
Authors I. Tarrass, A. C. Bon and P. Thore
We present a high-order finite difference numerical scheme to simulate the acoustic wave equation. The scheme uses two coupled equations in pressure and displacement, and is 8th order in space and 2n-th order in time, n ≥ 1. The parallelism of the code is based on a message-passing implementation to handle one simulation, and a master-slave paradigm to simulate a large acquisition campaign. The code has been tested on different architectures and presents a high level of portability.
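To make the scheme concrete, here is a minimal 1D sketch: an 8th-order centered stencil in space with 2nd-order leapfrog time stepping (the n = 1 case), in a pressure-only formulation for brevity. The authors' coupled pressure-displacement form and MPI parallelism are not reproduced; grid and velocity values are illustrative assumptions.

```python
# Minimal 1D acoustic FD sketch: 8th order in space, 2nd order in time.
import numpy as np

# 8th-order coefficients for the second spatial derivative
C = np.array([-1/560, 8/315, -1/5, 8/5, -205/72, 8/5, -1/5, 8/315, -1/560])

def step(p_prev, p_cur, vel, dt, dx):
    """Advance the pressure field one leapfrog time step (interior points)."""
    lap = np.zeros_like(p_cur)
    for k, c in enumerate(C):
        off = k - 4                        # stencil offsets -4 .. +4
        lap[4:-4] += c * p_cur[4 + off: len(p_cur) - 4 + off]
    lap /= dx * dx
    return 2 * p_cur - p_prev + (vel * dt) ** 2 * lap

# Usage: impulsive source in a homogeneous 2 km/s medium (CFL satisfied)
nx, dx, dt = 1000, 5.0, 1e-3               # grid points, spacing (m), step (s)
vel = 2000.0                               # velocity (m/s)
p_prev, p_cur = np.zeros(nx), np.zeros(nx)
p_cur[nx // 2] = 1.0                       # point source at the center
for _ in range(500):
    p_prev, p_cur = p_cur, step(p_prev, p_cur, vel, dt, dx)
```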
-
GEOCUBIT, an HPC parallel mesher for Spectral-Element Method seismic wave simulation
Authors E. Casarotti, M. Stupazzini, S. J. Lee, D. Komatitsch, A. Piersanti and J. Tromp
Wave propagation phenomena can nowadays be studied thanks to many powerful numerical techniques. Spurred by the computational power made available by parallel computers, geoscientists and engineers can now accurately compute synthetic seismograms in realistic 3D Earth models. In this field, the Spectral Element Method (SEM) has convincingly demonstrated the ability to handle high-resolution simulations at global (e.g., Komatitsch et al., 2005), regional (e.g., Komatitsch et al., 2004; Lee et al., in press) and local scales (e.g., Stupazzini, 2004).
The SEM is a generalization of the Finite Element Method (FEM) based on the use of high-order piecewise polynomial functions. In the coming petaflops era, the SEM should become a standard tool for the study of seismic wave propagation, both for forward and inverse problems. The more power computer clusters provide, the higher the resolution available for the simulations. Consequently, the definition of a good geological model and the creation of an all-hexahedral unstructured mesh are critical.
-
Massively parallel computations for the solution of the 3D Helmholtz equation in the frequency domain
Authors H. Calandra, I. Duff, S. Gratton, X. Pinel and X. Vasseur
The topic of our work is the solution of the three-dimensional Helmholtz equation in the frequency domain on massively parallel computers, modeled by the following partial differential equation:

$$ \Delta u + \left(\frac{2\pi f}{c}\right)^{2} u = g, $$

with some absorbing boundary conditions, where u is the pressure of the wave, f its frequency, c the propagation velocity of the subsurface and g a Dirac function that represents the wave source in the frequency domain. This equation is involved in an inverse problem modelling wave propagation in the Earth. The solution of this inverse problem enables geophysicists to deduce the structure of the subsoil from experimental data. An explicit solution method (time domain) is often considered because it keeps the memory requirement acceptable. But working in the time domain supposes that stability conditions on the discretization scheme hold in both time and space, which often leads to very small time steps (i.e. long simulation times) for real problems. One of the great advantages of the frequency-domain formulation is that the stability conditions of the discretization scheme rely only on the frequency. The frequency-domain formulation is, however, much greedier in memory than the time-domain one, because standard discretization methods such as finite-difference and finite-element methods lead to linear systems whose size depends linearly on the frequency. Nevertheless, according to recent trends in massively parallel architectures, solving the implicit Helmholtz equation now seems feasible, because large distributed memories and efficient interconnects are becoming available. We shall show that linear systems with more than one billion unknowns can be solved on present supercomputers in a few minutes.
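On a 1D toy problem, the frequency-domain approach reduces to assembling a sparse Helmholtz matrix and performing one direct solve per frequency and source. The sketch below (illustrative grid, frequency and simple Dirichlet boundaries, unlike the paper's 3D absorbing-boundary problem) shows the structure:

```python
# 1D toy Helmholtz problem: assemble -u'' - k^2 u = g and solve directly.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n, L = 2000, 4000.0              # grid points, domain length (m)
h = L / (n + 1)                  # grid spacing
c, f = 2000.0, 10.0              # velocity (m/s), frequency (Hz)
k2 = (2 * np.pi * f / c) ** 2    # squared wavenumber

# Sparse FD matrix with homogeneous Dirichlet boundaries
main = np.full(n, 2.0 / h**2 - k2)
off = np.full(n - 1, -1.0 / h**2)
A = sp.diags([off, main, off], [-1, 0, 1], format="csc")

g = np.zeros(n)
g[n // 2] = 1.0 / h              # discrete Dirac source at the domain center

u = spla.spsolve(A, g)           # one direct solve per frequency and source
print("pressure at source:", u[n // 2])
```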
-
The role of high-performance computing in seismic imaging and interpretation
Authors B. Biondi, B. Clapp and A. Valenciano
Progress in seismic-imaging technology is driven by advances in data acquisition and high-performance computing. Wide-azimuth acquisition geometries of both marine and land data are dramatically changing the data we image. The commoditization of multi-core processors and the availability of ultra-fast hardware accelerators (FPGAs, GPUs, Cells, …) will change the way we image and interpret those new data sets. These hardware improvements will enable the application of imaging operators that are more accurate in both the modeling of the underlying physical phenomena (e.g. wave propagation vs. ray tracing) and the approximation of the actual inversion of the recorded data. The future availability of workstations with multi-core CPUs will enable the exploitation of expensive numerical algorithms to support interpretation. This should lead to dramatic improvements in structural and stratigraphic interpretation in areas where the complexity of the velocity model requires a tight loop between interpretation and processing.
-
Pushing the limits of 3D acoustic waveform inversion in the frequency domain
Authors H. Ben-Hadj-Ali, F. Sourbier, V. Etienne, S. Operto and J. Virieux
We present a 3-D acoustic full waveform tomography (FWT) based on a forward problem suited for multisource simulations. This forward problem, based on the wave equation, is solved in the frequency domain using a direct solver technique, leading to an impressive core-memory requirement. The imaging problem (Tarantola, 1987) is built through a local minimization of the misfit function between recorded and synthetic data. The frequency-domain (FD) formulation of FWT was originally developed for 2D cross-hole acquisition surveys, which involve wide-aperture propagation (Pratt and Worthington, 1990). Only a few discrete frequencies are required to develop a reliable image of the medium, thanks to the wavenumber redundancy provided by multifold wide-aperture geometries. The lowest frequency and the starting model both play a critical role in the convergence of the minimization. Since full wave propagation modeling is a critical issue in FWT methods, a 3D optimal finite-difference stencil has been designed by Operto et al. (2007) that requires only 4 grid points per wavelength for accurate modelling, reducing the memory request when solving the large sparse linear system for each frequency we consider. Although present hardware configurations limit the domain dimensions, it remains unclear where the different bottlenecks of the approach lie, such as the degrading conditioning of the impedance matrix or the poor scalability when we increase the number of nodes. In this presentation, we shall provide some insights on the feasibility and relevance of 3D frequency-domain FWT for building high-resolution velocity models of isotropic acoustic media, with one application related to the SEG/EAGE Overthrust model, and we shall provide an analysis for isotropic elastic media.
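The local minimization mentioned above is conventionally posed as a least-squares misfit over the selected discrete frequencies; a standard form (our notation, assuming the usual formulation rather than quoting the paper) is

$$
C(\mathbf{m}) = \frac{1}{2} \sum_{f} \left\lVert \mathbf{d}_{\mathrm{obs}}(f) - \mathbf{d}_{\mathrm{calc}}(\mathbf{m}, f) \right\rVert^{2},
$$

where $\mathbf{m}$ is the velocity model, $\mathbf{d}_{\mathrm{obs}}(f)$ the recorded data and $\mathbf{d}_{\mathrm{calc}}(\mathbf{m}, f)$ the synthetic data produced by the frequency-domain forward problem.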
-
Combining direct and iterative solvers for improving efficiency of solving wave equations when considering multi-source problems
Authors F. Sourbier, A. Haidar, L. Giraud, R. Brossier, S. Operto and J. Virieux
Frequency-domain full-waveform inversion (FWI) has been extensively developed during the last decade to build high-resolution velocity models (Pratt, 2004). One advantage of the frequency domain is that inversion of a few frequencies is enough to build velocity models from wide-aperture acquisitions. Multi-source frequency-domain wave modeling requires the solution of a large sparse system of linear equations with multiple right-hand sides (RHS). In 3D geometries, or for very large 2D problems, the memory requirements of state-of-the-art direct solvers preclude applications involving hundreds of millions of unknowns. In order to overcome this limitation, we investigate a domain decomposition method based on the Schur complement approach for 2D/3D frequency-domain acoustic wave modeling. The method relies on a hybrid direct-iterative solver. A direct solver is applied to the sparse impedance matrices assembled on each subdomain, hence reducing the memory requirement of the overall simulation. An iterative solver based on a preconditioned Krylov method is used to solve for the interface nodes between adjacent domains. A possible drawback of the hybrid approach is that the time complexity of the iterative part increases linearly with the number of RHS, if a single-RHS Krylov subspace method is sequentially applied to each RHS. We mention that block-Krylov techniques or deflation techniques can be used in that case to partially overcome this effect. In the following, we introduce the domain decomposition method before illustrating its features with 2D and 3D simulations.
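For readers unfamiliar with the Schur complement approach, the reordered system and the interface problem the Krylov method solves can be written as (standard domain decomposition notation, assumed rather than quoted from the paper)

$$
\begin{pmatrix} A_{II} & A_{I\Gamma} \\ A_{\Gamma I} & A_{\Gamma\Gamma} \end{pmatrix}
\begin{pmatrix} u_{I} \\ u_{\Gamma} \end{pmatrix}
=
\begin{pmatrix} b_{I} \\ b_{\Gamma} \end{pmatrix},
\qquad
S\, u_{\Gamma} = b_{\Gamma} - A_{\Gamma I} A_{II}^{-1} b_{I},
\quad
S = A_{\Gamma\Gamma} - A_{\Gamma I} A_{II}^{-1} A_{I\Gamma},
$$

where $I$ denotes the interior unknowns (handled by the direct solver through one factorization of $A_{II}$ per subdomain) and $\Gamma$ the interface unknowns between subdomains.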
-
Seismic Imaging and the Road to Petascale Capacity: RTM and the Cell/B.E. Processor
Authors F. Ortigosa, J. M. Cela, M. Araya-Polo and R. de la Cruz
[...] on the other hand, is around the corner. Seismic imaging is definitively a field in our industry where petascale capacity is needed. The question is not when this capacity will be widely available, but how. There are several hardware processors and devices that are candidates for the brain of the new generation of petascale supercomputers. The only thing they have in common is that all of them are difficult to program, and the programming will be different from the programming of today's x86 processor generation. We believe that among all the hardware options, the Cell/B.E. processor has several characteristics that make it ideal for widely available petascale capacity. Besides the difficulty of programming the Cell, we present an RTM benchmark between Cell and PowerPC processors. We show that, using an early generation of the Cell and a difficult kernel of a compute-intensive algorithm, we may expect almost one order of magnitude of performance increase.
-
Grid and Cloud Computing: Opportunities and challenges for e-Science
By F. Gagliardi
A new science paradigm has emerged in the last few years, referred to as electronic Science (e-Science). It extensively uses simulation techniques based on software modeling, which run on distributed computing infrastructures. In addition, it makes use of huge amounts of distributed and shared data captured by instruments or sensors and/or stored in databases, analyzed to provide new results for science. This distributed HPC and data environment allows sharing the acquired knowledge, accessing remote resources and enabling worldwide scientific collaboration.
-
The Implications of Multicore Processors for High Performance Computing
With some of the largest supercomputing clusters on the planet, the seismic imaging industry
has a tremendous appetite for computing resources. Increasing demand for energy and
societal pressures for greener energy will only accelerate this growing need. It is therefore
clear that seismic imaging companies will be early adopters for the newest generation of
supercomputers that enable petascale computing and beyond – machines that are perhaps as
close as only months away. However, as this transition to petascale sweeps through the
industry, the industry will also be one of the first groups to come face-to-face with the
significant changes that will be required in order to go beyond petascale. The industry will
have to actively develop plans for how their business models and standard operations will
change in order for practitioners to achieve the performance they require for their ever-growing technical challenges. The next few years will present very important challenges and
opportunities for high performance computing in research, industry and business. Petascale
computing and, eventually, “exascale” computing will bring the promise of capability to
deliver full solutions to some of the most challenging and complex issues facing the industry.
However, for well-documented technology reasons, these new computing system architectures will be radically different in design from traditional high performance
computing platforms. For example, in response to growing technological obstacles, the
processor industry is moving down the multicore path. This development is driving a sea
change in the computer industry for which a new "Moore's Law" may be arising - dictating a
doubling of the number of cores per unit time. As more cores are squeezed on to a chip, the
old programming approaches will not be adequate to achieve the performance required by this
industry; and naive assumptions of linear scaling of performance with the number of cores
will be very wrong. Recent experience with multicore has identified key challenges which
will have to be overcome in order to realize the potential of the next generation of
supercomputing. These challenges include fundamental algorithm design; integration of novel
architectures with more traditional computational systems; management of the unprecedented
amounts of data which are now a key component in all high performance computing
activities; and the development, improvement and validation of new application solutions
which address the full complexity of the problems which these novel architectures will make
tractable. This presentation will discuss how the industry will be impacted by these changes
and what practitioners can do to achieve the full potential of multicore-based petascale and
exascale supercomputers.
-
Seismic Wave Propagation on GPUs
Authors A. Loddoch and W. R. Volz
In this work, we present the adaptation of a 3D finite-difference acoustic wave propagation code to GPU architecture using the NVIDIA CUDA framework. We demonstrate the general concept of transforming a CPU-based code to fit the requirements of GPU computing. The individual steps of this process are illustrated along with the necessary modifications to the original program code and algorithm.
Problems that typically arise when porting a code to graphics cards, such as numerical stability due to reduced accuracy and memory availability/handling, are discussed, as well as the ability to use multiple GPUs simultaneously.
The described technique results in a GPU-based code that provides speedups of one order of
magnitude in execution time compared to the original CPU version.
-
Experiences with Seismic Imaging on GPUs
Authors S. Morton, T. Cullison and P. Micikevicius
For years graphics co-processing units (GPUs) have been growing in performance capabilities
faster than CPUs, and have recently surpassed them. Simultaneously the hardware and
corresponding APIs have grown more generally applicable to a wide range of computational
tasks. This has fostered the use of GPUs for general programming.
In this abstract we will describe our experiences monitoring, testing and ultimately using
GPUs to run various seismic imaging kernels. The performance improvement of GPUs over
CPUs varies with algorithm but is high enough so that we've purchased a substantial GPU-based cluster. In our workshop talk, we will discuss our production experiences running code
on this system.
-
NVIDIA Tesla, a way to dramatically speedup seismic processing and reservoir simulation applications
Massive, fine-grained parallel computing capabilities will be needed to help researchers
effectively use petascale computing environments. In particular, petascale computing will
gain performance speed from the parallel processing capabilities of graphics processing units
(GPU). The concept behind the general-purpose GPU (GPGPU) is simple: Use the massively
parallel architecture of the graphics processor for general-purpose computing tasks. Because
of that parallelism, ordinary calculations can be dramatically sped up.
GPGPU is being used as a high-performance coprocessor for oil and gas exploration and other
applications—and it's much cheaper than a supercomputer. Scientists and researchers benefit
from the power of the massively parallel computing architecture. This availability of
supercomputing will unlock the answers to previously unsolvable problems in systems
ranging from a workstation to server clusters.
Using a GPU as a calculation unit may appear complex. It is not about dividing the task into a handful of threads, as with a multicore CPU; rather, it involves thousands of threads.
In other words, trying to use the GPU is pointless if the task isn't massively parallel; for this reason, it can be compared to a supercomputer rather than a multi-core CPU. An application to be run on a supercomputer is necessarily divided into an enormous number of threads, and a GPU can thus be seen as an economical version of such a machine, devoid of its complex structure.
NVIDIA CUDA is a software layer intended for stream computing and an extension of the C programming language that allows certain functions to be marked for processing by the GPU instead of the CPU. These functions are compiled by a CUDA-specific compiler so that they can be executed by the GPU's numerous calculation units. Thus, the GPU is seen as a massively parallel co-processor well adapted to highly parallel algorithms such as those in seismic processing and reservoir simulation.
The NVIDIA Tesla product line is dedicated to HPC. The Tesla Computing System is a slim 1U form factor that easily scales to solve the most complex, data-intensive HPC problems. It is equipped with four new-generation NVIDIA GPU boards, IEEE 754-compliant double-precision floating point, and a total of 16 GB of video memory. The rack is used in tandem with multi-core CPU systems to create a flexible computing solution that fits seamlessly into existing IT infrastructure.
-
Hardware Hybrid Computing solutions
Parallel HPC applications benefit from multi-core CPU technology and have been able to multiply computation density by a factor of 2 to 4, and later by 8. This improvement is not enough compared to the computation requirements of today's applications. This is why people have been looking for new hardware and specialized processors which could give applications gains of 20 up to 100 times.
Specialized processors like GPUs have improved performance at a greater pace than Moore's law predicts. They started 10 years ago with 350 nm technology, 5 million transistors at 75 MHz, and now use 55 nm technology with 700 million transistors at 800 MHz, able to deliver 512 GFlops, or more than 3.5 GFlops/Watt.
This corresponds to improvement factors of 1.7x/year in transistor count, 1.3x/year in clock speed, 2.0x/year in processing units and 1.3x/year in memory bandwidth. Using such powerful dedicated processors alongside multi-core CPUs in a highly parallel environment highlights the requirement to use this heterogeneous hybrid-computing environment in the most efficient way.
This hardware environment exists and can be used today. The first challenge is on the software development side. Development tools need to integrate heterogeneous programming as well as multi-core support at the core of their language, being able to generate code for different processor types and to handle asynchronous behavior. This comes with compilers and libraries that support it, designed or extended for the purpose. Obviously, those tools need to support multiple hardware platforms in order to lead to standards.
The second key challenge is the evolution of the buses and bandwidth linking together the different CPU and GPU cores, and the way they talk to each other. Fusion projects will address those evolutions in the future by defining new architectures around these processors to improve the data flow between them, which will be the key to using all the available power. Crossbar memory controllers will allow GPUs to talk to each other very quickly without breaking parallelism. The HyperTransport bus will improve communication between GPUs and CPUs. Finally, multi-core GPUs and CPUs on the same die will increase compute density even more.
Different benchmarks and application codes have already been used to demonstrate the benefits of such an architecture. We will present SGEMM results as well as different algorithms. The results will highlight the fact that performance is currently affected by in/out copying of data on the GPU, and that finer tuning allows huge jumps in performance. We will also show that adapting algorithms originally implemented for the CPU to fit the GPU architecture adds even more performance gains.
-
Characterizing controls of geothermal systems through integrated geologic and geophysical studies: Developing recipes for successful exploration of conventional and unconventional geothermal systems
Authors J. Faulds, M. F. Coolbaugh, G. S. Vice and V. Bouchot
Although conventional geothermal systems have been successfully exploited for electrical
production and district heating in many parts of the world, exploration and development of
new systems is commonly stymied by the risk of unsuccessful drilling. Problems include
drilling of hot, relatively dry wells with low flow rates, decreasing temperatures with depth as
wells penetrate relatively thin and shallow geothermal aquifers (overturn), and wells with
reasonable flow rates but relatively low temperatures. Due to the high cost of drilling, such
problems can effectively preclude geothermal exploration. Proposals to generate enhanced
geothermal systems (EGS) by artificially stimulating hot dry wells, commonly through
mechanical hydro-fracturing of rocks, have therefore gained in popularity.
-
Lithosphere tectonics and thermo-mechanical properties: an integrated modeling approach for EGS exploration in Europe
By F. Beekman
For geothermal exploration and production of enhanced geothermal systems (EGS), knowledge of the thermo-mechanical signature of the lithosphere and crust is important to
obtain critical constraints for the crustal stress field and basement temperatures. The stress
and temperature field in Europe is subject to strong spatial variations which can be linked to
polyphase extensional and compressional reactivation of the lithosphere, in different modes
of deformation. The development of innovative combinations of numerical and analogue
modeling techniques is key to thoroughly understand the spatial and temporal variations in
crustal stress and temperature. In this paper we present an overview of our advancement
developing and applying analogue and numerical thermo-mechanical models to
quantitatively assess the interplay of lithosphere dynamics and basin (de)formation. Field
studies of kinematic indicators and numerical modeling of present-day and paleo-stress fields
in selected areas have yielded new constraints on the causes and the expression of
intraplate stress fields in the lithosphere, driving basin (de)formation. The actual basin
response to intraplate stress is strongly affected by the rheological structure of the underlying
lithosphere, the basin geometry, fault dynamics and interplay with surface processes.
Integrated basin studies show that rheological layering and strength of the lithosphere plays
an important role in the spatial and temporal distribution of stress-induced vertical motions,
varying from subtle faulting to basin reactivation and large wavelength patterns of
lithospheric folding, demonstrating that sedimentary basins are sensitive recorders to the
intraplate stress field. The long-lasting memory of the lithosphere, in terms of lithospheric-scale weak zones, appears to play a far more important role in basin formation and
reactivation than hitherto assumed. A better understanding of the 3-D linkage between basin
formation and basin reactivation is, therefore, an essential step in research that aims at
linking lithospheric forcing and upper mantle dynamics to crustal vertical motions and stress,
and their effect on sedimentary systems and heat flow. Vertical motions in basins can
become strongly enhanced, through coupled processes of surface erosion/sedimentation
and lower crustal flow. Furthermore, patterns of active thermal attenuation by mantle plumes
can cause a significant spatial and modal redistribution of intraplate deformation and stress,
as a result of changing patterns in lithospheric strength and rheological layering. Novel
insights from numerical and analogue modeling aid in quantitative assessment of basin and
basement histories and shed new light on tectonic interpretation, providing helpful
constraints for geothermal exploration and production, including understanding and
predicting crustal stress and basin and basement heat flow.
-
Geophysical exploration methods at European sites
By D. BruhnMost geophysical exploration methods have been developed for the oil and gas industry, and
ever more sophisticated tools and refinements in the different approaches are designed to
solve specific problems associated with the detection and characterisation of hydrocarbon
reservoirs. The exploration of geothermal resources has profited greatly from these
developments; however, the methods cannot always be directly transferred from oil and gas
to hot water and/or steam. First of all, the physical properties of H2O differ from those of
hydrocarbons, resulting in differing responses of physical measurement methods. Secondly,
geothermal reservoirs can be found in highly varying geological environments, mostly
associated with volcanism, where hydrocarbons are usually not present. Thirdly, the
economically most interesting geothermal reservoirs are much hotter than any oil or gas
reservoir. At moderate temperatures, comparable to those of hydrocarbon reservoirs, many of the
advanced exploration methods are simply cost-prohibitive, as the economic potential of a
medium-enthalpy geothermal reservoir is much lower than that of an oil or gas well. For these
reasons, some of the existing geophysical methods have to be adapted to meet the needs of
geothermal exploration or different methods have to be developed and applied.
-
-
-
Technological challenges of geothermal exploration
By A. Manzella
The most pressing technological challenges in exploration and investigation of Enhanced
Geothermal Systems (EGS) and Unconventional Geothermal Resources (UGR) are
considered to be those associated with the identification of the nature of geothermal heat
concentrations and prospective reservoirs without drilling, the improvement of methods for
predicting reservoir performance and lifetime, and the optimization of resource exploitation
with the lowest possible environmental impact.
A list of important research themes is provided, allowing a spatial and temporal
reconstruction of subsurface geothermal conditions that might not only cut the time from
discovery to production and improve efficiency, but also reduce environmental impacts by
forecasting possible problems and finding solutions beforehand.
-
-
-
Available approaches for increasing the producibility of geothermal wells in natural fracture systems by optimizing their placement, design, and stimulation
Perhaps the greatest value of technology in geothermal energy can be assessed in
terms of its ability to reduce risk. The present contribution only addresses subsurface matters
and, even more specifically, how naturally fractured geothermal reservoirs could be more
efficiently tapped and developed in the framework of EGS operations. Of the two
aspects which have to be investigated in EGS projects, i.e. temperatures and flow rates, only
the latter is considered here, assuming that the isotherm geometry can, for instance, be
constrained by MT (magneto-telluric) surveys looking for electrically conductive altered rocks
(clay alteration zones) or other techniques.
The ability to characterize NFSs (natural fracture systems) in the early field development
stage of an EGS project reduces economic risk because it enables the development team to
determine optimal well placement and trajectories. Characterizing, tapping and developing a
geothermal NFS, as well as predicting the heat and fluid flows in response to hot water
extraction and cooled water re-injection, is a challenging task that spans multiple disciplines
and multiple scales.
-
-
-
Induced seismicity: Setting the problem in perspective for EGS development
By K. F. Evans
Induced seismicity is a recognised hazard in practically all engineering endeavours where
stress or pore pressure is altered. This can be taken as a reflection of the realisation that
has dawned in the past 20 years that the Earth's crust generally supports high shear stress
levels and is often close to failure. Historically, the most damaging events, which have
sometimes caused many fatalities, are associated with the impoundment of reservoirs.
However, earthquakes of sufficient size to cause damage to localities have also been
associated with mining activity, long-term fluid withdrawal wells, and long-term fluid injection
wells. Given that massive stimulation injections into crystalline rocks have routinely been
performed at EGS sites since the early 1970s, it is perhaps surprising that the issue of the
seismic hazard associated with these operations has only recently come to the fore. Massive
injections of fluid have been conducted at Fenton Hill, Rosemanowes, Hijiori and Soultz (3.5
km reservoir) without producing events large enough to disturb the local population. More
recently, events of magnitude approaching or exceeding 3.0 have occurred during or shortly
following injections at Soultz, Cooper Basin (Australia) and Basel, all of which were conducted
at 4.5-5.0 km. These events, particularly the event at Basel because of its proximity to a major
population centre and reports of damage, have galvanised attention on the seismic hazard
posed by EGS development.
-
-
-
Understanding the stress field and potential fault activity – a key issue to drilling and stimulation in man-made geothermal reservoirs
Authors I. Moek, H. Schandelmeier and T. Backers
It has been recognized that in-situ stresses have a significant impact, either positive or
negative, on the short- and long-term behaviour of fractured reservoirs. Knowledge of the
stress conditions is therefore important for the planning and utilization of man-made
geothermal reservoirs. The geothermal field Groß Schönebeck belongs to the key sites in the
north eastern German Basin in Germany. We present a combined approach of stress field
determination and application of the new knowledge for drilling and stimulation design at this
key site, where exploration of 4100 m deep Lower Permian sandstones and volcanic rocks is
ongoing. In our comprehensive study we use detailed 3D fault mapping, based on
available well and 2D seismic data, stress regime determination based on empirical and
analytical methods, and slip-tendency analysis to estimate reactivation and leakage potential
of any fault population within the stress field under initial and changing pore pressure
conditions. We discuss the importance of various fault sets related to the stress field in terms
of their potential for conducting geothermal fluids based on the tendency of the faults to dilate
and slip. In particular, we demonstrate how the well path trajectory and mud weights can be
defined on the basis of principal stress orientation and magnitude to minimize formation
damage under mechanically stable borehole conditions and to optimise stimulation designs
of multiple fracs in multilayered rocks. Finally, the results of slip-tendency can be used to
control seismicity induced by massive stimulation campaigns at geothermal sites. Our
approach can be adapted to any other geothermal site investigation.
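As a minimal illustration of the slip- and dilation-tendency analysis described above (a sketch, not the authors' implementation), the following Python snippet resolves an assumed stress tensor onto a hypothetical fault plane; all stress magnitudes and the fault orientation are invented for illustration.
```python
import numpy as np

# Hypothetical principal stresses (MPa), s1 >= s2 >= s3, aligned with the
# coordinate axes for simplicity (x = s1 direction, y = s2, z = s3).
s1, s2, s3 = 100.0, 70.0, 40.0
S = np.diag([s1, s2, s3])

# Hypothetical unit normal of a fault plane in the principal-stress frame
n = np.array([0.5, 0.3, np.sqrt(1 - 0.5**2 - 0.3**2)])

t = S @ n                          # traction vector acting on the plane
sigma_n = n @ t                    # normal stress (compression positive)
tau = np.sqrt(t @ t - sigma_n**2)  # resolved shear stress

Ts = tau / sigma_n               # slip tendency (Morris et al., 1996)
Td = (s1 - sigma_n) / (s1 - s3)  # dilation tendency

print(f"sigma_n = {sigma_n:.1f} MPa, tau = {tau:.1f} MPa")
print(f"slip tendency Ts = {Ts:.2f}, dilation tendency Td = {Td:.2f}")
```
Faults whose orientations yield a high Ts within the ambient stress field are the ones most prone to reactivation under rising pore pressure, which is how the analysis links to stimulation design and induced-seismicity control.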
-
-
-
Supercritical fluids and their properties for heat transmission and geochemical reactivity: example of supercritical CO2
Authors M. Azaroual, L. André, A. Lassin and A. Menjoz
The thermodynamic and thermophysical properties of supercritical carbon dioxide (CO2(sc))
are known and theoretical approaches are introduced in many numerical modeling codes.
Various studies have identified the key mechanisms of transport and the physical-chemical
behaviour of the near-well field around CO2(sc) injection wells in saline aquifers (André et al., 2007,
and references therein). The contrast in thermophysical properties between water and
carbon dioxide is sufficiently large to envisage the use of CO2(sc) as a heat transmission fluid
in the context of enhanced geothermal systems - EGS (Brown, 2000; Pruess, 2006; Pruess
and Azaroual, 2006; Pruess 2008). Carbon dioxide is a poor conductor of heat and a low-
density, low-viscosity fluid, but it still offers quite attractive flow properties, especially
because of its low viscosity and high buoyancy. It is also a poor solvent for solids and water.
Analysis of these thermodynamic functions reveals the complexity of the thermal perturbation
induced by the injection of CO2(sc) into geothermal heat exchangers in which the initial
conditions of temperature and pressure correspond to the supercritical field of CO2.
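To make the water/CO2 property contrast concrete, the short sketch below queries an equation-of-state library at assumed reservoir conditions. It uses the open-source CoolProp package, which is not referenced by the authors, and the pressure and temperature values are hypothetical.
```python
# pip install CoolProp
from CoolProp.CoolProp import PropsSI

T, P = 473.15, 25e6  # assumed reservoir conditions: 200 degC, 25 MPa

for fluid in ("CO2", "Water"):
    rho = PropsSI("D", "T", T, "P", P, fluid)  # density, kg/m^3
    mu = PropsSI("V", "T", T, "P", P, fluid)   # dynamic viscosity, Pa.s
    cp = PropsSI("C", "T", T, "P", P, fluid)   # isobaric heat capacity, J/(kg.K)
    print(f"{fluid:5s}: rho = {rho:7.1f} kg/m3, mu = {mu:.2e} Pa.s, "
          f"cp = {cp:7.1f} J/(kg.K)")
```
The low viscosity and strong buoyancy of CO2(sc) relative to water, visible in such numbers, are what make it attractive as a heat transmission fluid despite its lower heat capacity.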
-
-
-
Green field evaluation approach for geothermal energy
By R. Bertani
“Geothermal resource assessment for green fields” is the evaluation of the expected
potential of geothermal electricity that might become available from exploitation of a
given reservoir.
The standard technique described (“stored heat method”) takes into account only the heat
reserves of the inferred geothermal field, without any consideration of the number of wells
and economical feasibility: the permeability of the system is simply not used.
This approach can be considered “step zero”: a first, rough approximation of the capacity
that could be installed on a given field when the available information is very poor and
speculative. We will discuss the physical and mathematical basis of the method, and we
will present applications to two real cases. We restrict our analysis to water-dominated
systems, both high enthalpy (flash plant) and medium enthalpy (binary plant). Some
examples have been chosen as a benchmark of the technique.
With the introduction of a high (realistic) value of the abandonment temperature, the
correction for cooling-effect degradation of the specific consumption, and the actual flash
technology, a value significantly lower than that of the standard approach is obtained. We
believe this result to be a better estimate of the effective industrial capacity that can be
supported by a realistic approach in a geothermal project. More detailed benchmarks will be
conducted in the future, comparing actual geothermal field performance with the estimated
capacity.
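A minimal sketch of the volumetric “stored heat” calculation discussed above may help fix ideas; every parameter value below (recovery factor, conversion efficiency, abandonment temperature, and so on) is an illustrative assumption, not one of the author's calibrated figures.
```python
# Volumetric "stored heat" estimate for a green field (illustrative values only)
rho_c   = 2.7e6   # volumetric heat capacity of reservoir rock, J/(m^3.K) (assumed)
area    = 10e6    # reservoir area, m^2 (assumed)
thick   = 1000.0  # reservoir thickness, m (assumed)
T_res   = 240.0   # mean reservoir temperature, degC (assumed)
T_aband = 180.0   # abandonment temperature, a high "realistic" cut-off (assumed)
R       = 0.10    # recovery factor (assumed)
eta     = 0.12    # conversion efficiency, heat to electricity (assumed)
life_s  = 30 * 365.25 * 24 * 3600  # project life: 30 years, in seconds
load    = 0.9     # plant load factor (assumed)

heat_in_place = rho_c * area * thick * (T_res - T_aband)  # J
electric_energy = heat_in_place * R * eta                 # J
capacity_MW = electric_energy / (life_s * load) / 1e6     # MWe

print(f"heat in place: {heat_in_place:.2e} J")
print(f"capacity     : {capacity_MW:.1f} MWe")
```
Raising the abandonment temperature shrinks the usable temperature interval, which is exactly why the corrected estimate comes out significantly lower than the standard one.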
-
-
-
Advances in magnetotelluric sounding of geothermal zones
Advanced 3-D interpretation tools based on imaging, Bayesian inversion and artificial neural
network (ANN) recognition developed by the author (Spichak et al., 1999; Spichak, 2007)
form the basis of a new paradigm in electromagnetic data interpretation that takes into
account the geological information known, noise level in the data, prior estimates of the
unknown parameters, hypotheses formulated in probabilistic terms, data available from other
methods and formalized expert estimates. Application of these methods to magnetotelluric
sounding data enables constructing 3-D electrical resistivity models of the geothermal areas,
mapping the geothermal reservoirs and monitoring macro-parameters of the fluid bearing
faults.
In particular, Spichak (2002) used Bayesian inversion of MT data in order to construct a 3-D
resistivity model of the Minamikayabe geothermal area (Hokkaido, Japan). Spichak (2001)
found the most suitable data transforms for adequate interpretation of MT measurements
carried out with the purpose of monitoring variations of geothermal reservoir resistivity
with temperature. Finally, an ANN expert system enabled estimation of the macro-parameters
of the Minou fault (Kyushu, Japan) (Spichak et al., 2002).
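For readers unfamiliar with the quantities being inverted, the sketch below computes apparent resistivity and phase from an MT impedance and checks them against the analytic homogeneous half-space response; this is textbook magnetotellurics, not the author's ANN or Bayesian machinery.
```python
import numpy as np

mu0 = 4e-7 * np.pi
freqs = np.logspace(-3, 3, 7)  # Hz, a typical MT band
rho_true = 50.0                # assumed half-space resistivity, Ohm.m

omega = 2 * np.pi * freqs
Z = np.sqrt(1j * omega * mu0 * rho_true)  # analytic half-space impedance

rho_app = np.abs(Z) ** 2 / (omega * mu0)  # apparent resistivity, Ohm.m
phase = np.degrees(np.angle(Z))           # 45 degrees over a half-space

for f, ra, ph in zip(freqs, rho_app, phase):
    print(f"f = {f:8.3f} Hz  rho_a = {ra:6.2f} Ohm.m  phase = {ph:4.1f} deg")
```
Departures of measured apparent-resistivity and phase curves from such simple responses carry the structural information that the 3-D inversion and recognition tools are designed to extract.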
-
-
-
The Green tuff units of Mt. Epomeo, Ischia Island (Italy): evidence of an exhumed fossil geothermal system
Authors P. Fulignati, A. Sbrana, M. Vietina and A. J. Boyce
The hydrothermally altered Green Tuff units that outcrop on the western and north-western
flanks of Mt Epomeo on Ischia island offer the rare opportunity to see a section of a
hydrothermal system exposed at the surface. The Ischia island fossil hydrothermal system
shows numerous features in common with typical active geothermal systems developed on
volcanic islands.
The mineralogy, chemistry and spatial distribution of the hydrothermal assemblages
observed in the Green Tuff units and in hydrothermally altered xenoliths agree with those of
most geothermal environments (Browne, 1978). Based on (1) the occurrence of temperature-sensitive
minerals such as mixed layers I/S (<150°C), illite-phengite-chlorite (>220-240°C)
and biotite (>320°C), and (2) chlorite and illite geothermometry, the secondary minerals of the
Ischia island fossil hydrothermal system indicate ambient paleo-temperatures ranging from
120-140°C to about 340°C.
-
-
-
The Ischia Island hydrothermal system: hydrogeochemical conceptual model
Authors P. Fulignati, G. Giudetti, A. Sbrana, I. Giulivo and L. Monti
Ischia island is located in the north-western part of the Gulf of Naples and is part of the
Phlegrean Fields Volcanic District. Ischia is considered an example of a resurgent caldera
and is characterized by a high heat flux, between 200 and 400 mW/m2.
Hydrothermal activity has been well known on the island since Roman times, and more than 200 hotels
and spa resorts, located all over the island, use thermal waters (T ranges from 30°C to
99°C) for balneo-therapeutic treatments. The deep wells drilled by SAFEN in the 1950's, and
subsequent investigations (De Gennaro et al., 1984; Panichi et al., 1992; Inguaggiato et al.,
2000), also revealed the occurrence of high temperature fluids (up to ~220°C) in the subsoil
of the island, hosted within a possible geothermal reservoir. The aim of this work is to
characterize the main geochemical processes that explain the water geochemistry of the
thermal fluids of Ischia Island, to classify the water composition data into genetic groups and
to delineate a conceptual model to explain the composition of the discharges.
-
-
-
Migration of fluids in the Boccheggiano-Montieri (southern Tuscany, Italy) fossil geothermal system: insights for the Larderello high-enthalpy active geothermal field
Authors A. Brogi, A. Dini, P. Fulignati, D. Liotta, G. Ruggieri and A. Sbrana
Understanding the migration of hydrothermal fluids is a continuing task for
successful exploration of geothermal resources. Contributions to better constrain the
hydrogeological models in geothermal areas can derive from field and laboratory studies on
fossil geothermal systems, evidenced by the concentration of ore deposits in wide areas.
This work presents an integrated study based on fluid inclusion and structural analyses on a
Pliocene-Pleistocene fossil hydrothermal system, located to the south of the present active
Larderello geothermal field. Mineralization, mainly made up of quartz and pyrite, is
widely distributed in the damage zone of the Pliocene Boccheggiano normal fault and,
away from it, in older cataclastic levels deriving from previous deformational events.
-
-
-
Shallow versus deep thermal circulations at Bagni di S. Filippo (M.te Amiata, Tuscany, Italy)
Authors A. Baietto, G. Giudetti, S. Governi, L. Fusani and E. Salvatici
The M.te Amiata sector constitutes a volcano-geothermal area of southern Tuscany.
This area has been affected by extensional tectonics from the Early-Middle Miocene onwards
(Carmignani et al., 1994), which led, in the Pliocene, to the emplacement of a deep-seated
intrusive body and to the eruption of dacitic-rhyodacitic lavas through a NW-SE-trending
fissure (Ferrari et al., 1996). Currently, this area is characterized by a high heat flow (up to
600 mW/m2; Baldi et al., 1995) that feeds important geothermal fields. Main geothermal
reservoirs are located at depths of several hundreds of meters within Triassic evaporitic
horizons, constituting the base of the Tuscan Unit.
-
-
-
Resistivity reduction in the vapour-dominated field of Travale (Italy)
Authors C. Giolito, G. Ruggieri, A. Manzella and G. Gianelli
The aim of this multidisciplinary work is to find out what can account for the significant reduction
in resistivity (from about 1000 to 1 Ωm) observed at the depth of the geothermal reservoirs in the
Travale area (SE of Larderello, Italy). Since the exploited fluid is supersaturated steam and
thus resistive, its presence and localisation cannot explain the observed resistivity
reductions. The observed reduction in resistivity could be related: 1) to the lithology and
heterogeneities of reservoir rocks and the alteration affecting them (i.e. presence-abundance
of conductive minerals), and/or 2) to the presence of brines within a fracture net sufficiently
interconnected to produce electrolytic conduction.
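As a back-of-the-envelope check on hypothesis (2), the sketch below uses Archie's law to ask what brine-saturated, interconnected porosity would be required to pull the bulk resistivity down to reservoir-level values; all parameter values are assumptions chosen for illustration.
```python
import numpy as np

rho_w = 0.05          # brine resistivity at reservoir temperature, Ohm.m (assumed)
m = 1.5               # cementation exponent, low value for fracture nets (assumed)
Sw, n_exp = 1.0, 2.0  # full brine saturation of the connected porosity (assumed)

for phi in (0.001, 0.005, 0.01, 0.05):
    # Archie's law (a = 1): rho_bulk = rho_w * phi**-m * Sw**-n
    rho_bulk = rho_w * phi ** (-m) * Sw ** (-n_exp)
    print(f"phi = {phi:5.3f}  ->  rho_bulk = {rho_bulk:8.1f} Ohm.m")
```
Even a percent or less of well-connected, brine-filled fracture porosity is enough to lower the bulk resistivity by orders of magnitude, which is why electrolytic conduction in a fracture net is a plausible alternative to conductive alteration minerals.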
-
-
-
Waveform tomography - Successes, Cautionary tales, and future directions
By R. G. Pratt
I would like to highlight some of the issues in evaluating the success (or otherwise) of
waveform tomography. For synthetic data it is always tempting to commit inverse crimes;
for real data, scrutiny of the detailed data fit is a sine qua non. I'll go through some new
animations that illustrate the benefits of the frequency domain, and I'll finish with some topics
that we are working on - real data examples of Q-inversion being a current interest.
-
-
-
Applications of Waveform Inversion
Waveform inversion techniques aim to fit the entire seismic wavefield, including those phases
that conventional processing and migration seek to remove. Such methods have the potential
to image the subsurface with significantly improved spatial resolution. The inversion in this
paper uses a frequency-domain, finite-difference modeling method to solve the full acoustic
wave equation, so high-order effects such as diffractions and multiple scattering are
accounted for automatically. It is a local descent algorithm that refines a starting model
iteratively to reduce the waveform misfit between observed and modeled data.
Waveform inversion is applied to a number of synthetic models, including the complicated
BP EAGE model. The results have demonstrated that waveform inversion has the potential to
reconstruct high-resolution velocity structure. Some practical strategies were found to be
critical in the application to real data. These strategies include: using a diving wave
tomography model as a starting model; starting the inversion with the lowest available
frequency; using complex-valued velocity to take care of undesired amplitude discrepancies;
using complex-valued frequencies to simulate the damping of late arrivals; and handling
surface-related multiples effectively.
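The strategies listed above, in particular starting from a tomography-like smooth model and sweeping from the lowest frequency upward, can be illustrated on a deliberately tiny 1D frequency-domain example. The sketch below is not the paper's algorithm: it uses a dense Helmholtz solve, a brute-force finite-difference gradient instead of the adjoint method, a slightly complex frequency as crude absorption, and invented model values.
```python
import numpy as np

def solve_helmholtz(c, f, src, dx):
    """Dense 1D Helmholtz solve, (d2/dx2 + w^2/c^2) u = delta(src).
    A small imaginary part of the frequency acts as crude absorption."""
    w = 2 * np.pi * f * (1 + 0.05j)
    n = len(c)
    A = np.zeros((n, n), complex)
    i = np.arange(n)
    A[i, i] = -2 / dx**2 + (w / c) ** 2
    A[i[:-1], i[:-1] + 1] = 1 / dx**2
    A[i[1:], i[1:] - 1] = 1 / dx**2
    b = np.zeros(n, complex)
    b[src] = 1.0
    return np.linalg.solve(A, b)

def misfit(c, f, src, rec, d_obs, dx):
    r = solve_helmholtz(c, f, src, dx)[rec] - d_obs
    return 0.5 * np.real(np.vdot(r, r))

dx, n, src = 50.0, 60, 2
rec = np.arange(40, 58)
c_true = np.full(n, 2000.0); c_true[25:35] = 2300.0  # hidden anomaly
c = np.full(n, 2000.0)                               # smooth starting model

# Frequency continuation: fit low frequencies first, then move higher
for f in (2.0, 4.0, 8.0):
    d_obs = solve_helmholtz(c_true, f, src, dx)[rec]
    for it in range(10):
        f0 = misfit(c, f, src, rec, d_obs, dx)
        g = np.zeros(n)
        for k in range(n):  # brute-force gradient, cell by cell
            cp = c.copy(); cp[k] += 1.0
            g[k] = misfit(cp, f, src, rec, d_obs, dx) - f0
        c -= 25.0 * g / (np.abs(g).max() + 1e-30)  # normalized descent step
    print(f"f = {f} Hz done, misfit = {f0:.3e}")

print("max model error:", np.abs(c - c_true).max(), "m/s")
```
Fitting the lowest frequency first keeps the early updates smooth and long-wavelength, which is the essence of the multiscale strategy described in the abstract.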
-
-
-
Effects of Surface Scattering in Waveform Inversion
Authors F. Bleibinhaus and S. Rondenay
For seismic waveform inversion of body waves, the free surface is usually neglected on
grounds of computational efficiency. Subsurface parameters are simply extended into the air,
and sources and receivers are embedded in the model at their true locations. Many synthetic
studies have shown that this is an acceptable approximation if the surface is flat and,
naturally, surface waves are excluded from the inversion. In this study, we investigate the
effects of ignoring P wave scattering from irregular topography in acoustic waveform
inversion.
-
-
-
Full Elastic Waveform Inversion: Future of Quantitative Seismic Imaging
Authors S. Singh, T. Sears, M. Roberts, A. Gosselet, G. Royle and P. Baton
Seismic reflection data are acquired at a very high cost. Conventional processing (stacking
and migration) provides a very high-quality image of the sub-surface, but does not provide
quantitative measures of the physical properties of the sub-surface. Amplitude versus offset
(AVO) analyses can be used to estimate P- and S-wave impedances. Since the method is local,
i.e. it assumes 1D media and a linear approximation to the reflection coefficient, and ignores
interference effects, the results are very approximate. Over the last ten years, we have
developed a suite of 1D and 2D full waveform inversion methods. We have particularly focused on the
use of wide-aperture data, containing near- and post-critical angle reflections, which helped to
constrain medium scale features of the velocity model, allowing convergence towards the
global minimum. The algorithm has been applied to surface seismic reflection, ocean bottom
cable and walk-away VSP data. Both vertical (Vz) and horizontal (Vx) particle velocity
records are used, allowing fine-scale estimation of both P- and S-wave velocities.
The 2D elastic waveform inversion scheme is based on Shipp and Singh (2002) and
Freudenreich et al. (2002), utilizing a finite-difference solution to the 2D elastic wave
equation (Levander, 1988) operating in the time-distance domain. The aim of the scheme is
to model shot gathers accurately, and to use the residual between observed and modeled
wavefields to update the velocity model appropriately using a conjugate gradient method.
Both Vp and Vs may be inverted for, whilst density is coupled empirically with Vp.
-
-
-
Full-waveform inversion results when using acoustic approximation instead of elastic medium
Authors C. Barnes and M. Charara
Seismic marine data inversion is a computationally very heavy process, especially in the 3D case. Approximations are often made to limit the number of physical parameters or to speed up the forward modeling. Because the data are often dominated by unconverted P-waves, one popular approximation is to consider the earth as purely acoustic: no shear modulus, sometimes even constant density. Nonlinear waveform seismic inversion consists in iteratively minimizing the misfit between the amplitudes of the measured and the modeled data.
-
-
-
3D wavefield tomography: Problems, opportunities and future directions
Wavefield tomography, otherwise known as full-waveform inversion, of two-dimensional
seismic data, has become a well-established technique over the past decade, with impressive
recovery of realistically complex synthetic models being reported by several groups.
However, despite its proven potential, its uptake to tackle real-world exploration and
production problems has been rather limited. In our view, this has been principally because
the increased spatial resolution, accuracy, and other benefits that the method brings are only
genuinely realised for field data when the method is extended to deal with three-dimensional
velocity structure, three-dimensional reflection geometry, and a three-dimensional array of
sources and receivers. Since the real world is always three-dimensional, very accurate
two-dimensional solutions to three-dimensional problems are nearly always illusory: the higher
the spatial resolution of the method, and the more accurate the physics of wave propagation
that is employed, the more significant will be the errors that are introduced by neglect of
the third dimension. In essence, there is little utility to be gained from a model that is highly
resolved in two dimensions, but that is not at all resolved in the third, and where structure
from the missing third dimension is mapped incorrectly onto the 2D plane.
-
-
-
3D Full Waveform Inversion: a Complex Recipe for Success
By L. Sirgue
Waveform inversion has been an established technique for more than two decades
(Lailly 1983; Tarantola, 1984). Numerous 2D applications on synthetic and real data
have been published (Mora, 1988; Pratt et al., 1996, Operto et al., 2004, Sirgue and
Pratt, 2004). In particular, studies related to the influence of subsurface angle
illumination (Jannane et al., 1989, Sun and McMechan, 1992; Pratt et al., 1996;
Sirgue and Pratt , 2004) have demonstrated the importance of wide-angle/large offset
surface seismic data.
On the other hand, the highly non-linear nature of waveform inversion calls for
low frequencies in the seismic data. Multi-scale strategies in either the time or
frequency domains (Bunks et al, 1995; Forgues et al., 1998) have shown that inverting
initially for the low-end of the frequency spectrum may be an efficient approach for
the mitigation of non-linearity.
This need for low frequencies, however, cannot be dissociated from the accuracy of
the starting model: inaccuracy of the starting model will result in more
demanding requirements in terms of low frequencies (Sirgue and Pratt, 2002). The
combination of wide-angle illumination, low frequencies and starting model hence
constitutes the primary ingredients of a successful inversion (Sirgue, 2006). Each of
these ingredients plays a key role and interacts with one another in a complex
relationship that will depend on the geophysical problem that one is trying to solve.
More recently, the first examples of 3D waveform inversion were shown (Sirgue et
al., 2007; Ben-Hadj-Ali et al., 2007). While the extension of waveform inversion to
3D problems will not fundamentally change the importance of wide-angle data,
starting model and low frequencies, additional aspects will need to be assessed such
as the impact of the azimuthal coverage.
-
-
-
Velocity inversion based on one-way migration and semblance maximization versus full-waveform inversion
Authors R. Soubaras and B. Gratacos
In this paper, we provide a theoretical comparison between two methods of velocity estimation:
full-waveform inversion, based on the minimization of the misfit between the recorded data and the
data reconstructed from a reflectivity model and a velocity model, and stack-energy maximization
methods, where the energy of a one-way migration is maximized. We first analyze the process of
migration and least-squares migration, then show that minimizing the data misfit is equivalent
to first order to energy maximization of a migration, when an appropriate weighting matrix is
used. We then obtain, by using 1D migration, a first-order approximation of this weighting matrix.
Synthetic examples illustrate this derivation. The influence of using one-way propagation
instead of two-way is then discussed.
-
-
-
Velocity analysis in the data domain – overview and prospects
Authors T. van Leeuwen and W. A. Mulder
Automatic versions of migration velocity analysis provide velocity background models that
can serve as a starting point for migration or least-squares inversion. When dealing with
primary reflections, current methods perform well. They tend to break down, however, when
strong multiples are present. This is due to the single scattering approximation that underlies
most migration algorithms. When multiples are interpreted as primaries, they may end up at
the wrong depth with the wrong apparent velocity. In the data-domain, it should in principle
be possible to match observed with predicted multiples and use them for velocity estimation.
We present a method for performing the velocity analysis in the data domain and include tests
on multiple-free synthetic data and on real data, using the convolutional model to model the
data. We also outline ideas on how the method is expected to behave in the presence of
multiples.
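For reference, the convolutional model mentioned above amounts to treating each trace as a wavelet convolved with a reflectivity series; the toy values below are invented.
```python
import numpy as np

dt = 0.004                # sample interval, s
t = np.arange(0, 1.0, dt)

# Ricker wavelet with an assumed 25 Hz peak frequency
f0 = 25.0
tw = np.arange(-0.1, 0.1, dt)
arg = (np.pi * f0 * tw) ** 2
w = (1 - 2 * arg) * np.exp(-arg)

# Sparse reflectivity series with a few hypothetical reflectors
r = np.zeros_like(t)
r[[50, 120, 180]] = [0.8, -0.5, 0.3]

# Convolutional model: d(t) = w(t) * r(t)
d = np.convolve(r, w, mode="same")
print("trace rms:", np.sqrt(np.mean(d ** 2)))
```
This simple forward model underlies the data-domain matching described above; extending the predicted data to include multiples is the subject of the ideas the authors outline.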
-
-
-
On the gradient generated wave-paths in differential semblance velocity analysis
By P. Shen
Differential semblance velocity analysis (“DSVA”; Symes, 1986) estimates velocity models
from waveform data, by means of prestack migration and its linearized adjoint state process.
Several authors have presented DSVA in its explicit differential forms for laterally
heterogeneous velocity models using various methods of prestack migration (Symes and
Versteeg, 1993; Kern and Symes, 1994; Chauris and Noble, 2001; Mulder and ten Kroode,
2002). Shen et al. (2003) presented a version of DSVA based on an objective of focusing the
image in subsurface offset. In this approach the wrong velocity is penalized simply by
multiplying the image volume by subsurface offset. The name “differential
semblance” is justified through its subsurface angle substitute (Shen, 2004), where an explicit
differential with respect to angle is formulated.
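A minimal numerical transcription of this subsurface-offset penalty (a sketch of the bookkeeping only, with a random stand-in image volume) looks as follows.
```python
import numpy as np

nz, nx, nh = 50, 80, 21             # depth, position, subsurface offset
h = np.linspace(-500.0, 500.0, nh)  # subsurface-offset axis, m

rng = np.random.default_rng(0)
image = rng.standard_normal((nz, nx, nh))  # stand-in for a migrated image volume

# Differential semblance objective: J = 1/2 * || h * I(z, x, h) ||^2
# Energy away from h = 0 (defocusing) is penalized; with the correct
# velocity the image concentrates at zero subsurface offset and J is small.
J = 0.5 * np.sum((image * h[None, None, :]) ** 2)
print("differential semblance objective:", J)
```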
-
-
-
High-resolution imaging of basin-bounding normal faults in the Southern Apennines seismic belt (Italy) by traveltime and frequency-domain full-waveform tomography
Authors L. Improta, S. Operto, C. Piromallo and L. Valoroso
We apply a two-step seismic imaging flow, combining first-arrival traveltime and
frequency-domain waveform tomography, to dense wide-aperture data collected in the Val
d’Agri basin (southern Italy). A large-wavelength Vp model determined by first-arrival
traveltime tomography is used as a starting model for waveform tomography. The multiscale
waveform tomography, consisting of successive inversions of increasing frequencies, allows us to
progressively reconstruct the short wavelengths of the velocity model, providing valuable
information on the Quaternary basin and on range-bounding normal-faulting systems.
-
-
-
Quantitative imaging of the Permo-Mesozoic complex and its basement by frequency domain waveform tomography of wide-aperture seismic data from the Polish Basin
Authors M. Malinowski and S. Operto
Recently we have observed increasing interest in the acquisition of global-offset seismic data for
commercial prospecting in geologically complex areas, e.g. in areas of basalt flows or thrust
belts (Operto et al. 2004, Colombo, 2005). The broad range of recorded offsets provides a
sufficient ray coverage for traveltime tomography and enhances the depth-migrated images by
using more energetic wide-angle reflections. Such data are also well suited for
frequency-domain full waveform inversion (FWI), a method which was recently used for imaging
complex structures (Ravaut et al., 2004).
In this study we present the workflow and results of 2-D frequency domain waveform
tomography (WT) applied to the global-offset seismic data acquired in central Poland along a
50-km long profile during the GRUNDY 2003 experiment (Malinowski et al. 2007). The WT
method allows full exploitation of the wide-aperture content of these data and produces, in a
semi-automatic way, both a detailed P-wave velocity model and a structural image (i.e.,
perturbations with respect to the starting model).
-
-
-
2D Full waveform inversion in time-lapse mode: CO2 quantification at Sleipner
Authors A. Gosselet and S. Singh
Conventional processing of seismic time-lapse data is very valuable in gaining qualitative
insights into reservoir history. For example, it allows characterization of fluid-front
displacements, identification of fluid migration pathways, or detection of flow barriers and
compartments. Quantitative analyses are generally based on Amplitude Versus Offset (AVO)
and attribute generation techniques. However, AVO is fundamentally a 1D approach and is
generally applied using linear approximations. Integrating time-lapse seismic into history
matching is another way to obtain quantitative conclusions. Nevertheless, in this case, seismic
forward modeling is often based on a simple 1D convolution model, and an excessive
computational load precludes any proper optimization loop. To overcome such limitations, we
propose to apply 2D elastic full waveform inversion to time-lapse seismic data. The method is
computer intensive but allows modeling of the different propagation modes (reflections, wide
angles, multiples, converted waves) to achieve a rigorous non-linear inversion of the seismic data.
The approach is applied to reflection time-lapse data from monitoring surveys of the Sleipner
CO2 injection site, North Sea. CO2 separated from methane is injected into the Utsira Sands,
a deep saline aquifer. Inverted P-wave velocity variations are related to CO2 saturation using
Gassmann theory. Since gas injection into a water-bearing formation is a drainage process,
saturation is likely to be patchy. Consequently, we also used the patchy Vp-saturation
relationship to determine the maximum possible saturations. Investigation of the level of
patchiness, with respect to the frequency bandwidth used for the inversion, is required to
determine the most likely CO2 saturation.
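The Vp-saturation link invoked above can be sketched with standard Gassmann fluid substitution: Wood (Reuss) mixing of the fluid moduli for uniform saturation, versus a simple patchy end-member built as a Voigt average of the two fully saturated rocks. All rock and fluid properties below are assumed values, not the Utsira parameters used by the authors.
```python
import numpy as np

# Assumed rock-frame and mineral properties (Pa, kg/m^3) for a soft sand
K_min, K_dry, mu, phi, rho_min = 36.6e9, 2.6e9, 0.85e9, 0.37, 2650.0
# Assumed fluid properties: brine and CO2 at reservoir conditions
K_w, rho_w = 2.3e9, 1030.0
K_c, rho_c = 0.08e9, 700.0

def gassmann(K_fl):
    """Saturated bulk modulus from the dry frame via Gassmann's equation."""
    b = (1 - K_dry / K_min) ** 2
    return K_dry + b / (phi / K_fl + (1 - phi) / K_min - K_dry / K_min**2)

def vp(K_sat, S_co2):
    rho = (1 - phi) * rho_min + phi * ((1 - S_co2) * rho_w + S_co2 * rho_c)
    return np.sqrt((K_sat + 4 * mu / 3) / rho)

for S in (0.0, 0.2, 0.5, 0.8):
    # Uniform saturation: Wood (Reuss) average of fluid moduli, then Gassmann
    K_fl = 1.0 / ((1 - S) / K_w + S / K_c)
    vp_uniform = vp(gassmann(K_fl), S)
    # Patchy end-member: Voigt average of the two fully saturated moduli
    K_patchy = (1 - S) * gassmann(K_w) + S * gassmann(K_c)
    vp_patchy = vp(K_patchy, S)
    print(f"S_CO2 = {S:.1f}: Vp uniform = {vp_uniform:5.0f} m/s, "
          f"patchy = {vp_patchy:5.0f} m/s")
```
The uniform curve drops sharply at low CO2 saturation and then flattens, while the patchy curve declines almost linearly; this is why the two mixing assumptions bracket the saturations that can be inferred from an inverted velocity change.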
-
-
-
What initial velocity model do we need for full waveform inversion?
Authors H. Chauris, M. Noble and C. Taillandier
In the context of velocity model building, we examine whether the velocity models obtained after
first-arrival traveltime tomography are accurate enough for subsequent full waveform
inversion of reflected energy. For that purpose, we test the quality of the velocity model
obtained by first-arrival traveltime tomography on the BP salt dome model. Several 1-D
inversions are conducted in two different zones. In the simplest zone, corresponding to smooth
velocity models, the tomographic models are good enough for waveform inversion with
realistic frequency content. In the complex part, going through a salt body, one needs very low
frequencies (starting at around 1 Hz) or a further refinement of the tomographic model.
-
-
-
Influence of acquisition parameters for 2D acoustic frequency-domain full-waveform inversion
Authors C. Ravaut, M. Alerini, J. A. Haugen and B. Arntsen
In this paper, we studied and illustrated the influence of the acquisition parameters on the results of 2D
acoustic frequency domain full-waveform inversion. We considered two synthetic geological models: a
tilted layered blocks model and a complex salt dome model. In the first case, the inverse problem is quite
linear up to a frequency of 5 Hz, and full-waveform inversion gives well-constrained models for
conventional industrial seismic acquisitions. In the salt dome context, the inverse problem is strongly non-linear, and
very low frequencies and large offsets are necessary for full-waveform inversion to properly reconstruct
the true velocity model. In this very complex case, dedicated acquisitions, with wide offsets and low-
frequency sources, need to be designed to ensure the success of the method.
-
-
-
Comparison of acoustic full waveform tomography in the time- and frequency-domain
Authors A. Kurzmann, D. Köhn and T. Bohlen
For better parameter estimation, both in active-source and earthquake seismology, we need to exploit the richness of full seismic waveforms. Full waveform tomography (FWT) is a powerful method to reach this goal. Although the first implementations in the 1980s were conducted in the time domain by Tarantola, the frequency-domain version of FWT developed in the 1990s by G. Pratt and coworkers has now emerged as an efficient imaging tool. The main advantage of the frequency-domain approach is the possibility of starting the inversion at low frequencies (large-scale structures) and then moving to higher frequency components (smaller-scale structures), thereby realizing a multiscale approach. The main advantage of the time-domain method is the efficient parallelization by domain decomposition, leading to a significant speedup on parallel computers. In this study, we demonstrate the performance of our parallel acoustic time-domain code. We present the results for a very complex example - a random medium model. Last but not least, we compare our time-domain inversion results with the frequency-domain results calculated using the FULLWV code by G. Pratt et al.
-
-
-
Frequency-domain acoustic wave modeling using a hybrid direct-iterative solver based on a parallel domain decomposition method: a tool for 3D Full Waveform Inversion?
Authors F. Sourbier, A. Haidar, L. Giraud, S. Operto and J. Virieux
Frequency-domain full-waveform tomography has been extensively developed during the last decade to build
high-resolution velocity models (Pratt, 2004). One advantage of the frequency domain is that inversion
of a few frequencies is enough to build velocity models from wide-aperture acquisitions. Multi-source
frequency-domain wave modeling requires resolution of a large sparse system of linear equations with
multiple right-hand sides (RHS). In 2D, the method of choice for solving this system relies on a direct solver,
because multi-RHS solutions can be efficiently computed once the matrix has been LU factorized. In 3D, or
for very large 2D problems, the memory complexity of direct solvers precludes applications involving
hundreds of millions of unknowns. To overcome this limitation, we investigate a domain decomposition
method based on a Schur complement approach for 2D/3D frequency-domain acoustic wave modeling.
The method relies on a hybrid direct-iterative solver. A direct solver is applied to the sparse matrices assembled
on each sub-domain, hence mitigating the memory complexity of the overall simulation. An iterative solver
based on a preconditioned Krylov method is used to solve for the interface nodes between adjacent domains.
A drawback of the hybrid approach is that the time complexity of the iterative part increases linearly with
the number of RHS. In the following, we introduce the domain decomposition method before illustrating
its potential with 2D and 3D simulations.
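The linear algebra behind the Schur-complement step can be shown on a toy system: a 1D Laplacian split into two subdomains coupled through a single interface node. This is a dense illustration of the bookkeeping only, not the parallel hybrid solver described in the abstract.
```python
import numpy as np

n = 21  # total unknowns; odd, so a single interface node sits in the middle
A = (np.diag(np.full(n, 2.0)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1))  # 1D Laplacian with Dirichlet ends
b = np.ones(n)

g = n // 2  # interface node index
subdomains = (np.arange(0, g), np.arange(g + 1, n))

def block(rows, cols):
    return A[np.ix_(rows, cols)]

# Schur complement on the interface: S = A_gg - sum_i A_gI * A_II^-1 * A_Ig
S, rhs = A[g, g], b[g]
for I in subdomains:
    S -= A[[g], I] @ np.linalg.solve(block(I, I), A[I, [g]])
    rhs -= A[[g], I] @ np.linalg.solve(block(I, I), b[I])

x = np.zeros(n)
x[g] = rhs / S  # interface solve (done iteratively in the hybrid method)
for I in subdomains:  # independent back-substitution on each subdomain
    x[I] = np.linalg.solve(block(I, I), b[I] - A[I, [g]] * x[g])

print("max error vs direct solve:", np.abs(x - np.linalg.solve(A, b)).max())
```
Each subdomain factorization is independent, which is what lets the direct part be distributed across processors while only the much smaller interface system is handled iteratively.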
-
-
-
Efficient iterative solution of the 3D acoustic wave equation
By A. Umpleby
This contribution outlines the iterative solver that we have developed for the 3D visco-acoustic wave equation
in the frequency domain, explains how the system of equations is preconditioned and how the
method is parallelised for tomography, and presents hardware benchmarks for the method.
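Not the solver described here, but the general pattern (a sparse frequency-domain system, an incomplete factorization as preconditioner, a Krylov iteration) can be sketched with scipy on a damped 1D Helmholtz matrix; grid, frequency and damping values are arbitrary.
```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spilu, LinearOperator, gmres

# Damped 1D Helmholtz system as a stand-in for a visco-acoustic matrix
n, dx, c = 2000, 10.0, 1500.0
w = 2 * np.pi * 5.0 * (1 + 0.02j)  # complex frequency = attenuation
main = -2 / dx**2 + (w / c) ** 2 * np.ones(n)
A = sp.diags([np.ones(n - 1) / dx**2, main, np.ones(n - 1) / dx**2],
             [-1, 0, 1], format="csc", dtype=complex)
b = np.zeros(n, complex)
b[n // 2] = 1.0  # point source in the middle

ilu = spilu(A, drop_tol=1e-4, fill_factor=10)  # incomplete LU preconditioner
M = LinearOperator(A.shape, ilu.solve, dtype=complex)

x, info = gmres(A, b, M=M)
res = np.linalg.norm(A @ x - b) / np.linalg.norm(b)
print("converged:", info == 0, " relative residual:", res)
```
The quality of the preconditioner, controlled here by drop_tol and fill_factor, trades memory against iteration count, the same tension that hardware benchmarks of such solvers quantify.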
-
-
-
Waveform inversion by one-way wavefield extrapolation
By J. Shragge
One-way Riemannian wavefield extrapolation (RWE) on computational meshes conforming to the
direction of turning-wave propagation is presented as an alternative forward modeling procedure
for waveform inversion. Forward modeling tests demonstrate that the RWE approach may be a
sufficiently accurate approximation for calculating the wavefield phases important for
early-arrival waveform inversion. Initial results indicate that RWE waveforms are well matched
at wide offsets to finite-difference data, and can be used in a waveform inversion scheme to
invert synthetic data for 1D velocity perturbations.
-
-
-
Sources Near The Free-Surface Boundary: Pitfalls for Elastic Finite-Difference Seismic Simulation And Multi-Grid Waveform Inversion
Authors J. E. Anderson, J. R. Krebs and D. Hinkley
Elastic full-waveform inversion requires the capability to efficiently do accurate forward seismic
simulations based upon an earth model. The rotated staggered grid (Saenger, 2000) has been
a great advance for accurate elastic wave simulation in isotropic and anisotropic media, but has
requirements that complicate source insertion too close to a free-surface boundary. Coarse-grid
simulations limited to lower-frequency components in the data are frequently used to accelerate
early stages of waveform inversion both for computational efficiency and to better condition the
inversion (Bunks et al., 1995). This works better for acoustic waveform inversion than for the
elastic case. In the elastic case, the free-surface boundary requires a fine spatial grid both for
accurate computation of surface waves and for accurate source insertion. These requirements
offset part of the multi-grid advantage, especially for typical seismic acquisition geometries with
sources and receivers located somewhere near the surface of the earth.
-
-
-
3D Pre-Stack Plane Wave Full Waveform Inversion
Authors D. Vigh and E. W. Starr
PSDM has been in place for decades, with numerous tools to derive velocity fields in depth,
particularly in mature areas such as the Gulf of Mexico. The existing methods have reached
their limits in further defining and/or fine-tuning the velocity models. With the new wide-azimuth
acquisitions where better illumination is foreseeable, velocities can be more accurately
determined. One of the most advanced tools is to use full waveform inversion. Pre-stack
seismic full waveform inversion is a highly challenging task due to non-linearity and
non-uniqueness. Combined with compute-intensive forward modeling and residual wavefield back
propagation, the technique is computationally demanding, especially for 3D projects.
For PSDM, forward modeling and reverse time migration are widely used in the industry.
Even in 3D, the focus has become what sort of methods could be used to achieve
higher-resolution velocity models. The answer is full waveform inversion. Here we examine
the time domain plane wave implementation of 3D waveform inversion supported with
synthetic and field data examples.
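The plane-wave encoding at the heart of such an implementation amounts to delaying each shot gather by p times the shot coordinate and summing; the sketch below shows only that bookkeeping, on invented impulse data.
```python
import numpy as np

nt, dt = 500, 0.004
ns = 40                     # number of point-source shots
xs = np.arange(ns) * 100.0  # shot positions, m (assumed geometry)

rng = np.random.default_rng(1)
# Stand-in shot traces: one spike per shot at a random time
shots = np.zeros((ns, nt))
shots[np.arange(ns), rng.integers(100, 400, ns)] = 1.0

def plane_wave_encode(shots, xs, p, dt):
    """Delay-and-sum shot traces into a single plane-wave source trace.
    p is the ray parameter (s/m); delays are rounded to whole samples."""
    out = np.zeros(shots.shape[1])
    for trace, x in zip(shots, xs):
        shift = int(round(p * x / dt))
        out += np.roll(trace, shift)  # circular shift; adequate for a sketch
    return out

d_plane = plane_wave_encode(shots, xs, p=2.0e-4, dt=dt)
print("encoded trace energy:", np.sum(d_plane ** 2))
```
A modest number of ray parameters then replaces the full set of shots in the inversion, which is one way the cost of 3D waveform inversion can be kept tractable.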
-
-
-
3D acoustic frequency-domain full waveform tomography (FWT): application to the SEG/EAGE Overthrust model
Authors H. Ben-Hadj-Ali, S. Operto, J. Virieux and F. Sourbier
We present a massively parallel frequency-domain full-waveform tomography (FWT) algorithm for imaging
3D acoustic media. FWT refers to imaging methods based on the complete solution of the two-way
wave equation for the forward problem and on inverse problem theory for the imaging problem (Tarantola,
1987). A model is built by minimization of the misfit between the recorded data and data computed
in a starting model. The frequency-domain (FD) formulation of FWT was originally developed
for 2D cross-hole acquisition surveys which involve wide-aperture propagations (Pratt and Worthington,
Only a few discrete frequencies are required to develop a reliable image of the medium thanks to
the wavenumber redundancy provided by multifold wide-aperture geometries. Full wave propagation
modeling is a critical issue in FWT methods since it is the most computationally expensive task in the
processing flow. In the frequency domain, the forward problem reduces to the resolution of a large sparse
system of linear equations for each frequency to be considered. Therefore, a 3D optimal finite-difference
stencil was designed by Operto et al. (2007) that requires only 4 grid points per wavelength. The aim of this work
is to provide some insight into the feasibility and relevance of 3D frequency-domain FWT for building
high-resolution velocity models of isotropic acoustic media. We present the application of FWT to
two targets of the SEG/EAGE Overthrust model.
-
-
-
Practical 3D wavefield tomography on field datasets
Waveform tomography has been used for a number of years in 2D on synthetic and field data,
and recently in 3D (Stekl et al 2007, Ben Hadj Ali et al 2007, Sirgue et al 2007), but only on
synthetic data sets. 3D waveform inversion is still a computationally expensive procedure, but
recent developments in computing power have enabled application of the procedure to
production-scale data. We are going to show results obtained with the 3D waveform inversion
method developed by Stekl, Warner and Umpleby (2007) on a field data set over a shallow
channel.
-
-
-
Multiscale Waveform Tomography with an Adaptive Early-Arrival Muting Window
Authors C. Boonyasiriwat, W. Cao, G. T. Schuster, P. Valasek, P. Routh and B. Macy
We propose a time-domain multiscale waveform tomography method combining early-arrival
waveform tomography and time-domain multiscale waveform tomography. A Wiener
filter is used for data processing, and a multiscale V-cycle with an adaptive early-arrival
muting window for the inversion. The proposed method is very robust and can reconstruct an
accurate velocity structure from marine seismic data.
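The adaptive early-arrival muting window can be pictured as follows: given picked first-arrival times, every sample later than the pick plus a window is zeroed, and the window is widened as the multiscale V-cycle proceeds. A toy sketch with invented picks:
```python
import numpy as np

nt, ntr, dt = 1000, 60, 0.002
data = np.random.default_rng(2).standard_normal((ntr, nt))  # stand-in gather

# Hypothetical picked first-arrival times per trace (s)
t_first = 0.2 + 0.004 * np.arange(ntr)

def early_arrival_mute(data, t_first, window, dt):
    """Keep only samples up to t_first + window on each trace."""
    out = data.copy()
    for i, t0 in enumerate(t_first):
        out[i, int((t0 + window) / dt):] = 0.0
    return out

# Widening the window between stages gradually admits later arrivals
for window in (0.1, 0.2, 0.4):
    muted = early_arrival_mute(data, t_first, window, dt)
    kept = np.count_nonzero(muted) / muted.size
    print(f"window = {window:.1f} s -> {kept:6.1%} of samples kept")
```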
-
-
-
Viscoelastic Modeling and Full Waveform Inversion in Attenuating Media
Authors G. T. Royle and S. C. Singh
Since the 1980's a significant amount of research in the field of full waveform inversion has been
conducted, and we are now at the stage where more accurate descriptions of the subsurface are
being sought in addition to the elastic parameters. Elasticity accounts for materials that have a
capacity to store mechanical energy with no dissipation of energy. A more accurate description
of real earth media takes into account the time-dependent nature of previous stress states. In
other words, the material possesses a characteristic referred to as a memory effect (Robertsson
et al. 1994).
-
-
-
The Campanian active volcanoes: Somma-Vesuvius and Campi Flegrei
Authors D. M. Palladino, S. Simei and R. Trigila
Somma-Vesuvius and Campi Flegrei are two of the most hazardous active volcanoes on Earth, being
located in an area inhabited by three million people. Although contiguous and coeval, the two
volcanoes do not share any important classificative patterns in overall morphology, structure or
magma composition.
Several pieces of evidence on the relationships between catastrophic volcanic events, the natural
environment and the population density make the Campanian region a point of reference for
volcanological research worldwide.
In the last few years, specific studies focused on stratigraphy, eruptive mechanisms and
volcano-tectonic events have contributed to expanding our knowledge of this area (see
Mastrolorenzo et al., 2004 for a comprehensive review). At present, a still increasing amount of
geo-archaeological information allows us to better understand the effects of natural events which
acted on the territory with different intensities and at different time scales. In this relatively small
area, which includes the Campanian Plain with its coasts and reliefs, a few key sites with
well-exposed sections will allow us to summarize its volcanological history in a thorough field
trip lasting two days.
Here, we find Somma-Vesuvius and Campi Flegrei, two of the most important districts of the
Quaternary volcanism of the Tyrrhenian margin. They have grown mostly on alluvial and marine
sediments filling up the graben formed during the Pliocene and Pleistocene by the subsidence of
Mesozoic carbonate platforms that make up the substrate of the Campanian plain; this platform now
lies about 2 km below the volcano (Scandone et al., 1991).
-
-
-
The Laga Basin: stratigraphic and structural setting
Authors S. Bigi, M. Moscatelli and S. Milli
Most of the ancient turbidite systems are known to have been deposited in foredeep basins
at the front of active thrust belts. Unlike fluvio-deltaic systems, generally
located in the more internal portion of these basins, the turbidite systems occur at
different depths in the deeper portions of these basins (foredeep turbidite systems)
or in the relatively shallower, tectonically confined depressions occurring on top of the
thrust belt (wedge-top turbidite systems) (see discussion in Mutti et al. 2002, 2003).
Foredeep turbidite systems represent the classical sedimentation in a broad and flat
basin plain, showing thick to thin, parallel and continuous sandstone beds with
Bouma-type depositional divisions. Wedge-top turbidite systems are directly fed by
fluvio-deltaic systems and more clearly record both climate changes affecting the source
areas and tectonic activity of the orogenic wedge.
Messinian turbidite deposits of the northern and central Apennines show many
characters indicating sedimentation in confined basins, formed since the upper
Tortonian in relation to the segmentation of the Langhian-lower Tortonian Marnoso-
Arenacea foredeep basin (inner stage of the Marnoso-Arenacea; Ricci Lucchi, 1986). In
recent years, detailed facies and physical-stratigraphic analyses, as well as structural
and thermal analyses, conducted on the Laga and Argilloso-Arenacea Fms (central
Apennines), demonstrate that these basins were located at the hinge between the foredeep and
wedge-top depozones of the Messinian Apennine thrust belt (Milli and Moscatelli,
2000, 2001; Bigi et al., 2003; Moscatelli, 2003; Milli et al., 2004; Falcini et al., 2006;
Stanzione et al., 2006; Casero and Bigi, 2006; Aldega et al., 2006; Critelli et al., 2007;
Milli et al., 2007). Anisotropy of the subducted plate and the thrust propagation rate strongly
controlled the onset of complex basins at the top of the orogenic wedge (Casero and
Bigi 2006; Bigi et al., 2006). The resulting topography of these basins and the
concomitant climate changes exerted a strong control on turbidite sedimentation and on
the stratigraphic organization of these deposits.
-
-
-
Walking through downtown Rome. A discovery tour on the key role of geology in the history and urban development of the city
Authors R. Funiciello, G. Giordano, B. Adanti, C. Giampaolo and M. Parotto
Many characteristics of the natural environment where Rome has developed for the last 3000 years have played a major positive role in promoting the excellence of Rome as a political, economic and administrative power, the so-called Caput Mundi of the ancient world. Aside from anthropological and ethnological factors, the favourable geological and geomorphological setting of the future site of Rome promoted the settlement of several archaic villages along the left bank of the Tiber River since the beginning of the third millennium B.P. The sites were strategically located, being characterized by proximity to the river, positions atop isolated tufaceous cliffs dominating the alluvial plain, the abundance of spring water and the wide availability of stone and natural building materials, which allowed a quick technological development of building and infrastructural services in the growing town. The main natural factors playing a strategic role in the development of the long-lived city of Rome have been:
- The geomorphology of the distal volcanic plateau
- Tiber river network and the related alluvial deposits
- The surface geology and its natural materials
- The hydrogeology and microclimatic constraints
-
-
-
Mathematics of Modeling, Migration and Inversion with Gaussian Beams
By N. Bleistein
Gaussian beams are extensions of asymptotic ray theory to waves with complex traveltime. These waves have Gaussian decay orthogonal to a central ray. Solutions of wave problems are represented by integrals (sums) over a suite of Gaussian beams. This tends to produce representations that are smoother than those of classical asymptotic ray theory, facilitating smoother modeling, migration and inversion output.
Asymptotic ray theory is reviewed in the hierarchy of Cartesian coordinates, ray-centered coordinates, ray-centered coordinates with complex traveltimes (Gaussian beams!). Green’s functions and plane-wave modeling are described in each case, with the last requiring integrals over suites of Gaussian beams. Examples of Kirchhoff migration/inversion using Gaussian beam representations of Green’s functions are presented.
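As a minimal numerical rendering of a single 2D Gaussian beam in a homogeneous medium, the sketch below evaluates the standard ray-centered beam ansatz with complex dynamic ray quantities p and q; the conventions follow the usual Gaussian-beam literature and all parameter values are assumptions for illustration, not taken from the lecture.
```python
import numpy as np

v = 2000.0  # homogeneous velocity, m/s (assumed)
f = 20.0    # frequency, Hz (assumed)
w = 2 * np.pi * f

# Dynamic ray tracing along a straight central ray in a homogeneous medium:
# dq/ds = v*p, dp/ds = 0, hence p(s) = p0 and q(s) = q0 + v*p0*s.
# A complex q0 (negative imaginary part here) gives Gaussian decay off the ray.
p0, q0 = 1.0, -1.0e6j  # initial beam parameters (assumed)

s = np.linspace(100.0, 2000.0, 20)    # arclength along the central ray, m
nrm = np.linspace(-600.0, 600.0, 49)  # ray-normal distance, m
S, N = np.meshgrid(s, nrm, indexing="ij")

q = q0 + v * p0 * S
# Beam ansatz: u = sqrt(v/q) * exp(i*w*(s/v + (p/q)*n^2/2))
u = np.sqrt(v / q) * np.exp(1j * w * (S / v + (p0 / q) * N**2 / 2))

amp = np.abs(u)
for i in (0, -1):  # beam width near the start and end of the ray
    prof = amp[i] / amp[i].max()
    width = np.ptp(nrm[prof >= np.exp(-1)])
    print(f"s = {s[i]:6.0f} m: 1/e beam width ~ {width:.0f} m")
```
The amplitude stays Gaussian in the ray-normal direction while the beam slowly widens along the ray; summing many such beams over take-off angles yields the smooth wavefield representations used for modeling, migration and inversion.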
-