Pipeline Risk Management Manual, 3rd Edition
The third edition of W. Kent Muhlbauer's popular text, Pipeline Risk Management Manual, was released earlier this year. Work on the fourth edition is already under way.
To order, please visit your local bookseller or an online retailer.
Thank you for your interest.
To users of previous editions of this book: The first edition of this book was written at a time when formal risk assessments of pipelines were fairly rare. Sure, there were some repair/replace models out there and a few prioritization schemes, but generally, those who embarked on a formal process for assessing pipeline risks were not following the norm and were doing so for their own reasons.
It didn't take a detailed market analysis to see that this was a topic of interest to many, but one clouded by preconceptions that it required huge databases, complex statistical analyses, and obscure probabilistic techniques. In reality, good risk assessments can be, and always have been, done even in a data-scarce environment. This was the major premise of the earlier editions. The first edition even had a certain sense of being a risk assessment cookbook: "here are the ingredients and here is how to combine them." Well, the situation is different now, especially in the Western world. Risk management is now being mandated by regulations. Risk assessment programs are being directly audited by regulators. Risk management plans are increasingly coming under direct public scrutiny. A risk assessment seems to be the centerpiece of every approval process and every pipeline litigation. This is not a bad thing, of course: "If you don't have a number, you don't have a fact; you have an opinion."
The objective of the 3rd edition is again to provide the cookbook (although with much less "thou shalt" and more "consider this"), but its more important thrust is to serve as a reference book for concepts, ideas, and maybe a few templates.
While we generally shy away from technical books that get too philosophical and are weak in specific "how-to's," it is simply not possible to adequately discuss risk without getting into social and psychological issues.
Computer systems are now more robust, so shortcuts, general assumptions, and easy approximations are less justifiable. It was more appropriate to advocate a very simple approach when practitioners were taking up risk assessment only as a "good thing" to do, rather than as a mandated, carefully scrutinized activity. There is certainly still a place for the simple risk assessment. As with the most robust approaches, even the simple techniques crystallize thinking, remove much subjectivity, help to ensure consistency, and generate a host of other benefits. So, the basic risk assessment model of the 2nd edition is preserved in this edition, although it is tempered with many alternative and supporting evaluation ideas. This is done with the belief that an idea and reference book will best serve the present needs of pipeline risk managers and anyone interested in the field.
For those who have systems in place based on previous editions, the recommendation is simple: retain your current model and all its variables, but build a modern foundation beneath those variables. This is straightforward and will be a worthwhile effort. Work to replace the high-level assessments of "good," "fair," and "poor" with evaluations that combine several data-rich sub-variables. In many cases, this allows your "as-collected" data and measurements to be used directly in the risk model, with no extra interpretation steps required; a brief sketch of the idea follows.
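To illustrate (and only to illustrate: the sub-variables, weights, and scoring ranges below are invented for this sketch, not taken from the model in this book), a single "coating condition: good/fair/poor" judgment might be rebuilt from measurable sub-variables such as coating age, survey indications, and soil resistivity:

    # Illustrative sketch only: replaces a subjective "good/fair/poor" coating
    # label with a score built from measurable sub-variables. The variable
    # names, weights, and scoring ranges are hypothetical assumptions.

    def coating_score(age_years, survey_defects_per_mile, soil_resistivity_ohm_cm):
        """Return a 0-10 coating-condition score (10 = best)."""
        # Newer coatings earn more credit; assume credit is exhausted at 50 years.
        age_pts = max(0.0, 1.0 - age_years / 50.0)
        # Fewer coating-defect indications from surveys earn more credit.
        defect_pts = max(0.0, 1.0 - survey_defects_per_mile / 20.0)
        # Higher soil resistivity is less corrosive; cap credit at 10,000 ohm-cm.
        soil_pts = min(soil_resistivity_ohm_cm, 10000.0) / 10000.0
        # Weighted combination of the sub-variables (the weights are assumptions).
        return 10.0 * (0.4 * age_pts + 0.4 * defect_pts + 0.2 * soil_pts)

    # "As-collected" measurements feed the model directly:
    print(coating_score(age_years=18, survey_defects_per_mile=4,
                        soil_resistivity_ohm_cm=6000))   # about 7 of 10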
The results will usually be the same, since the previous high-level labels were no doubt based on these same sub-variables, only informally. If your new processes do yield different results from the previous assessments, then some valuable knowledge has been gained. Find the disconnect and learn why one of the approaches was not "thinking" correctly. In the end, the risk assessment has been improved!
Acknowledgements
Preface
Introduction
Risk Assessment at a Glance
Chapter 1 Risk: Theory and Application
Chapter 2 Risk Assessment Process
Chapter 3 Third-Party Damage Index
Chapter 4 Corrosion Index
Chapter 5 Design Index
Chapter 6 Incorrect Operations Index
Chapter 7 Leak Impact Factor
Chapter 8 Data Management and Analyses
Chapter 9 Additional Risk Modules
Chapter 10 Service Interruption Risk
Chapter 11 Distribution Systems
Chapter 12 Offshore Pipeline Systems
Chapter 13 Stations and Surface Facilities
Chapter 14 Absolute Risk Estimates
Chapter 15 Risk Management
Appendix A Typical Pipeline Products
Appendix B Leak Rate Determination
Appendix C Pipe Strength Determination
Appendix D Surge Pressure Calculations
Appendix E Sample Pipeline Risk Assessment Algorithms
Appendix F Receptor Risk Evaluation
Appendix G Examples of Common Pipeline Inspection and Survey Techniques
Glossary
References
Index
I. The science and philosophy of risk
Embracing paranoia
One of Murphy’s famous laws states that "left to themselves, things will always go from bad to worse." This humorous prediction is, in a way, echoed in the second law of thermodynamics. That law deals with the concept of entropy. Stated simply, entropy is a measure of the disorder of a system. The thermodynamics law states that "entropy must always increase in the universe and in any hypothetical isolated system within it." Practical application of this law says that to offset the effects of entropy, energy must be injected into any system. Without adding energy, the system becomes increasingly disordered.
Although the law was intended to be a statement of a scientific property, it was seized upon by "philosophers" who defined "system" to mean a car, a house, an economy, a civilization, or anything else that becomes disordered. By this extrapolation, the law explains why a desk or a garage becomes increasingly cluttered until a cleanup (an injection of energy) is initiated. Gases diffuse and mix in irreversible processes, unmaintained buildings eventually crumble, and engines (highly ordered systems) break down without the constant infusion of maintenance energy.
Here is another way of looking at the concept: "Mother Nature hates things she didn’t create." Forces of nature seek to disorder man’s creations until the creation is reduced to the most basic components. Rust is an example—metal seeks to disorder itself by reverting to its original mineral components.
If we indulge ourselves with this line of reasoning, we may soon conclude that pipeline failures will always occur unless an appropriate type of energy is applied. Transport of products in a closed conduit, often under high pressure, is a highly ordered, highly structured undertaking. If nature indeed seeks increasing disorder, forces are continuously at work to disrupt this structured process. According to this way of thinking, a failed pipeline, with all its product released into the atmosphere or the ground, or equipment and components decaying and reverting to their original premanufactured states, represents the less ordered, more natural state of things.
These quasi-scientific theories actually provide a useful way of looking at portions of our world. If we adopt a somewhat paranoid view of forces continuously acting to disrupt our creations, we become more vigilant. We take actions to offset those forces. We inject energy into a system to counteract the effects of entropy. In pipelines, this energy takes the forms of maintenance, inspection, and patrolling; that is, protecting the pipeline from the forces seeking to tear it apart.
After years of experience in the pipeline industry, experts have established activities that are thought to directly offset specific threats to the pipeline. Such activities include patrolling, valve maintenance, corrosion control, and all of the other actions discussed in this text. Many of these activities have been mandated by governmental regulations, but usually only after their value has been established by industry practice. Where the activity has not proven to be effective in addressing a threat, it has eventually been changed or eliminated. This evaluation process is ongoing. When new technology or techniques emerge, they are incorporated into operations protocols. The pipeline activity list is therefore being continuously refined.
A basic premise of this book is that a risk assessment methodology should follow these same lines of reasoning. All activities that influence the pipeline, favorably or unfavorably, should be considered, even if comprehensive historical data on the effectiveness of a particular activity are not yet available. Industry experience and operator intuition can and should be included in the risk assessment.
The scientific method
This text advocates the use of simplifications to better understand and manage the complex interactions of the many variables that make up pipeline risk. This approach may appear to some to be inconsistent with their notions about scientific process. Therefore, it may be useful to briefly review some pertinent concepts related to science, engineering, and even philosophy.
The result of a good risk assessment is, in fact, the advancement of a theory. The theory is a description of the expected behavior, in risk terms, of a pipeline system over some future period of time. Ideally, the theory is formulated from a risk assessment technique that conforms to appropriate scientific methodologies and has made appropriate use of information and logic to create a model that can reliably produce such theories. It is hoped that the theory is a fair representation of actual risks. To be judged a superior theory by the scientific community, it will use all available information in the most rigorous fashion and be consistent with all available evidence. To be judged a superior theory by most engineers, it will additionally have a level of rigor and sophistication commensurate with its predictive capability; that is, the cost of the assessment and its use will not exceed the benefits derived from its use. If the pipeline actually behaves as predicted, then everyone's confidence in the theory will grow, although results consistent with the predictions will never "prove" the theory.
Much has been written about the generation and use of theories and the scientific method. One useful explanation of the scientific method is that it is the process by which scientists endeavor to construct a reliable and consistent representation of the world. In many common definitions, the methodology involves hypothesis generation and testing of that hypothesis:
1. Observe a phenomenon.
2. Hypothesize an explanation for the phenomenon.
3. Predict some measurable consequence that your hypothesis would have if it turned out to be true.
4. Test the predictions experimentally.
Much has also been written about the fallacy of believing that scientists use only a single method of discovery and that some special type of knowledge is thereby generated by this special method. For example, the classic methodology shown above would not help much with investigation of the nature of the cosmos. No single path to discovery exists in science, and no one clear-cut description can be given that accounts for all the ways in which scientific truth is pursued.
Common definitions of the scientific method note aspects such as objectivity and acceptability of results from scientific study. Objectivity indicates the attempt to observe things as they are, without altering observations to make them consistent with some preconceived world view. From a risk perspective, we want our models to be objective and unbiased (see the discussion of bias later in this chapter). However, our data sources often cannot be taken at face value. Some interpretation and, hence, alteration is usually warranted, thereby introducing some subjectivity. Acceptability is judged in terms of the degree to which observations and experiments can be reproduced. Of course, the ideal risk model will be accurate, but its accuracy may only be verified after many years. Reproducibility is another characteristic that is sought, and it is immediately verifiable: if our model is acceptable, multiple assessors examining the same situation should come to similar conclusions.
The scientific method requires both inductive reasoning and deductive reasoning. Induction, or inference, is the process of drawing a conclusion about an object or event that has yet to be observed or to occur, on the basis of previous observations of similar objects or events. In both everyday reasoning and scientific reasoning regarding matters of fact, induction plays a central role. In an inductive inference, for example, we draw conclusions about an entire group of things, or a population, on the basis of data about a sample of that group or population; or we predict the occurrence of a future event on the basis of observations of similar past events; or we attribute a property to a nonobserved thing on the grounds that all observed things of the same kind have that property; or we draw conclusions about the causes of an illness based on observations of symptoms. Inductive inference permeates almost all fields, including education, psychology, physics, chemistry, biology, and sociology.
At least one application of inductive reasoning in pipeline risk assessment is obvious: using past failures to predict future performance. A narrower example of inductive reasoning for pipeline risk assessment would be: "Pipeline ABC is shallow and fails often; therefore, all shallow pipelines fail more often."
Deduction, on the other hand, reasons forward from established rules: "All shallow pipelines fail more frequently; pipeline ABC is shallow; therefore, pipeline ABC fails more frequently."
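To make the contrast concrete, here is a minimal sketch in which the failure counts, mileages, and rates are all invented for illustration. The inductive step generalizes a rule from an observed sample; the deductive step then applies that rule to a specific, not-yet-observed segment:

    # Hypothetical incident history; all counts and mileages are invented.
    shallow_history = {"failures": 12, "mile_years": 3000}  # observed shallow lines
    deep_history = {"failures": 4, "mile_years": 4000}      # observed deep lines

    # Inductive step: generalize from the observed sample to a rule.
    shallow_rate = shallow_history["failures"] / shallow_history["mile_years"]
    deep_rate = deep_history["failures"] / deep_history["mile_years"]
    # Inferred rule: shallow lines fail more often (0.004 vs. 0.001 per mile-year).

    # Deductive step: apply the established rule to an unobserved case.
    # Pipeline ABC is shallow and 25 miles long; predict its expected failures.
    abc_miles = 25
    expected_failures_per_year = shallow_rate * abc_miles
    print(f"Pipeline ABC: {expected_failures_per_year:.2f} expected failures/year")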
As an interesting aside to inductive reasoning, philosophers have struggled with the question of what justification we have to take for granted the common assumptions used with induction: that the future will follow the same patterns as the past; that a whole population will behave roughly like a randomly chosen sample; that the laws of nature governing causes and effects are uniform; or that we can presume that a sufficiently large number of observed objects gives us grounds to attribute something to another object we have not yet observed. In short, what is the justification for induction itself? Although it is tempting to try to justify induction by pointing out that inductive reasoning is commonly used in both everyday life and science, and its conclusions are, by and large, proven to be correct, this justification is itself an induction and therefore it raises the same problem: Nothing guarantees that simply because induction has worked in the past it will continue to work in the future. The problem of induction raises important questions for the philosopher and logician whose concern it is to provide a basis for assessment of the correctness and the value of methods of reasoning.
Beyond the reasoning foundations of the scientific method, there is another important characteristic of a scientific theory or hypothesis that differentiates it from, for example, an act of faith: A theory must be "falsifiable." This means that there must be some experiment or possible discovery that could prove the theory untrue. For example, Einstein's theory of relativity made predictions about the results of experiments. These experiments could have produced results that contradicted Einstein, so the theory was (and still is) falsifiable [56]. On the other hand, the existence of God is an example of a proposition that cannot be falsified by any known experiment. Risk assessment results, or "theories," will predict very rare events and hence will not be falsifiable for many years. This implies an element of faith in accepting such results. Because most risk assessment practitioners are primarily interested in the immediate predictive power of their assessments, many of these issues can largely be left to the philosophers. However, it is useful to understand the implications and underpinnings of our beliefs.
Modeling
As previously noted, the scientific method is a process by which we create representations or models of our world. Science and engineering (as applied science) are and always have been concerned with creating models of how things work. As it is used here, the term model refers to a set of rules that are used to describe a phenomenon. Models can range from very simple screening tools (e.g., "if A and not B, then risk = low") to enormously complex sets of algorithms involving hundreds of variables that employ concepts from expert systems, fuzzy logic, and other artificial intelligence constructs.
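For instance, the simple screening rule just quoted can be written directly as a few lines of code. This is a sketch only; conditions A and B are placeholders for whatever yes/no criteria a screening model might use:

    # A screening tool at its simplest: the rule "if A and not B, then
    # risk = low" expressed directly. A and B are placeholder conditions,
    # e.g., A = "coated and cathodically protected," B = "located in a
    # high-consequence area." Both are assumptions for this sketch.

    def screen(a: bool, b: bool) -> str:
        """Return a coarse risk category from two yes/no conditions."""
        if a and not b:
            return "low"
        return "needs further assessment"

    print(screen(a=True, b=False))  # -> low
    print(screen(a=True, b=True))   # -> needs further assessment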
Model construction enables us to better understand our physical world and hence to create better engineered systems. Engineers actively apply such models in order to build more robust systems. Model building and model application/evaluation are therefore the foundation of engineering. Similarly, risk assessment is the application of models to increase the understanding of risk, as discussed later in this chapter.
In addition to the classical models of logic, newer techniques are emerging that seek to better deal with uncertainty and incomplete knowledge. Methods of measuring "partial truths" (when a thing is neither completely true nor completely false) have been created based on fuzzy logic, which originated in the 1960s at the University of California at Berkeley as a technique to model the uncertainty of natural language. Fuzzy logic, or fuzzy set theory, resembles human reasoning in the face of uncertainty and approximate information. Questions such as "To what degree is x safe?" can be addressed through these techniques. They have found engineering application in many control systems, ranging from "smart" clothes dryers to automatic trains.
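As a small illustration of a "partial truth" (the depth-of-cover breakpoints below are invented for this sketch, not drawn from any published standard), a fuzzy membership function can express the degree to which a pipeline segment is "adequately buried," rather than forcing a yes/no answer:

    # Fuzzy membership sketch: "to what degree is this depth of cover safe?"
    # The breakpoints (18 in. and 36 in.) are assumptions for illustration.

    def membership_safe_depth(cover_in: float) -> float:
        """Degree of membership, 0.0-1.0, in the fuzzy set 'adequately buried'."""
        if cover_in <= 18.0:
            return 0.0   # definitely not adequate
        if cover_in >= 36.0:
            return 1.0   # fully adequate
        # Linear ramp between the breakpoints: a partial truth.
        return (cover_in - 18.0) / (36.0 - 18.0)

    for depth in (12, 24, 30, 40):
        print(f"{depth} in. cover -> safe to degree {membership_safe_depth(depth):.2f}")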