WKM Consultancy
www.pipelinerisk.com   
Expert Advice
The first two items link to "Enhanced Pipeline Risk Assessment" Parts 1 and 2. These papers contain updated algorithms that will be part of the 4th edition of the "Pipeline Risk Management Manual" by W. Kent Muhlbauer. For more information on the book, please click on the "4th Edition" button to the left.

The following transcript/notes are from a recent address to an industry group combined with excerpts from the paper, "Lessons Learned." Both were authored by W. Kent Muhlbauer and are available in the "Downloads" section of the "Resources" page (click on the button to the left to go there).
An Overview of the Risk Management Process
By now everyone is probably familiar with the most common definition of risk:

Risk = Probability x Consequences
(R = P x C)


I always like to start with this relationship because it helps us develop a mindset that there are really two large components of risk—probability of failure and consequences of failure.

The new integrity management (IM) rule subtly employs this probability-consequence relationship. Integrity management is really an attack on probability of failure, conducted in high consequence areas. Anything that threatens integrity increases probability of failure and hence, risk. And then we have the high consequence area (HCA) component. Any failure in an HCA will probably be more significant than one in a non-HCA and hence riskier. We are to apply the highest IM standards in the HCAs. This is classic risk management.

Risk Mitigation = IM x HCA
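
To make the arithmetic concrete, here is a minimal sketch only; the segment names, probabilities, and consequence scores are invented for illustration:

# Hypothetical segments: (name, annual probability of failure, consequence score, in an HCA?)
segments = [
    ("Segment A", 1e-3, 40, True),
    ("Segment B", 5e-4, 10, False),
    ("Segment C", 2e-3, 60, True),
]

for name, prob, cons, in_hca in segments:
    risk = prob * cons  # R = P x C
    focus = "highest IM standards (HCA)" if in_hca else "standard program"
    print(f"{name}: risk = {risk:.1e}  -> {focus}")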


Embrace Paranoia

I think the best way to begin risk management is to embrace paranoia. There are forces at work at this very instant trying to break our pipelines. Let's list some of them: internal pressure creating stress in the wall and tempting microscopic flaws to grow, a corrosive environment trying to eat away steel, water infiltrating and deteriorating the coatings, Joe Contractor firing up his backhoe, earth movements, erosion, off-spec products, fatigue, microbes, etc. We know of all these threats. We constantly inject energy to offset these forces, to keep the system intact. It's been said, "Mother Nature hates things she didn't create." It's useful to embrace this paranoia because it makes us recognize that a pressurized pipeline is not a natural thing in the world. It won't continue to exist by itself. That's an important mindset. We must be vigilant and work to offset the natural forces that are constantly trying to make it go away.

What Risk Assessment Can and Cannot Do

An important part of this vigilance is risk assessment. There is no universally accepted way to assess risks from a pipeline.

It is important to recognize what a risk assessment can and cannot do, regardless of the methodology employed. The ability to predict pipeline failures (when and where they will occur) would obviously be a great advantage in reducing risk. Unfortunately, this cannot be done at present. Pipeline accidents are relatively rare and often involve the simultaneous failure of several safety provisions. This makes accurate failure predictions almost impossible. So, modern risk assessment methodologies provide a surrogate for such predictions. Assessment efforts by pipeline operating companies are normally NOT attempts to predict how many failures will occur or where the next failure will occur. Rather, efforts are designed to systematically and objectively capture everything that can be known about the pipelines and their environments, put this information into a risk context, and then use it to make better decisions.

Risk Management

So, given what risk assessment can and cannot do, the objectives of risk management are to:

* Increase Understanding
o Decision support tool
o Resource allocation tool

* Reduce Risks
* Reduce Costs


Don’t Fear Data

Good risk assessment requires data. Megabytes of expensive, resource-consuming data are routinely gathered on pipeline systems. One of the great tragedies is not making full use of this. Full use means using this data in the context of other data and continuing to use it until it is refreshed/replaced with newer information. Not looking at it, filing it away, and forgetting about it (unless an incident happens) is not making good use of data. Today’s computers make large databases manageable and cost effective, so it no longer costs a premium to get to the details.

Let’s look at IM data requirements:

These are some specific data requirements (on a segment-specific basis) listed in the regulation (195.452 (e) and Appendix C). One of the supporting documents for the new reg says, "Through this required program, hazardous liquid operators will comprehensively evaluate the entire range of threats to each pipeline segment’s integrity by analyzing all available information about the pipeline segment and consequences of a failure on a high consequence area."

Good risk assessment can and should use large quantities of data. It's not only the use of individual pieces of data, it's also the way in which we combine them. Combining all the details reveals things that would otherwise be obscured. But this is really a fairly straightforward thing to do. And, as noted before, in today's information age, it's more and more cost-effective.

So, let's talk a bit about risk assessment and the processes behind it.

Familiarization with the Building Blocks

It’s useful to become familiar with the building blocks of risk assessment. I view scenarios, event trees, and fault trees as the core building blocks of any risk assessment. They are NOT, however, risk assessments themselves. Rather, they are tools that we use to capture our understanding of sequences that lead to failures. They form a basis for a risk model. They aren’t risk models themselves, in my opinion, because they do not pass the risk model tests that I will propose in a moment. HAZOPS and FMEA are also very useful tools especially when you extend your risk assessments to surface facilities like tank farms and pump stations. But again, these are tools (components) of a complete risk model.

A segmenting strategy will be required. The two main approaches are fixed interval segments (which includes strategies like every mile, between valve sites, etc.) and dynamic segmentation, where a new segment is created whenever a risk variable changes. I used to prefer the fixed interval approach, but with today’s computing environment, I now prefer the dynamic approach. There are tradeoffs, but I like the fact that with dynamic segments, you get iso-risk segments (segments of equal risk). Each segment is unique from its neighbors and you avoid the compromises of having to use average or worst case conditions, and the difficulties in getting cumulative risks. I find that risk management is much cleaner when you’ve created these "constant risk" segments.
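
Here is a minimal sketch of the dynamic-segmentation idea, assuming each risk variable is stored as a list of (start-station, value) breakpoints; the variables and stationing below are hypothetical. A new segment begins wherever any variable changes, so conditions are constant within each segment:

# Hypothetical risk variables as (start_station_in_miles, value) breakpoints along the line.
soil_type   = [(0.0, "clay"), (3.2, "sand")]
coating     = [(0.0, "FBE"), (5.0, "coal tar")]
class_loc   = [(0.0, 1), (2.1, 3), (6.4, 1)]
end_of_line = 8.0

def dynamic_segments(variables, end):
    """Create a new segment wherever ANY variable changes value."""
    breaks = sorted({start for var in variables for start, _ in var} | {end})
    return list(zip(breaks[:-1], breaks[1:]))  # (from_station, to_station) pairs

for start, stop in dynamic_segments([soil_type, coating, class_loc], end_of_line):
    print(f"segment {start:.1f} - {stop:.1f} mi: constant conditions throughout")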

Handling Uncertainty

As noted earlier, we all know the threats. We understand the mechanisms underlying the threats. We know the options in mitigating the threats. But in knowing these things, we also must know the uncertainty involved—we can't know and control enough of the details to eliminate risk. Just as in weather prediction, at any point in time there are thousands of forces acting on a pipeline, the magnitudes of which are "unknown and unknowable." (We could easily digress into chaos theory and entropy here.)

It is important to decide early on how to deal with uncertainty in assessing risks. My advice is simple: assume "guilty until proven innocent." Assume the worst until data shows otherwise. This is not only consistent with the conservatism we engineers are taught to use, it also makes very good political sense. I can illustrate why. There are two ways to be wrong in any part of a risk assessment.

* "Call it good when it's really bad"
* "Call it bad when it's really good"


Let's look at the worst thing that happens in either case:

When you call something "bad," it shows up on your "radar screen." You can investigate, find that it's really "good," and correct the data. You’ve spent some resources.

In the other case, you've already called it "good." There's no incentive to go check. You won't be looking for a problem, so you won't find the error until an incident occurs or an outside auditor finds it. At that point, the error is often made public with accompanying suspicions that the rest of the model cannot be trusted and that the company is assuming things are okay, which leads to a general loss of credibility.

I know this is tough to do at times. You’re penalizing a lot of pipe because of the slight chance that some areas have become bad since you last checked. Nevertheless, in my experience, this is the way to go.

It is also important that a risk assessment identify the role of uncertainty in its use of inspection data. Information should have a "life span," reflecting that conditions are always changing and recent information is more useful than older information. Eventually, aged information has little value at all in the risk analysis. This applies to inspections, surveys, etc.
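
These two rules (assume the worst for unknowns, and discount information as it ages) are easy to sketch in code. The sketch below is an illustration only; the worst-case score of 10 and the two-year half-life are assumed values, not recommendations from the text:

WORST_CASE = 10.0       # assumed worst-case score for any unknown condition ("guilty until proven innocent")
HALF_LIFE_YEARS = 2.0   # assumed "life span" of an inspection; illustrative only

def effective_score(inspection_score, years_since_inspection):
    """Blend inspection evidence toward the worst case as the data ages."""
    if inspection_score is None:  # a "don't know" response: assume the worst
        return WORST_CASE
    credibility = 0.5 ** (years_since_inspection / HALF_LIFE_YEARS)
    return credibility * inspection_score + (1 - credibility) * WORST_CASE

print(effective_score(2.0, 0))    # fresh survey: score stands on its own
print(effective_score(2.0, 6))    # old survey: score has drifted toward the worst case
print(effective_score(None, 0))   # no data: worst case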

I’m proposing that any risk assessment methodology be able to pass the following four tests.

Performing Risk Assessment—Four Tests

1) "I didn’t know that" test
2) "Why is that?" test
3) "Point to a map" test
4) "What about ___?" test


"I Didn’t Know That!" Test: New Knowledge

The risk model should be able to do more than you can do in your head or even with your experts gathered. Most humans can simultaneously consider a handful of factors in making a decision. While the real world situation might be influenced by dozens of variables simultaneously, your model should be able to simultaneously consider dozens or even hundreds of pieces of information.

The model should tell you things you didn’t already know. As a matter of fact, I’ll go so far as to say that if there aren’t some surprises in the assessment results, I would be suspicious of the model’s completeness. Naturally, when given a surprise, you should then be skeptical, and need to be convinced. That helps to validate your model and leads to the next points:

"Why is That?": Drill Down

So let's say that the new knowledge is that your line XYZ in Barker County is high risk. You say, "What?! Why is that high risk?" You should be skeptical, by the way. The model should be able to tell you its reasons: it's because there are coincident occurrences of population density, a vulnerable aquifer, and state park lands, coupled with 5 years since a close interval survey, no ILI, high stress levels, and questionable coating condition, which create a riskier-than-normal situation. And you say, "Well, okay, looking at all that, it makes sense."
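
Supporting that kind of drill-down mostly means retaining each variable's contribution behind the segment's total score, so the model can report its reasons on demand. A hypothetical sketch (the factor names and scores are invented):

# Hypothetical per-segment detail retained behind the summary risk score.
segment_xyz = {
    "population density": 8, "aquifer vulnerability": 9, "state park lands": 7,
    "years since CIS": 5, "no ILI": 9, "stress level": 8, "coating condition": 7,
}

def drill_down(detail, top_n=3):
    """Return the largest contributors behind a segment's risk score."""
    return sorted(detail.items(), key=lambda kv: kv[1], reverse=True)[:top_n]

print("Total score:", sum(segment_xyz.values()))
for factor, score in drill_down(segment_xyz):
    print(f"  driver: {factor} (score {score})")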

Point to a Map: Know the Risk

This test is often overlooked. Basically, it means that you should be able to pull out a map of your system, put your finger on any point along the pipeline, and determine the risk at that point—either relative or absolute. Furthermore, you should be able to find out specifically the corrosion risk, the third-party risk, the types of receptors, the spill volume, etc. This may seem an obvious thing for a risk assessment to do, but you'd be surprised how many cannot do this. Some have pre-determined their risk areas so they know little about other areas (and one must wonder about this pre-determination). Others do not retain information specific to a given location. Others don't roll up risks into summary judgments. The risk information should be a characteristic of the pipeline at all points.

"What About ___?": A Measure of Completeness

Someone should be able to query the model on any aspect of risk, such as "What about subsidence risk? What about stress corrosion cracking?" Make sure all the probability issues are addressed. All known failure modes should be considered, even if they are very rare for your particular system. You never know when you will be comparing your system against one that has that failure mode.

Make sure that the very complex consequence potential is assessed in a way that you want and need. Are all receptors and receptor sensitivities addressed; spill sizes; leak detection; emergency response; product characteristics? My favorite way to look at consequences is the product of four factors: spill x spread x receptors x product hazard. If any of these goes to zero, then there are no consequences, no matter how bad the other three are. It seems to me that any complete consequence evaluation will consider at least these four variables.
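
A minimal sketch of that four-factor view, using invented unitless scores; the only point being demonstrated is that a zero in any factor zeroes the consequence:

def consequence(spill, spread, receptors, product_hazard):
    """Consequence as the product of four factors; any zero eliminates the consequence."""
    return spill * spread * receptors * product_hazard

print(consequence(spill=5, spread=3, receptors=4, product_hazard=2))  # 120
print(consequence(spill=5, spread=3, receptors=0, product_hazard=2))  # 0: no receptors, no consequence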

Bonus Capabilities

Link to Absolute Risk

There are advantages to relative risk approaches and absolute risk approaches. The best of both worlds (in my opinion) would be where we use the relative scores day-to-day, but easily link to absolute numbers when needed. Our preliminary work indicates that the relationship between an absolute failure probability scale and a relative scale will be defined by some curve, asymptotic to at least one axis, either beginning flat or beginning steep.

A good scoring model should show that at one end of the scale, you've got a pipeline with no safety provisions, operated in the most hostile environment—failure is imminent. At the other end of the scale, you've got the "bullet proof" version—buried 20 ft deep, quadruple heavy wall, fracture-resistant, corrosion-proof metal with secondary containment, fenced and guarded ROW 24 hrs, daily integrity verification, etc.—virtually no chance of failure. We understand these end regions better than the middle ground. Either a steep or a shallow beginning has a reasonable logical basis. The middle region will be the most critical, and that's where more data will strengthen our ability to finalize a curve.
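
Purely as an illustration of what such a link might look like (the exponential shape and both anchor probabilities below are assumptions, not calibrated results), one candidate curve pins the two well-understood end regions and interpolates between them:

import math

# Placeholder calibration: relative score 0 = worst-case pipeline, 100 = "bullet proof".
P_WORST = 1.0e-1   # assumed annual failure probability at relative score 0
P_BEST  = 1.0e-7   # assumed annual failure probability at relative score 100

def absolute_probability(relative_score):
    """Map a 0-100 relative score to an annual failure probability (assumed exponential curve)."""
    k = math.log(P_WORST / P_BEST) / 100.0
    return P_WORST * math.exp(-k * relative_score)

for score in (0, 25, 50, 75, 100):
    print(f"relative score {score:3d} -> ~{absolute_probability(score):.1e} failures/yr")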

We’re bound to have misjudged some of the variable importance in our thinking. Only time will tell. But, even if the quantification of the risk factors is imperfect, the results nonetheless will usually give a reliable picture of places where risks are relatively lower (fewer "bad" factors present) and where they are relatively higher (more "bad" factors are present).

Data Collection

As noted earlier, data is essential to good risk assessment. So, it’s important to have a clear understanding of the data collection process:

What Will the Data Represent?

The data is to represent the sum of our knowledge of the pipeline section, relative to all other sections.

How Will the Values Be Obtained?

How will you be collecting your data? Will you have one person traveling to every region or several people in different offices? Team assessments? Will you have a hard proof requirement or will you accept opinion data? What will be the maximum penalty for any responses of "don’t know?" Be sure you know the penalties for your workers and that they understand the impact those answers have on the overall process.

What Sources of Variation Exist?

Sources of variation:

1) Pipeline environment
2) Pipeline operations
3) Amount of information available
4) Evaluator-to-evaluator differences
5) Day-to-day variation in a single evaluator
6) "Signal-to-noise" ratio


Why is the Data Being Collected?

"Why" should tie back to mission statement of whole program:

* Identify hot spots,
* Check regulatory compliance,
* Model resource allocation,
* Prioritize maintenance, and
* Create an operating discipline.


Think Organic

I think it’s useful to keep the "nature theme" going by envisioning this as an organic process. There will be new inspection data, new inspection techniques, new statistical data sets to help determine weightings, missed risk indicators, new operating disciplines, etc. These will need to be constantly incorporated. Plan for the most flexible environment possible. Make changes easy to incorporate.

The organic model also serves to anchor another basic concept--risk is never completely in our control, but the more we know about it and the more attention we pay to it, the more predictable it will be.

So, we can say that the ideal risk assessment model will have these characteristics:


* Simple/easy to understand—should be able to show a layman how risks are determined;
* Comprehensive—capturing not only statistical data, but experience and judgment when statistical data paints an incomplete picture;
* Accurate—even though I said earlier that a risk assessment can’t predict where accidents will occur, if the risk model is constantly pointing "here" when leaks are happening "there," then something is wrong;
* Expandable—new info, new techniques, new statistical relationships; and
* Cheap—cost of model should never outweigh its benefit.


A couple of quick comments about pitfalls in this process: Pipeline risk assessment is still relatively new, so there are very few roadmaps to follow out there. It’s a bit like the early days of GIS; there is great potential for over-promising and under-delivery if you’re trying to purchase a turnkey solution.

Doing the Right Thing

It’s a bit concerning when an operator asks, "What will I need to pass an audit?" The whole point is that as an operator you know more than the regulator does. I submit that if you are doing the right thing, getting the information you need to make good decisions and using it intelligently, regulators will be satisfied. The emphasis should be on good risk management, rather than on compliance. Set up the right model to help make better decisions.

Study Results

Which leads to this point: USE those results. This might seem obvious, but it is surprising how many people really don't appreciate what they have available after completing a thorough risk assessment. Remember that your final risk numbers should be completely meaningful in a practical, real-world sense. They should represent everything you know about that piece of pipe—all your years of experience, all the statistical data you can gather, all your gut feelings, all your sophisticated engineering calculations. If you can't really believe your numbers, something is wrong with the model. When, after careful evaluation and much experience, you can really believe the numbers, you will find many ways to use them that you perhaps did not foresee.

They can:
* Design an operating discipline,
* Assist in route selection,
* Optimize spending,
* Strengthen project evaluation,
* Determine project prioritization,
* Determine resource allocation,
* Ensure regulatory compliance,
* Etc.


Resource allocation

For example, one of the most powerful uses is in resource allocation—how are you using your people, your money, and your equipment to minimize risks? Your risk model should play a key role in this.

Closing points to ponder

A bit of paranoia can be good—remember, a pressurized pipeline is not a natural thing; there are forces at work trying to make it go away.

Strategize and conceptualize—this doesn’t mean complexity. Complexity is not necessary and might even be a danger sign. It does mean that you need to know where you’re going in order to best design your risk systems.

It’s a Project

Think of the risk assessment and risk management as a project that requires all the elements of a good project execution, including conceptualization, design, construction, documentation, and training. These elements are all necessary to ensure a good result.

Keep an open mind to the possibilities that this new knowledge can bring. You're still using the same information you've been using for years, but now it's being put together in a way that turns information into knowledge.

Risk Assessment as a Measurement Tool

In creating a risk assessment system, you have in effect created a measurement tool. As with any measurement tool, it must have a suitable "signal-to-noise ratio." This means that the "noise," the amount of uncertainty in the measurement (due to numerous causes), must be low enough so that the "signal," the risk value of interest, can be read. In the case of pipeline risk, some sources of "noise" that must be dealt with include:

* Varying static conditions along a pipeline and between compared pipelines--different soils, vegetation, temperatures, pipe materials, pipe sizes, operating practices, etc.,
* Varying dynamic conditions--activities of people, presence of people, weather events, stress conditions, soil moisture content, etc.,
* High level of uncertainty associated with the modeling of phenomena such as dispersion and explosion,
* Small amounts of statistical data from which to predict event frequencies,
* Large numbers of variables that can contribute to risk changes and which are often confounded with each other,
* Etc.


These and other considerations limit the ability of the risk assessment tool to distinguish real changes in risk level from changes that do not necessarily contribute to risk. We should be careful not to think we can find a 2% change in risk with a tool that is only sensitive to +/- 10%.
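
That caution can be stated as a simple rule: treat a change in measured risk as real only if it exceeds what the tool's noise alone could produce. A sketch, using the +/-10% sensitivity from the example above as an assumed value:

NOISE_FRACTION = 0.10   # assumed tool sensitivity of +/-10%

def is_real_change(old_score, new_score, noise=NOISE_FRACTION):
    """Flag a change only if it exceeds the measurement noise band."""
    return abs(new_score - old_score) > noise * old_score

print(is_real_change(100, 102))   # False: a 2% shift is lost in the noise
print(is_real_change(100, 115))   # True: a 15% shift stands out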

This is similar to the "accuracy" of the model, but involves additional considerations that surround the high level of uncertainty associated with risk management. However, it would not be reasonable to assume that this tool cannot be continuously improved. Improvement opportunities should be constantly sought.

Managing Your Risk Assessment Project

A useful way to view this process is a direct analogy with new pipeline construction. In either case, a certain discipline is required. As with new construction, failures in risk modeling occur through inappropriate expectations and poor planning, while success happens through thoughtful planning and management.

Below, the project phases of a pipeline construction are compared to a risk assessment effort.

I. Conceptualization and Scope-Creation Phase

Pipeline: Determine the objective, the needed capacity, the delivery parameters and schedule.

Risk Assessment: Several questions to the pipeline operator may better focus the effort and direct the choice of a formal risk assessment technique:

* What data do you have?
* What is your confidence in the predictive value of the data?
* What are the resource demands (and availability) in terms of costs, man-hours, and time to set up and maintain a risk model?
* What benefits do you expect to accrue, in terms of cost savings, reduced regulatory burdens, improved public support, and operational efficiency?


Subsequent defining questions might include: What portions of your system are to be evaluated? Pipeline only? Tanks? Stations? Valve sites? Mainlines? Branch lines? Distribution systems? Gathering systems? Onshore/offshore? To what level of detail? Estimate the uses for the model, then add a margin of safety, because there will be unanticipated uses. Develop a schedule and set milestones to measure progress.

II. Route Selection/ROW Acquisition

Pipeline: Determine the optimum routing, begin the process of acquiring needed ROW.

Risk Assessment: Determine the optimum location for the model and expertise: Centrally done from corporate headquarters? Field offices maintain and use information? Unlike the pipeline construction analogy, this aspect is readily changed at any point in the process and does not have to be finally decided at this early stage of the project.

III. Design

Pipeline: Perform detailed design and hydraulic calculations; specify equipment, control systems, and materials.

Risk Assessment: The heart of the risk assessment will be the model or algorithm--that component which takes raw information such as wall thickness, population density, soil type, etc. and turns it into risk information. Successful risk modeling involves balancing various competing issues:

* Identifying an exhaustive list of contributing factors vs choosing the critical few to incorporate in a model (complex vs simple),
* “Hard” data vs engineering judgement (how to incorporate widely-held beliefs which do not have supporting statistical data),
* Uncertainty vs statistics (how much reliance to place on predictive power of limited data), and
* Flexibility vs situation-specific model (ability to use same model for variety of products, geographical locations, facility types, etc.).


It is important that ALL risk variables be considered, even if only to conclude that certain variables will not be included in the final model. In fact, many variables will not be included when such variables do not add significant value but reduce the usability of the model. These "use or don't use" decisions should be done carefully and with full understanding of the role of the variables in the risk picture. Note that many simplifying assumptions are often made, especially in complex phenomena like dispersion modeling, fire and explosion potentials, etc., in order to make the risk model easy to use and still relatively robust.

Both probability variables and consequence variables are examined in most formal risk models. This is consistent with the most widely accepted definition of risk:

(event risk) = (event probability) x (event consequence)


IV. Material Procurement

Pipeline: Identify long-delivery items, prepare specifications, determine delivery and quality control processes.

Risk Assessment: Identify data needs that will take the longest to obtain and begin those efforts immediately. Identify data formats and level of detail. Take steps to minimize subjectivity in data collection. Prepare data collection forms or formats and train data collectors to ensure consistency.

V. Construction

Pipeline: Determine number of construction spreads, material staging, critical path schedule, inspection protocols.

Risk Assessment: Form the data collection team(s); clearly define roles and responsibilities; create a critical path schedule to ensure timely data acquisition; schedule milestones; take steps to ensure quality assurance/quality control.

VI. Commissioning

Pipeline: Testing of all components, startup programs completed.

Risk Assessment: Use statistical analysis techniques to partially validate model results on a numerical basis. Perform a sensitivity analysis and some trial "what-ifs" to ensure that model results are believable and consistent (a small sensitivity sketch follows the checklist below). Ideally, the risk assessment characteristics were specified earlier, in the design and concept phases of the project, but this is a final place to check to ensure that:

* All failure modes are considered,
* All risk elements are considered and the most critical ones are included,
* Failure modes are considered independently as well as in aggregate,
* All available information is being appropriately utilized,
* Provisions exist for regular updates of information, including new types of data,
* Consequence factors are separable from probability factors,
* Weightings, or other methods to recognize relative importance of factors, are established,
* The rationale behind weightings is well documented and consistent,
* A sensitivity analysis has been performed,
* The model reacts appropriately to failures of any type,
* Risk elements are combined appropriately ("and" vs "or" combinations),
* Steps are taken to ensure consistency of evaluation,
* Risk assessment results form a reasonable statistical distribution (outliers?),
* There is adequate discrimination in the measured results (signal-to-noise ratio), and
* Comparisons can be made against fixed or floating "standards" or benchmarks.
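
For the sensitivity-analysis item, one common approach is to perturb each input one at a time and confirm the total score reacts in the expected direction and by a sensible amount. The weights and scoring function below are hypothetical, not a recommended model:

# Hypothetical linear scoring function and weights, for illustration only.
weights = {"third_party": 0.30, "corrosion": 0.30, "design": 0.20, "incorrect_ops": 0.20}
base_inputs = {"third_party": 6.0, "corrosion": 4.0, "design": 2.0, "incorrect_ops": 3.0}

def score(inputs):
    return sum(weights[k] * v for k, v in inputs.items())

base = score(base_inputs)
for name in base_inputs:
    perturbed = dict(base_inputs, **{name: base_inputs[name] * 1.10})  # +10% to one input at a time
    print(f"{name}: {100 * (score(perturbed) - base) / base:+.1f}% change in total score")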


VII. Project Completion

Pipeline: Finalize manuals, complete training, ensure maintenance protocols are in place, turn system over to operations.

Risk Assessment: Carefully document the risk assessment process and all sub-processes, especially the detailed workings of the algorithm or central model.

Set up administrative processes to support the ongoing program. Refer to the DOT Risk Management Demonstration Program control documents for details on aspects of a good administrative program, including:

* Assigning responsibilities,
* Measuring improvement,
* Re-visiting processes,
* Management of change,
* Etc.

Moving From Risk Assessment to Risk Management (Philosophy): Part I

The use of a formal risk assessment tool or process allows one to make more consistent and hopefully "better" decisions regarding risk management. Managing the risks implies that the manager has some pre-determined notion of where the risk levels should be. Realistically though, it is only in rare cases that risk levels will be clearly acceptable or unacceptable. In such cases, that determination was probably apparent even before a formal risk assessment was performed. In the majority of cases, risks will be acceptable or unacceptable (or tolerable/intolerable) only in context with other comparable risks and with the rewards of the undertaking. So, rather than establishing fixed acceptability levels for risk, the risk manager is normally attempting to optimize his resources in a way that produces the optimum risk-reward scenario. Therefore, risk managers should be prepared for more challenging decision-making environments with fewer opportunities for setting absolute limits.

Moving From Risk Assessment to Risk Management (Philosophy): Part II

In some sense, we have near-complete control of the risk. We can spend nothing on preventing accidents or we can spend enormous sums of money over-designing facilities, employing an army of inspectors, and routinely shutting down lines for preventive maintenance and replacement. Pragmatically, operators spending too little on preventing accidents will be out of business from regulatory intervention or from the cost of accidents. On the other hand, if an operator spends too much on accident prevention, he can be driven out of business by the competition--even if, perhaps, that competition has more accidents!

Risk management, to a large extent, revolves around the central process of making choices in the design and day-to-day operations of a pipeline system. Many of these choices are mandated by regulations whereas others are economically (budget) constrained. By assigning a cost to pipeline accidents (a sometimes difficult and controversial thing to do) and including this in the cost of operations, the optimum balance point is the lowest cost of operations.
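
A toy illustration of that balance point (the cost figures and the shape of the accident-cost curve are invented): as prevention spending rises, the expected cost of accidents falls, and the optimum is simply the minimum of the total:

# Invented cost model: prevention spending vs. expected accident cost, both in $MM/yr.
candidates_musd = range(0, 21)                 # candidate prevention spending levels
def expected_accident_cost(spend):             # assumed decreasing accident-cost curve
    return 30.0 / (1.0 + spend)

best = min(candidates_musd, key=lambda s: s + expected_accident_cost(s))
print(f"optimum prevention spend ~ ${best}MM/yr, "
      f"total cost ~ ${best + expected_accident_cost(best):.1f}MM/yr")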

No operator will ever have all of the relevant information he needs to guarantee safe operations. There will always be an element of the unknown. Managers must control the "right" risks with limited resources as there will always be limits on the amount of time, manpower, or money to apply to a risk situation. Managers must weigh their decisions carefully in light of what is known and unknown. The deliverable most requested after risk assessment is therefore a "resource allocation model." In such a model, the output of the risk assessment would play a key role in evaluating the benefits of any project or activity. The user would in essence be performing "what-if" scenarios to see the risk level which results after any proposed action.
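
A minimal sketch of such a what-if view, with hypothetical candidate projects: each proposed action is scored by the risk reduction it buys per dollar, which is one simple way the risk assessment output can drive resource allocation:

# Hypothetical candidate projects: (name, risk score reduction, cost in $K).
projects = [
    ("Recoat segment 12",        18.0, 250),
    ("Add ILI run, line XYZ",    35.0, 600),
    ("Depth-of-cover survey",     9.0,  80),
]

# Rank by risk reduction per dollar -- one simple "what-if" view of the options.
for name, reduction, cost in sorted(projects, key=lambda p: p[1] / p[2], reverse=True):
    print(f"{name}: {reduction / cost:.3f} risk points per $K")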

Enhanced Pipeline Risk Assessment: Part I

Scoring or ranking type pipeline risk assessments have served the pipeline industry well for many years. However, risk assessments are being routinely used today in ways that were not common even a few years ago. The new roles of risk assessments have prompted some changes to the way risk algorithms are being designed. The changes lead to more robust risk results that better reflect reality and, fortunately, are readily obtained from data used in previous assessments.

Please click on the title above to view the entire paper.