WKM Consultancy (www.pipelinerisk.com)
The following transcript/notes are from a recent address by W. Kent Muhlbauer to an industry group. It was designed to accompany PowerPoint slides and has been adapted for use here.
An Overview of the Risk Management Process
By now everyone is probably familiar with the most common definition of risk:

Risk = Probability x Consequences
(R = P x C)


I always like to start with this relationship because it helps us develop a mindset that there are really two large components of risk—probability of failure and consequences of failure.
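As a toy illustration of the relationship above (the numbers are hypothetical, not from the talk), two segments with very different probability and consequence profiles can carry the same risk:

```python
# Toy illustration of Risk = Probability x Consequences (hypothetical numbers).
# Probability: chance of failure per mile-year; Consequences: relative damage score.

def risk(probability, consequences):
    """Risk as the product of failure probability and failure consequences."""
    return probability * consequences

# A high-probability/low-consequence segment and a low-probability/
# high-consequence segment can carry the same risk.
print(risk(0.010, 50))   # 0.5
print(risk(0.001, 500))  # 0.5
```

This is why attacking either component (integrity management reduces probability; HCA focus targets consequences) reduces the product.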

The new integrity management (IM) rule subtly employs this probability-consequence relationship. Integrity management is really an attack on probability of failure, conducted in high consequence areas. Anything that threatens integrity increases probability of failure and hence, risk. Then we have the high consequence area (HCA) component. Any failure in an HCA will probably be more significant than one in a non-HCA and hence riskier. We are to apply the highest IM standards in the HCAs. This is classic risk management.

Risk Mitigation = IM x HCA


Embrace Paranoia

I think the best way to begin risk management is to embrace paranoia. There are forces at work at this very instant trying to break our pipelines. Let's list some of them: internal pressure creating stress in the wall and tempting microscopic flaws to grow, a corrosive environment trying to eat away steel, water infiltrating and deteriorating the coatings, Joe Contractor firing up his backhoe, earth movements, erosion, off-spec products, fatigue, microbes, etc. We know of all these threats. We constantly inject energy to offset these forces, to keep the system intact. It’s been said, "Mother Nature hates things she didn’t create." It’s useful to embrace this paranoia because it makes us recognize that a pressurized pipeline is not a natural thing in the world. It won’t continue to exist by itself. That’s an important mindset. We must be vigilant and work to offset the natural forces that are constantly trying to make it go away.

What Risk Assessment Can and Cannot Do

An important part of this vigilance is risk assessment. There is no universally accepted way to assess risks from a pipeline.

It is important to recognize what a risk assessment can and cannot do, regardless of the methodology employed. The ability to predict pipeline failures (when and where they will occur) would obviously be a great advantage in reducing risk. Unfortunately, this cannot be done at present. Pipeline accidents are relatively rare and often involve the simultaneous failure of several safety provisions. This makes accurate failure predictions almost impossible. So, modern risk assessment methodologies provide a surrogate for such predictions. Assessment efforts by pipeline operating companies are normally NOT attempts to predict how many failures will occur or where the next failure will occur. Rather, efforts are designed to systematically and objectively capture everything that can be known about the pipelines and their environments, put this information into a risk context, and then use it to make better decisions.

Risk Management

So, given what risk assessment can and cannot do, the objectives of risk management are to:

* Increase Understanding
    o Decision support tool
    o Resource allocation tool

* Reduce Risks
* Reduce Costs


Don’t Fear Data

Good risk assessment requires data. Megabytes of expensive, resource-consuming data are routinely gathered on pipeline systems. One of the great tragedies is not making full use of this. Full use means using this data in the context of other data and continuing to use it until it is refreshed/replaced with newer information. Not looking at it, filing it away, and forgetting about it (unless an incident happens) is not making good use of data. Today’s computers make large databases manageable and cost effective, so it no longer costs a premium to get to the details.

Let’s look at IM data requirements:

These are some specific data requirements (on a segment-specific basis) listed in the regulation (195.452 (e) and Appendix C). One of the supporting documents for the new reg says, "Through this required program, hazardous liquid operators will comprehensively evaluate the entire range of threats to each pipeline segment’s integrity by analyzing all available information about the pipeline segment and consequences of a failure on a high consequence area."

Good risk assessment can and should use large quantities of data. It's not only the use of individual pieces of data, it's also the way in which we combine them. Combining all the details reveals things that would otherwise be obscured. But this is really a fairly straightforward thing to do. And, as noted before, in today’s information age, it’s more and more cost-effective.

So, let's talk a bit about risk assessment and the processes behind it.

Familiarization with the Building Blocks

It’s useful to become familiar with the building blocks of risk assessment. I view scenarios, event trees, and fault trees as the core building blocks of any risk assessment. They are NOT, however, risk assessments themselves. Rather, they are tools that we use to capture our understanding of sequences that lead to failures. They form a basis for a risk model. They aren’t risk models themselves, in my opinion, because they do not pass the risk model tests that I will propose in a moment. HAZOPS and FMEA are also very useful tools especially when you extend your risk assessments to surface facilities like tank farms and pump stations. But again, these are tools (components) of a complete risk model.

A segmenting strategy will be required. The two main approaches are fixed interval segments (which includes strategies like every mile, between valve sites, etc.) and dynamic segmentation, where a new segment is created whenever a risk variable changes. I used to prefer the fixed interval approach, but with today’s computing environment, I now prefer the dynamic approach. There are tradeoffs, but I like the fact that with dynamic segments, you get iso-risk segments (segments of equal risk). Each segment is unique from its neighbors and you avoid the compromises of having to use average or worst case conditions, and the difficulties in getting cumulative risks. I find that risk management is much cleaner when you’ve created these "constant risk" segments.
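A minimal sketch of what dynamic segmentation does, assuming simple milepost-indexed data (the variable names and values are invented for illustration):

```python
# Sketch of dynamic segmentation (hypothetical data): a new segment begins
# wherever ANY risk variable changes value, so each resulting segment carries
# a single, constant set of risk attributes ("iso-risk" segments).

def dynamic_segments(line_length, variables):
    """variables: dict mapping variable name -> list of (start_milepost, value)
    pairs, sorted by start. Returns a list of (start, end, attrs) segments."""
    # Union of every milepost where some variable changes value.
    breaks = {0.0, line_length}
    for changes in variables.values():
        breaks.update(start for start, _ in changes)
    points = sorted(b for b in breaks if 0.0 <= b <= line_length)

    def value_at(changes, milepost):
        current = changes[0][1]
        for start, value in changes:
            if start <= milepost:
                current = value
        return current

    segments = []
    for start, end in zip(points, points[1:]):
        attrs = {name: value_at(ch, start) for name, ch in variables.items()}
        segments.append((start, end, attrs))
    return segments

# Hypothetical 10-mile line with two risk variables.
segs = dynamic_segments(10.0, {
    "coating":  [(0.0, "good"), (4.0, "fair")],
    "land_use": [(0.0, "rural"), (6.5, "urban")],
})
for seg in segs:
    print(seg)  # three segments: 0-4, 4-6.5, 6.5-10
```

Note how the two change points produce three segments, each unique from its neighbors, with no averaging or worst-casing required within a segment.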

Handling Uncertainty

As noted earlier, we all know the threats. We understand the mechanisms underlying the threats. We know the options in mitigating the threats. But in knowing these things, we also must know the uncertainty involved—we can’t know and control enough of the details to eliminate risk. Just as in weather prediction, at any point in time, there are thousands of forces acting on a pipeline, the magnitude of which are "unknown and unknowable." (We could easily digress into chaos theory and entropy here.)

It is important to decide early on how to deal with uncertainty in assessing risks. My advice is simple: assume "guilty until proven innocent." Assume the worst until data shows otherwise. This is not only consistent with the conservatism we engineers are taught to use, it also makes very good political sense. I can illustrate why. There are two ways to be wrong in any part of a risk assessment.

* "Call it good when it's really bad"
* "Call it bad when it's really good"


Let's look at the worst thing that happens in either case:

When you call something "bad," it shows up on your "radar screen." You can investigate, find that it's really "good," and correct the data. You’ve spent some resources.

In the other case, you’ve already called it "good." There’s no incentive to go check. You won’t be looking for a problem, so you won't find the error until an incident occurs or an outside auditor finds it. At that point, the error is often made public, with accompanying suspicions that the rest of the model cannot be trusted and that the company is assuming things are okay, which leads to a general loss of credibility.

I know this is tough to do at times. You’re penalizing a lot of pipe because of the slight chance that some areas have become bad since you last checked. Nevertheless, in my experience, this is the way to go.
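The "guilty until proven innocent" rule can be sketched as a default in the scoring logic (the 0-10 scale, with 10 as safest, is an assumption of this sketch):

```python
# Sketch of "guilty until proven innocent" scoring (hypothetical 0-10 scale,
# 10 = safest): when a variable is unknown, assign the worst possible score
# so the segment shows up on the radar screen until data proves otherwise.

WORST_SCORE = 0  # assumed worst value on the hypothetical scale

def assessed_score(observed_value):
    """Use the observed score when one exists; default any 'don't know'
    to the worst case rather than an optimistic guess."""
    return observed_value if observed_value is not None else WORST_SCORE

print(assessed_score(7))     # evidence in hand -> use it: 7
print(assessed_score(None))  # unknown -> assume worst: 0
```

The cost of this convention is some "penalized" pipe, as noted above; the benefit is that errors surface as over-conservatism rather than as missed problems.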

It is also important that a risk assessment identify the role of uncertainty in its use of inspection data. Information should have a "life span," reflecting that conditions are always changing and recent information is more useful than older information. Eventually, aged information has little value at all in the risk analysis. This applies to inspections, surveys, etc.
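One way to sketch an information "life span" (the linear decay and the five-year span are illustrative assumptions, not prescribed values):

```python
# Sketch of an information "life span" (assumed linear decay): the credit an
# inspection earns in the risk model shrinks with age, reaching zero once the
# information is older than its assumed useful life.

def inspection_credit(full_credit, age_years, life_span_years):
    """De-rate inspection credit from full value (fresh) down to zero
    (older than its life span). Linear decay is an illustrative choice;
    other decay shapes could be justified."""
    remaining_fraction = max(0.0, 1.0 - age_years / life_span_years)
    return full_credit * remaining_fraction

# A survey worth 10 points when fresh, with an assumed 5-year life span.
print(inspection_credit(10, 0, 5))  # 10.0 -> brand new, full credit
print(inspection_credit(10, 2, 5))  # 6.0  -> partially aged
print(inspection_credit(10, 6, 5))  # 0.0  -> past its life span, no credit
```

Combined with the worst-case convention above, an aged-out inspection simply stops offsetting the assumed threat.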

I’m proposing that any risk assessment methodology be able to pass the following four tests.

Performing Risk Assessment—Four Tests

1) "I didn’t know that" test
2) "Why is that?" test
3) "Point to a map" test
4) "What about ___?" test


"I Didn’t Know That!" Test: New Knowledge

The risk model should be able to do more than you can do in your head or even with your experts gathered. Most humans can simultaneously consider only a handful of factors in making a decision. While the real-world situation might be influenced by dozens of variables at once, your model should be able to consider dozens or even hundreds of pieces of information simultaneously.

The model should tell you things you didn’t already know. As a matter of fact, I’ll go so far as to say that if there aren’t some surprises in the assessment results, I would be suspicious of the model’s completeness. Naturally, when given a surprise, you should then be skeptical, and need to be convinced. That helps to validate your model and leads to the next points:

"Why is That?": Drill Down

So let's say that the new knowledge is that your line XYZ in Barker County is high risk. You say, "What?! Why is that high risk?" You should be skeptical, by the way. The model should be able to tell you its reasons: it's because there are coincident occurrences of population density, a vulnerable aquifer, and state park lands, coupled with 5 years since a close interval survey, no ILI, high stress levels, and questionable coating condition, which create a riskier-than-normal situation. And you say, "Well, okay, looking at all that, it makes sense."

Point to a Map: Know the Risk

This test is often overlooked. Basically, it means that you should be able to pull out a map of your system, put your finger on any point along the pipeline, and determine the risk at that point—either relative or absolute. Furthermore, you should be able to find out specifically the corrosion risk, the third-party risk, the types of receptors, the spill volume, etc. This may seem an obvious thing for a risk assessment to do, but you’d be surprised how many cannot do this. Some have pre-determined their risk areas, so they know little about other areas (and one must wonder about this pre-determination). Others do not retain information specific to a given location. Others don’t roll up risks into summary judgments. The risk information should be a characteristic of the pipeline at all points.

"What About ___?": A Measure of Completeness

Someone should be able to query the model on any aspect of risk, such as: "What about subsidence risk? What about stress corrosion cracking?" Make sure all the probability issues are addressed. All known failure modes should be considered, even if they are very rare for your particular system. You never know when you will be comparing your system against one that has that failure mode.

Make sure that the very complex consequence potential is assessed in a way that you want and need. Are all receptors and receptor sensitivities addressed; spill sizes; leak detection; emergency response; product characteristics? My favorite way to look at consequences is the product of four factors: spill x spread x receptors x product hazard. If any of these goes to zero, then there are no consequences, no matter how bad the other three are. It seems to me that any complete consequence evaluation will consider at least these four variables.
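The four-factor product can be sketched directly (the 0-1 relative scores are an assumption of this sketch, not prescribed by the model):

```python
# Sketch of the four-factor consequence product described above, using
# assumed relative scores on a 0-1 scale for each factor.

def consequence(spill, spread, receptors, product_hazard):
    """Consequence as the product of four factors; if any one factor is
    zero, the overall consequence is zero no matter how bad the others."""
    return spill * spread * receptors * product_hazard

print(consequence(0.8, 0.5, 0.9, 0.7))  # all four factors present
print(consequence(0.0, 1.0, 1.0, 1.0))  # no spill -> no consequence
```

The multiplicative form captures the key property in the text: worst-case receptors and product hazard are irrelevant if nothing is spilled or nothing spreads.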

Bonus Capabilities

Link to Absolute Risk

There are advantages to relative risk approaches and absolute risk approaches. The best of both worlds (in my opinion) would be where we use the relative scores day-to-day, but easily link to absolute numbers when needed. Our preliminary work indicates that the relationship between an absolute failure probability scale and a relative scale will be defined by some curve, asymptotic to at least one axis, either beginning flat or beginning steep.

A good scoring model should show that at one end of the scale, you’ve got a pipeline with no safety provisions, operated in the most hostile environment—failure is imminent. At the other end of the scale, you’ve got the "bullet proof" version—buried 20 ft deep, quadruple heavy wall, fracture-resistant, corrosion-proof metal with secondary containment, fenced and guarded ROW 24 hrs, daily integrity verification, etc.—virtually no chance of failure. We understand the end regions better than the middle ground. Either a steep or a shallow beginning has a reasonable logical basis. The middle region will be the most critical, and that’s where more data will strengthen our ability to finalize a curve.
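One hypothetical shape for the relative-to-absolute link is an exponential curve, asymptotic at the safe end of the scale (the anchor probabilities below are invented for illustration, not calibrated values):

```python
import math

# Illustrative only: one candidate shape for linking a relative score to an
# absolute failure probability is an exponential curve, asymptotic toward
# zero at the safe end. The anchor points are assumptions, not calibrations.

def failure_probability(relative_score, worst_prob=1.0, best_prob=1e-7):
    """Map a 0-100 relative score (100 = safest) onto an annual failure
    probability decaying exponentially from worst_prob to best_prob."""
    decay_rate = math.log(worst_prob / best_prob) / 100.0
    return worst_prob * math.exp(-decay_rate * relative_score)

for score in (0, 25, 50, 75, 100):
    print(score, failure_probability(score))
```

The two anchor points pin down the well-understood end regions; as the text notes, it is the middle of the curve that real failure data would have to refine.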

We’re bound to have misjudged some of the variable importance in our thinking. Only time will tell. But, even if the quantification of the risk factors is imperfect, the results nonetheless will usually give a reliable picture of places where risks are relatively lower (fewer "bad" factors present) and where they are relatively higher (more "bad" factors are present).

Data Collection

As noted earlier, data is essential to good risk assessment. So, it’s important to have a clear understanding of the data collection process:

What Will the Data Represent?

The data is to represent the sum of our knowledge of the pipeline section, relative to all other sections.

How Will the Values Be Obtained?

How will you be collecting your data? Will you have one person traveling to every region or several people in different offices? Team assessments? Will you have a hard proof requirement or will you accept opinion data? What will be the maximum penalty for any responses of "don’t know?" Be sure you know the penalties for your workers and that they understand the impact those answers have on the overall process.

What Sources of Variation Exist?

Sources of variation:

1) Pipeline environment
2) Pipeline operations
3) Amount of information available
4) Evaluator-to-evaluator differences
5) Day-to-day variation in a single evaluator
6) "Signal-to-noise" ratio


Why is the Data Being Collected?

"Why" should tie back to the mission statement of the whole program:

* Identify hot spots,
* Check regulatory compliance,
* Model resource allocation,
* Prioritize maintenance, and
* Create an operating discipline.


Think Organic

I think it’s useful to keep the "nature theme" going by envisioning this as an organic process. There will be new inspection data, new inspection techniques, new statistical data sets to help determine weightings, missed risk indicators, new operating disciplines, etc. These will need to be constantly incorporated. Plan for the most flexible environment possible. Make changes easy to incorporate.

The organic model also serves to anchor another basic concept: risk is never completely in our control, but the more we know about it and the more attention we pay to it, the more predictable it will be.

So, we can say that the ideal risk assessment model will have these characteristics:


* Simple/easy to understand—should be able to show a layman how risks are determined;
* Comprehensive—capturing not only statistical data, but experience and judgment when statistical data paints an incomplete picture;
* Accurate—even though I said earlier that a risk assessment can’t predict where accidents will occur, if the risk model is constantly pointing "here" when leaks are happening "there," then something is wrong;
* Expandable—new info, new techniques, new statistical relationships; and
* Cheap—cost of model should never outweigh its benefit.


A couple of quick comments about pitfalls in this process: Pipeline risk assessment is still relatively new, so there are very few roadmaps to follow out there. It’s a bit like the early days of GIS; there is great potential for over-promising and under-delivery if you’re trying to purchase a turnkey solution.

Doing the Right Thing

It’s a bit concerning when an operator asks, "What will I need to pass an audit?" The whole point is that as an operator you know more than the regulator does. I submit that if you are doing the right thing, getting the information you need to make good decisions and using it intelligently, regulators will be satisfied. The emphasis should be on good risk management, rather than on compliance. Set up the right model to help make better decisions.

Study Results

Which leads to this point: USE those results. This might seem obvious, but it is surprising how many people really don't appreciate what they have available after completing a thorough risk assessment. Remember that your final risk numbers should be completely meaningful in a practical, real-world sense. They should represent everything you know about that piece of pipe—all your years of experience, all the statistical data you can gather, all your gut feelings, all your sophisticated engineering calculations. If you can't really believe your numbers, something is wrong with the model. When, after careful evaluation and much experience, you can really believe the numbers, you will find many ways to use them that you perhaps did not foresee.

They can:
* Design an operating discipline,
* Assist in route selection,
* Optimize spending,
* Strengthen project evaluation,
* Determine project prioritization,
* Determine resource allocation,
* Ensure regulatory compliance,
* Etc.


Resource allocation

For example, one of the most powerful uses is in resource allocation—how are you using your people, your money, and your equipment to minimize risks? Your risk model should play a key role in this.

Closing points to ponder

A bit of paranoia can be good—remember, a pressurized pipeline is not a natural thing, there are forces at work trying to make it go away.

Strategize and conceptualize—this doesn’t mean complexity. Complexity is not necessary and might even be a danger sign. It does mean that you need to know where you’re going in order to best design your risk systems.

It’s a Project

Think of the risk assessment and risk management as a project that requires all the elements of a good project execution, including conceptualization, design, construction, documentation, and training. These elements are all necessary to ensure a good result.

Keep an open mind to the possibilities that this new knowledge can bring. You’re still using the same information you’ve been using for years, but now it’s being put together in a way that turns information into knowledge.
Moving From Risk Assessment to Risk Management (Philosophy)
The use of a formal risk assessment tool or process allows one to make more consistent and hopefully "better" decisions regarding risk management. Managing the risks implies that the manager has some pre-determined notion of where the risk levels should be. Realistically though, it is only in rare cases that risk levels will be clearly acceptable or unacceptable. In such cases, that determination was probably apparent even before a formal risk assessment was performed. In the majority of cases, risks will be acceptable or unacceptable (or tolerable/intolerable) only in context with other comparable risks and with the rewards of the undertaking. So, rather than establishing fixed acceptability levels for risk, the risk manager is normally attempting to optimize his resources in a way that produces the optimum risk-reward scenario. Therefore, risk managers should be prepared for more challenging decision-making environments with fewer opportunities for setting absolute limits.