The Five Flaws of a Security Risk Assessment

Introduction

The Security Risk Assessment (SRA) is a bit like the blind men describing the elephant. It can mean different things to different people, and one IT professional’s SRA is typically very different from another’s. Partly this is due to the uncertain nature of risk, and the responsibility of the assessor to evaluate events that may never happen. Partly this is due to the rapidly changing nature of the security field. At the beginning of 2018, most would have said that careless insiders were the highest threat to healthcare organizations, only to be proven very wrong by the rash of deliberate, targeted attacks that have characterized the last two years. And while the Office for Civil Rights (OCR) has cited many covered entities and business associates for having no security risk assessment, there has been very little guidance from them on what makes a good one. We wrote this paper to help readers understand the security risk assessment and avoid some common mistakes that can make the SRA useless or ineffective. Some, doubtless, view the SRA as a checkbox exercise. We believe that, just as healthcare providers take a lot of trouble to prevent patients from getting a post-operative infection, they should take some trouble to prevent patients from getting a post-operative identity theft – which can take longer to heal.

Background

At Techumen, we’ve done over 1,500 security risk assessments, for providers of all sizes and kinds. We have also done a large number of security projects that involved reviewing other companies’ security risk assessments – either SRAs that were done by our clients’ personnel, or those that were done by an independent third party. We have been on the wrong end of a few OCR investigations, and we have kept a very close eye on the “wall of shame”. The best way to learn is from someone else’s mistakes, not from making your own. Reading the list of enforcement actions and breach notifications is informative; some of the stories are of very bad luck, and some are of very bad thinking. Below are the five most common mistakes that render an SRA useless, ineffective, or more difficult than it needs to be.

Flaw #1 – The Scope is Incomplete

The SRA is not just about the electronic medical record (EMR) system. This may seem obvious to many readers, but there are covered entities out there whose analysis only addresses the EMR. Even those who are only concerned with achieving MIPS / Meaningful Use must still perform an SRA that addresses all ePHI, in accordance with 45 CFR 164.308(a)(1)(ii)(A), Security Management Process. And when it comes to security, of course, it’s typically what you don’t see that gets you. As a practical matter, in the event of a reportable breach, the first request from OCR will be the entity’s security risk analysis, and the first item they will verify is that the breached asset is included in that SRA. While it’s easy to say “have a complete inventory of everything on your network”, that’s especially challenging in this era of shadow IT and bring your own device (BYOD). Nor should “incidental ePHI” be ignored – the work in progress such as the billing department’s quick-and-dirty spreadsheet, an ad-hoc transcription in Word format, or a physician’s research notes – all of which tend to leak out onto workstations and smart devices. From OCR’s perspective, losing that is just as consequential as losing the full patient chart. Nor should the ePHI stored at Business Associates be overlooked: 40% of the breaches on the wall of shame are the result of a flaw in a Business Associate, not in the Covered Entity.

A complete inventory, like being in good shape, is more of a process than a destination. That said, there are ways to make sure that all assets are included. Web proxy logs, software contracts, and help-desk tickets are good sources of systems that may be overlooked or unknown to central IT. When a user calls asking for support on an asset you’ve never heard of, that’s a sign that your inventory needs updating.
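As an illustration of that kind of cross-check, the short Python sketch below compares a documented ePHI asset inventory against the systems that show up in help-desk tickets and web proxy logs, and flags anything the inventory doesn’t know about. The file names and column names are hypothetical; your ticketing system and proxy will have their own export formats.

```python
import csv

def load_column(path, column):
    """Return the set of values found in one column of a CSV export."""
    with open(path, newline="") as f:
        return {row[column].strip().lower()
                for row in csv.DictReader(f) if row.get(column)}

# Hypothetical exports: adjust paths and column names to your own tools.
inventory     = load_column("asset_inventory.csv", "hostname")    # the "official" ePHI inventory
ticket_assets = load_column("helpdesk_tickets.csv", "asset")      # assets users asked for help with
proxy_hosts   = load_column("proxy_log_summary.csv", "dest_host") # systems seen in proxy traffic

# Anything users touch, or traffic reaches, that the inventory doesn't know
# about is a candidate for shadow IT and needs to be scoped into the next SRA.
unknown = (ticket_assets | proxy_hosts) - inventory
for host in sorted(unknown):
    print(f"Not in inventory: {host}")
```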

Flaw #2 – The Analysis has no Methodology

If you have the pleasure of hosting OCR, their second question, after “Show me your risk analysis”, will be “What methodology did you use?” Some consultants refer to a “proprietary methodology”, which is not going to satisfy OCR, nor will it help you understand what the consultant did, nor how to respond to that consultant’s findings. While calling a “proprietary methodology” a “made-up methodology” may be a little harsh, it’s often not inaccurate. Nor does citing a methodology mean that the methodology was actually followed; for instance, an SRA that claims to be based on NIST Special Publication 800-30, Rev. 1 had better include threat sources and threat events. A good practice is to use the same methodology for the risk assessments of new systems that you use for the annual risk analysis. This means that new systems, and the results of their SRAs, can be entered directly into the risk tracking tool, rather than in one giant update during the annual SRA, which is both time-consuming and prone to error, since you’re doing the same assessment twice.
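To make that concrete, here is a minimal sketch of what a single risk-register entry might capture if you follow NIST SP 800-30 Rev. 1’s elements of threat source, threat event, vulnerability, likelihood, and impact. The field names and the example entry are our own illustration, not a schema defined by NIST or by OCR.

```python
from dataclasses import dataclass, field
from datetime import date

# An illustrative risk-register record. The field names are our own shorthand;
# NIST SP 800-30 Rev. 1 describes threat sources, threat events, vulnerabilities,
# likelihood, and impact rather than prescribing a fixed schema.
@dataclass
class RiskEntry:
    asset: str            # e.g. "EMR", "billing file share"
    threat_source: str    # e.g. "organized crime", "careless insider"
    threat_event: str     # e.g. "phishing leads to ransomware"
    vulnerability: str    # e.g. "no MFA on remote access"
    likelihood: str       # "High" / "Medium" / "Low", per your own definitions
    impact: str           # "High" / "Medium" / "Low", per your own definitions
    assessed_on: date = field(default_factory=date.today)

# Because a new system's assessment uses the same structure as the annual SRA,
# its results can be appended to the same register as soon as it is completed.
register: list[RiskEntry] = []
register.append(RiskEntry("telehealth platform", "cybercriminal",
                          "credential stuffing against patient portal",
                          "password reuse, no MFA", "Medium", "High"))
```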

Flaw #3 – The Analysis is Qualitative, not Quantitative

Of course, just because you follow a methodology doesn’t mean it’s a good one. Even NIST 800-30, which is as close to recommended as CMS can make it, can be misused. Oftentimes the risk ratings are set by “gut feel”, which is a fancier way of saying “made up”. A High, Medium, or Low risk, or a Risk Rating of 1 through 5, or whatever risk scale you use, should mean something definite. Most importantly, Person A’s high risk should mean the same thing as Person B’s high risk, and a high risk in 2017 should mean the same as a high risk in 2020. While this consistency is not itself a compliance requirement, it does help avoid endless, repetitive arguments over what exactly counts as a “Low Impact”. If your methodology uses the formula Risk = Likelihood × Impact, you should also have definitions of High, Medium, and Low likelihood and impact. Historical information is very helpful here, and can focus the analysis wonderfully: if an event hasn’t happened in 20 years, it shouldn’t be scored as “high likelihood”, regardless of what the assessment team feels like. It can be very useful to put a timescale on likelihood; a High likelihood risk may happen about once a year, a Medium likelihood risk once every one to five years, and a Low likelihood risk once every five to twenty years.
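One way to keep the scale definite is to write the definitions down and score against them mechanically. The sketch below encodes Risk = Likelihood × Impact using the time-based likelihood definitions discussed above; the numeric values and the band cut-offs are illustrative choices, not a standard.

```python
# Illustrative scoring for Risk = Likelihood x Impact. The numeric mapping,
# the time-based likelihood definitions, and the band cut-offs are examples
# of explicit definitions, not values mandated by NIST or OCR.

LIKELIHOOD = {   # expected frequency of the threat event
    "High": 3,   # roughly once a year or more often
    "Medium": 2, # roughly once every one to five years
    "Low": 1,    # roughly once every five to twenty years
}

IMPACT = {       # consequence if the event occurs, per your own definitions
    "High": 3,   # e.g. reportable breach, major operational disruption
    "Medium": 2, # e.g. limited exposure, recoverable within days
    "Low": 1,    # e.g. negligible exposure, routine cleanup
}

def risk_score(likelihood: str, impact: str) -> tuple[int, str]:
    """Return the numeric score and the risk band for one threat scenario."""
    score = LIKELIHOOD[likelihood] * IMPACT[impact]
    band = "High" if score >= 6 else "Medium" if score >= 3 else "Low"
    return score, band

print(risk_score("Medium", "High"))  # (6, 'High')
```

With definitions like these written down, Person A and Person B scoring the same scenario in different years should land on the same rating.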

Flaw #4 – The Analysis is Not Repeated

The risk analysis you performed in 2005, when the HIPAA Security Rule took effect, is not going to be much use in 2020. OCR’s guidance, not quite yet a full-fledged recommendation, is for an annual update. As noted under Flaw #2, a good practice is to use the same methodology to assess new systems that you use to perform the annual SRA; this will make the annual SRA much easier. Ideally, your risk assessments will show improvement over time, as High risks become Mediums and Mediums become Lows. That will be difficult to accomplish if your SRA contains the fifth and worst flaw…

Flaw #5 – The Analysis Produces Bad Recommendations

You’ve done a fantastic job identifying, assessing, and tracking your risks. Now what are you going to do about them? Whatever you recommend, at least make sure that your organization can accomplish it. Don’t recommend something that requires $3 million of budget, especially if your organization is short on cash. Don’t recommend something labor-intensive if you don’t have the people. A “pretty good” recommendation that is fully implemented is better than a perfect solution that never gets done.

Once you identify these recommendations, management should formally respond to each of them: by agreeing with the recommendation, by modifying it to something more achievable, or by rejecting it and accepting the risk. The IT department should not be the sole driver of risk management decisions, as not all security problems are IT problems, or require a new system. When the recommendations are finalized and agreed upon, each one should have an assigned owner and a target date for completion, and should produce artifacts to show that it has been finished and is operational. Of course, each recommendation should reduce the risk your organization faces – or what’s the point? The ideal situation shows improvement over time, such as this:

[Chart: Risks by Year]

A chart like this, showing the change in risks over time, is a powerful visual for internal audiences, like management, and external audiences, such as auditors. It demonstrates that you are aware of and tracking more risks, and getting more granular about them, over time. It also shows that your risk management efforts are having an effect, as High risks are managed down to Mediums, and Mediums to Lows. Producing such a chart requires consistent, sustained effort over time, and is a clear sign of a mature security risk assessment and management program.
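If the register is kept consistently from year to year, producing the data behind such a chart is a simple tally. The sketch below counts risks by year and band from a small, invented register extract; in practice the history would come from your own risk tracking tool, and the numbers here are purely illustrative.

```python
from collections import Counter

# Hypothetical (year, risk band) pairs pulled from past SRAs; in practice
# these would be read from your risk register, not typed in by hand.
history = [
    (2018, "High"), (2018, "High"), (2018, "Medium"), (2018, "Low"),
    (2019, "High"), (2019, "Medium"), (2019, "Medium"), (2019, "Low"), (2019, "Low"),
    (2020, "Medium"), (2020, "Medium"), (2020, "Low"), (2020, "Low"), (2020, "Low"),
]

# Tally how many risks fell into each band in each year.
counts = Counter(history)
years = sorted({year for year, _ in history})

print("Year   High  Medium  Low")
for year in years:
    print(f"{year}  {counts[(year, 'High')]:>5}  {counts[(year, 'Medium')]:>6}  {counts[(year, 'Low')]:>3}")
```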