On Thursday, I wrote a bit about defining BES Cyber Assets, in response to an older discussion on the WICF Forum that remains unsettled today.
It's funny how often the definition of BES Cyber Asset has to be discussed. Tom touches on it again here, in talking about the difficulty an auditor faces in determining whether something is a BCA based on legal semantics. The BES Cyber Asset definition is:
"A Cyber Asset that if rendered unavailable, degraded, or misused would, within 15 minutes of its required operation, misoperation, or non-operation, adversely impact one or more Facilities, systems, or equipment, which, if destroyed, degraded, or otherwise rendered unavailable when needed, would affect the reliable operation of the Bulk Electric System. Redundancy of affected Facilities, systems, and equipment shall not be considered when determining adverse impact. Each BES Cyber Asset is included in one or more BES Cyber Systems."
In the NERC CIP Standards, definitions often run contrary to plain language or commonly understood terminology, and all too often rely on definition by exclusion, even when it tortures reading comprehension. In the definition above, the intent is to close a loophole one might exploit to escape compliance scope. To the extent that an identically configured redundant system may be just as vulnerable to a flaw or cyber attack as its primary, that's a valid point. But there are many scenarios where redundancy alone makes a functional impact a very remote possibility.
The same article from Tom crosses over to touch upon the Streetlight Effect I wrote about here and here, and is particularly apt with the tagline I ended the second article with: "When your auditor insists upon a simple and clean way to measure compliance at the device level, they may be doing us all a disservice."
It boils down to a common management problem. I can't manage what I can't measure: to determine whether a process is effective, I must determine whether it changes an outcome and whether that change is positive or negative. So deciding what to measure, and how it relates to a specific process, is necessary, but it is not always easy. Some measurements are meaningless. They either don't relate to the process, or they relate to the wrong (or misunderstood) process and therefore indicate results unrelated to the change I'm trying to make. If I don't choose the right test points and interpret the results correctly, I gain nothing but a thin cover story for why it's not my fault when things go drastically wrong. We have lots of people measuring lots of things and producing tons of documents to prove that they're testing them.
But are we looking at the right things or just the easy ones?