Richard's post inspired me to rant about how I'm approaching the same problem. I've previously seen systems and processes so overly complicated and detailed that they simply were not used or acknowledged. I sat on this for a while, not liking what was out there, and began throwing ideas around based on the Fujita scale for tornadoes. While the F-scale itself has specific measures (wind speeds), it also has outcomes and descriptive phrases (an F6 tornado is "inconceivable"; how cool is that?). Currently I have it down to:
|0|Disruption|
|1|Loss of Information|
|2|Loss of Positive Control|
|3|Impact to Business|
|4|Black Swan|
I then matrix each of these levels against types of incidents: policy violation, malicious code, DoS, unauthorized access, other. Each cell gives a basic guideline (not a rule or requirement) for each level of distinction. In the case of malicious software, for instance:
|0|Disruption|Spyware, one-off infection, phishing attempt|
|2|Loss of Positive Control|Active Command & Control channel|
|4|Black Swan|A TJX-scale infection that I can't or won't dream up|
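The matrix above can be sketched as a sparse lookup table. This is just a sketch of the idea, not a real implementation: the level names and example guidelines come from the post, but the structure and the `describe` function are hypothetical.

```python
# Severity levels from the scale (names taken from the post).
LEVELS = {
    0: "Disruption",
    1: "Loss of Information",
    2: "Loss of Positive Control",
    3: "Impact to Business",
    4: "Black Swan",
}

# Guidelines are keyed by (incident type, level). The table is sparse
# on purpose: these are guidelines, not rules, so gaps are acceptable.
GUIDELINES = {
    ("malicious code", 0): "Spyware, one-off infection, phishing attempt",
    ("malicious code", 2): "Active Command & Control channel",
    ("malicious code", 4): "A TJX-scale infection",
}

def describe(incident_type: str, level: int) -> str:
    """Return 'Level N (Name): guideline' for one cell of the matrix."""
    guideline = GUIDELINES.get((incident_type, level), "no guideline yet")
    return f"Level {level} ({LEVELS[level]}): {guideline}"

print(describe("malicious code", 2))
# -> Level 2 (Loss of Positive Control): Active Command & Control channel
```

Keeping the cells as free-text guidance rather than hard criteria matches the intent: responders consult the matrix for a starting point, not a verdict.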
Each of these levels will eventually have its own expectations on response and rules of engagement. I don't foresee a Level 0 (Disruption) as necessarily needing to be court admissible, which rules out chain of custody, forensics, and so on, while a Level 3 (Impact to Business) may very well carry such expectations.
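One way to encode per-level response expectations is a table that only defines the levels where expectations change, with lower levels inherited upward. This is a hypothetical sketch; the only grounded facts are that Level 0 skips chain of custody and forensics while Level 3 may require them.

```python
# Hypothetical per-level rules of engagement. Only levels where the
# expectations change need an entry; other levels inherit from the
# highest defined level at or below them.
EXPECTATIONS = {
    0: {"chain_of_custody": False, "forensics": False},
    3: {"chain_of_custody": True, "forensics": True},
}

def expectations_for(level: int) -> dict:
    """Look up expectations, falling back to the nearest lower level."""
    defined = [lvl for lvl in EXPECTATIONS if lvl <= level]
    return EXPECTATIONS[max(defined)]

print(expectations_for(2))  # inherits the Level 0 expectations
```

The inheritance trick keeps the table small while the scheme matures: new distinctions get added only when a level genuinely needs different handling.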
I like the guidelines approach: as usage builds over time, I expect organic growth of the system to begin diving into specifics. Any initial scheme will go through maturation, and trying to create a complex framework up front would only slow adoption.