Friday, June 19, 2009

talking points

There are a lot of go-to sayings that are generic enough that folks will use them as ice-breakers. A way of getting you to nod your head while they build their premise. "Security is a process" is one of those.

Who started that? The presumption is to point out that security is not a product. "You are not your firewall." I get that, but I don't like the expression any more than any other watered-down talking point that politicians use in an election year.

I had at least one guy who privately objected to my 140-character assessment of this. I got the sense that such an expression is a truth, and that to rail against such a truth implies you should at least have a replacement. "If it's not a process then what is it?!" It's rhetoric that helps you set up a discussion. But hey, I'm game. Some replacements (off the top of my head):
  • Security is a characteristic
  • Security is a system of combined systems
  • Security is a vision
  • Security is indeed a product

What other talking points have the security folk been using for the last decade, pretending all the while that the pace of change has been minor?

Monday, June 1, 2009

Recognizing False Arguments

Change is hard. And improving security will create change. This creates resistance in various ways. One of my favorite types of resistance (i.e., the most frustrating) is the problem of induction. Let's create a scenario where you find a gaping vulnerability (say, incorrect firewall rules, a SQL injection vulnerability, architecture issues, whatever) and approach the owning organization about correcting it. Resistance comes in the form of debate. This debate is goofy and asks the wrong questions, or assumes certain risks they do not understand, such as:
  • "Is the risk that great? We've always had this and had zero issues!"
  • "I don't believe you. things are just dandy as they always have been!"
  • "We're the only ones who know, I accept the risk as it is low"
  • etc
How do you win this debate? These questions aren't necessarily the wrong questions to ask (I take back my above assertion); more precisely, their orientation is wrong. My newer tactics try to point this out. The classic "It's always been a risk and hasn't been a problem yet" is frustrating in its stubbornness. Logic may or may not help, depending on the individual. Let's pretend it's an honest and not an emotional or political debate (ha!). Applying observations (again, the problem of induction) to this problem is not acceptable for several reasons:
  • The duration of observation is not enough to come to a meaningful conclusion. Sure, the sun will rise tomorrow. We have about 4.5 billion years of observations to reference and a well-rounded idea of the rate of occurrence for outliers (the chance of a solar flare reaching out 1 AU is really low but not impossible). This system has been deployed for (let's pretend) two years; over those two years the threat of being exploited has continually risen due to the proliferation of easy tools, malware, and the rise of internet-based crime. Indeed, security as a practice is about controlling the outliers in a sustained fashion.
  • Your observation has blindspots. Chances are that their observations focus on functionality but not on security data. Recon, attempts, or full breaches may have gone unnoticed. This is a tough argument; I'd suggest having data to back you up.
  • Risk does not mean what you think it means. The risk may have a low chance of occurrence, yet if it has a high severity then it can still arguably be "high". This is exactly the point on which they should be persuaded.
"Persuaded" is a good word. This isn't a logic puzzle or a dissertation on how security works. Defeating such false arguments is a means to an end. At the end of the day the business needs to understand the risk versus the cost. And so do you.

What other false arguments exist and how do you battle them?

Tuesday, May 26, 2009

Security Incident Tracking

This is a draft post I just ran across. I'm publishing it "as is" in case it may be useful to someone; sorry for the fragmented post.

Over the last year and a half (arguably three years) I have been wrapping my head around the tracking and reporting of incidents. This is an account of some of the rules I have used up to this point. Some issues are easy, some are hard.

use a platform

KISS- stay away from complexity

treat it like an object:
enumerate attributes of incident
enumerate methods applied to incident
enumerate methods applied to a range of incidents
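As a sketch of the object framing above — the attribute names, severity scale, and methods here are illustrative assumptions, not a prescribed schema:

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional

@dataclass
class Incident:
    # Attributes of the incident (pick the ones that matter to you)
    opened: datetime
    severity: int                      # e.g. 0 (low) through 5 (critical)
    category: str                      # e.g. "malware", "sqli", "policy"
    scope: str                         # e.g. "single host", "subnet"
    notes: List[str] = field(default_factory=list)
    closed: Optional[datetime] = None

    # Methods applied to a single incident
    def escalate(self, new_severity: int) -> None:
        self.severity = max(self.severity, new_severity)

    def close(self, when: datetime) -> None:
        self.closed = when

# A method applied to a range of incidents
def opened_between(incidents, start, end):
    return [i for i in incidents if start <= i.opened < end]
```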

maintain confidentiality when possible; but don't sacrifice usability if the risk to confidentiality is low.

understanding severities
I loosely based my severity classification on the F-scale tornado classification. The F-scale describes the damage done as opposed to describing the actual event. I feel this is an important distinction: you can't realistically enumerate every incident/attack, and trying to do so is futile. So don't try. Instead, create attributes or characteristics that are important to you. Good examples of attributes: scope, activity, impact, loss. Based on the attributes, a qualitative decision is made as to the severity. There are also times when a certain severity is mandated, typically due to legal requirements. For example, a loss of PII or health records should be a threshold event that immediately triggers a higher severity level.
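A minimal sketch of how damage-based attributes might roll up into a severity, with a threshold override for PII or health-record loss. The 0-5 scale and attribute names are assumptions for illustration, not a standard:

```python
def classify_severity(scope: int, impact: int, loss: int,
                      pii_lost: bool = False,
                      health_records_lost: bool = False) -> int:
    """Rate the damage done (F-scale style), not the attack itself.
    Attribute scores 0-5 are hypothetical; tune them to your organization."""
    # Qualitative roll-up: the worst attribute drives the severity
    severity = max(scope, impact, loss)
    # Threshold events: legal requirements mandate a severity floor
    if pii_lost or health_records_lost:
        severity = max(severity, 4)
    return min(severity, 5)
```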

create rules of engagement based on severity levels.

if you don't know how many incidents were opened last week then you're not yet successful.

utilize new stuff. Like tagging.

Do not reinvent the wheel. Utilize NIST 800-61 when possible.

Do not reinvent the wheel. Use existing platforms.

Monday, March 23, 2009

SIEMs versus Incident Response

I attend several security conferences, webinars, and sales briefs a year. I am not an avid fan of SIEM technologies. To be clear: I am fairly unfamiliar with the various vendors, their value-adds, and their differentiators. What I know comes only from the various discussions I've had with them at said conferences, webinars, and sales briefs. I typically avoid these conversations; however, sales folk are known to be persistent. I turn this persistence to my advantage to see how the product may relate to my immediate needs. This is about the point where I find the nearest soapbox.
The value prop as I understand it: SIEMs let you quickly correlate and respond to an incident. Details be damned on how they achieve correlation; I want to know what happens once an incident is confirmed. Typically such a console event is treated as just that: an event. From there you may be able to fire off a Remedy ticket, or count how many events have been reviewed or escalated. Basic workflow stuff that may reduce the amount of monitoring hours. Maybe.

NIST 800-61 states: prepare -> detect -> contain -> eradicate -> recover -> lessons learned. This is pretty basic stuff. SIEMs appear to focus solely on the second step, yet their value prop is to let you respond to an incident (aka contain, eradicate, recover) more quickly. This is the disconnect for me. I have more detections than I can shake a stick at and I don't even own a SIEM. Funneling them through yet another console that, in theory, gives higher fidelity on the detection engine just isn't a value. What would be a value? What is the series of questions I ask every SIEM vendor who corners me?
  • Once you have a true positive alert, then what?
  • Can I apply my incident schema to it? I have specific severities, categories, and other attributes that must be reported on. Don't you dare give me generic classifications that mean near zero to my organization.
  • Can I report on response metrics? Response times? Incident handlers? Volume? Recurring hosts or possibly related prior incidents?
  • Can it give me some hardcore incident analysis? Feed it several relevant data feeds; auto-generated timeline capabilities would be very cool.
  • Can it track all efforts of containment, eradication and recovery? I need an authoritative post mortem for future reporting.
  • Can it track lessons learned and attach it per incident or collect aggregated lessons learned data?
  • Shit, can it do anything with aggregated data? If so, you're highly advanced already!
  • Can it handle different escalation paths dependent on the scope or severity?
Zero of these features require outside platforms or partnerships; it's simply a matter of adding more robust features than currently exist. I've only had brief conversations on this. Has anyone solved this? Is such an extension of SIEM an appropriate way to handle "response management"? Did I just invent a new term? Response Management: the ability to properly prepare for and respond to an incident in a measurable, managed, efficient, and sustained fashion. Routine operations such as drills, escalations, post mortems, lessons learned, and aggregated mitigation summaries should feed into tactical and strategic plans.
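One of the metrics asked for above, response time, is trivial to compute once containment timestamps are tracked per incident. The records and field names below are made up for illustration, not any real SIEM export:

```python
from datetime import datetime
from statistics import mean

# Hypothetical incident records; the fields are assumptions.
incidents = [
    {"opened": datetime(2009, 3, 1, 9, 0),
     "contained": datetime(2009, 3, 1, 11, 0), "severity": 2},
    {"opened": datetime(2009, 3, 2, 14, 0),
     "contained": datetime(2009, 3, 2, 22, 0), "severity": 4},
]

def mean_hours_to_contain(records):
    """Average time from detection to containment, in hours."""
    deltas = [(r["contained"] - r["opened"]).total_seconds() / 3600.0
              for r in records]
    return mean(deltas)
```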

Monday, March 16, 2009

Beyond operational security

An observation. The detection and response to incidents is regarded as completely operational. I generally introduce myself as "leading a security ops team" as that conveys the right responsibilities to anyone I may be talking to. What are those operations?
  • analyze alerts or escalations
  • create documentation trails (chain of custody, incident tracking, etc)
  • contain, eradicate, recover from any incidents
  • repeat
Occasionally we have time to lift our heads, learn from an incident, and close a hole large enough that it eliminates an entire class of threat or method of attack. This is always the goal, as nobody wants to play whack-a-mole; but it's not easy. Other responsibilities or projects always fill the voids quicker than they should. How do you meaningfully move past operations and into a tactical mindset? Some thoughts on this:
  • Parse through your incident tracking looking for trends (repeat offenders, categorization). Don't use this merely as material to feed your bosses; feed it back into the incident process and learn from it.
  • Don't just look at alert data. This is NSM all over again. Or is it? NSM starts at the alert and moves well past it into session and full-capture data. What if we complement NSM by also starting at the full-capture data and looking for items that should have alerted but didn't (false negatives)? I suspect NSM advocates would say this falls under NSM, but it doesn't truly seem to be practiced.
  • Part of the incident lifecycle is the lessons-learned phase. In my experience this isn't done on minor severities (e.g., the daily one-off infection). How do we lower the transaction costs of lessons learned so we can quickly capture them at an operational level?
  • Drills are important, you must make time for them.
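The first bullet above, mining incident tracking for trends, can start as simple counting. The hosts and categories below are invented for illustration:

```python
from collections import Counter

# Hypothetical closed-incident log; host names and categories are made up.
closed_incidents = [
    {"host": "ws-041", "category": "malware"},
    {"host": "ws-113", "category": "malware"},
    {"host": "ws-041", "category": "malware"},
    {"host": "db-02",  "category": "sqli"},
]

# Repeat offenders: hosts appearing in more than one incident
host_counts = Counter(i["host"] for i in closed_incidents)
repeat_offenders = sorted(h for h, n in host_counts.items() if n > 1)

# Categorization trends: which incident classes dominate
top_categories = Counter(i["category"] for i in closed_incidents).most_common()
```

Feeding a report like this back into the incident process (rather than up the chain) is the point: a repeat offender is a candidate for closing an entire class of threat.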

That's tactical, what about strategic?

We (as in any security response team) need to make it easier for outside teams to respond with us. This means automating toolsets that we can't run remotely, so they don't have to think about them. And it needs to be quick. If we rely on a support center then we must provide them tools to quickly do what we need them to do, not expect them to figure it out.
An outline of capability blind spots needs to be created. Any opportunity to fill such a blind spot should be made evident and taken advantage of. Too often these opportunities are missed because action isn't taken quickly enough.
Our capability is valuable; apply it to other needs when applicable to show that value. Think of log reviews, bandwidth troubleshooting, data preservation or identification. This should stay tactical or strategic and not become an operational duty.
Create the proper barriers to limit other responsibilities from eating away at NSM and response. The balance will always tip in favor of tangible results over operational monitoring and response.

Monday, March 9, 2009

shmoocon 2009 recap

Apparently people actually read my 2008 recap/rant. If you condone such activities then they will continue on.
I sat in fewer talks this year but walked the floor a bit more and met and hung out with folks instead. As always the event ran very smoothly. All the content was fresh and didn't include any rehashed topics from previous years; based on the number of talks this is a feat. Random thoughts:

  • The Building Botnets talk was interesting; their SNMP smurf attack was clever and refreshing.
  • I fell asleep during Beale's Middler talk. This was due to an overdose of sushi and isn't a reflection on his talk. The Middler appears very hip: it's a clever attack based on the weaknesses that wifi's shared medium introduces.
  • Matt Blaze's keynote was okay though I was hoping for more. Certainly the talk was interesting and engaging, I simply hold keynotes to a higher standard. Especially keynotes which prevent me from having dinner until 9 o'clock in the evening.
  • A group of us chatted up one of the colonels in dress uniform in Murphy's at 2AM. The group was fun and interesting; way more than the Amway gathering a few years back. And they can hold their liquor too.
  • The poor kid who passed out near the garbage on the sidewalk as the party was ending was kind of sad. Luckily friends loaded him into a cab. It was unfortunate his head ended up in the cabbie's lap, but I'm sure it'll make a great story if anyone tells him. I saw two others lose their stomachs. I was under the impression that geeks had high tolerances; Saturday night put this into question.
Sorry for the lack of links, visit shmoocon's site for more info. See everyone in 2010.

Monday, March 2, 2009

Red Team Journal

I have been accepted as a contributing editor for Red Team Journal. RTJ focuses on the practice of red teaming, and I will be contributing my knowledge from an information security perspective.

I have been reading RTJ for the last five months and have enjoyed their articles. I am looking forward to contributing as much as I can.

In the infosec realm, red teaming is generally thought of as a pentesting role. But infosec as a field cannot be narrowed down to a science; it must have critical and imaginative thought behind it. As an ops guy I had to rewire my brain to think of possibilities, implications, and reactions while maintaining precision and speed. I was applying Boyd's OODA loop without knowing it existed. I discovered the OODA loop in February of 2008, and it set me on a series of mental exercises that redirected my energy from a purely NSM mindset into something of a holistic response mindset that doesn't yet have a name. Merlin Mann's design-pattern talk gives me hope that I can remove tools and products from the equation and focus on the inherent practice of monitoring and response. Finally, I hope my mindset as a digital native can raise questions about older generational values versus data mobility, and how it changes ideals such as privacy (or its lack), open sharing, and information or news cycles. I hope working with RTJ will take me further down these rabbit holes, and that I can contribute interesting or new ideas to the community.
More to come.

Thursday, January 22, 2009


Last night I stopped by the liquor store to pick up a six-pack of Sierra Nevada ESB. As I walked up to the counter the store owner slid me a counterfeit $20. Without something to compare it to, it looked legitimate. After he gave me a true $20 I could note the difference in texture and the sloppy printing. Once I had a comparison point it was obvious it came from an inkjet printer.
The problem here isn't that technology has progressed far enough to cheaply produce a good imitation that a layperson can't spot. The problem is that no technology exists that can quickly and easily validate a bill's legitimacy. The reliance on sight and texture as validation just won't be able to keep pace with technology. This will only become more of an issue as the economy keeps tanking.
Visa and Mastercard must be thrilled.

Thursday, January 8, 2009

Shmoocon 2009 hype

Quick note: I'll be at Shmoocon this year; I've had my ticket for a few months. I'm sure I'll be tweet-commenting on every talk. The popularity of Twitter should be entertaining; they appear to have an official Twitter account, which I hope will be used well during the event.
There's no single talk that stands out this year. I'm glad that the typical talks are gone; the only repeat speaker I recognize from last year is Charlie Miller. This is a good thing; as much as I love Johnny or Kaminsky, the freshness will be nice.

Watch out for lots of geeks with tweets.

Tuesday, January 6, 2009

2009 Books

Some books I have on my bookshelf and will be focusing on in the first half of 2009.

On War by Carl Von Clausewitz
I've read the first two books; however, I'd still like to complete the entire thing. The Chicago Boyz blog is hosting a roundtable covering the book through January and February. I hope to lurk and keep up with the discussions.

Strategies for Creative Problem Solving by Fogler & LeBlanc
I saw this book referenced by Red Team Journal and managed to receive it as a gift for Christmas. I'm nearly halfway through it and am taking notes for a future discussion. I'm definitely already finding value in some of the techniques discussed.

The Ten Most Beautiful Experiments by George Johnson
Just sounds like it'll be a fun read

Fabric of the Cosmos by Brian Greene

200 pages into this. My mind is already blown.

Illicit by Moises Naim

On Leadership by HBS
I received this paperback as part of a course in late 2007 as recommended reading. It's high time I read the last half.

Tipping Point by Malcolm Gladwell
I generally enjoyed Blink and expect this to be another light and interesting read.

Some items that have been on my antilibrary for too long and I'd like to flip through

Beautiful Evidence by Edward Tufte

Guns, Germs, and Steel by Jared Diamond

Albert Einstein: A Biography by Fölsing

General Patton: A Soldier's Life by Stanley Hirshson

Black Hawk Down by Mark Bowden

Others in the hopper:

The Unthinkable: Who Survives When Disaster Strikes - and Why by Amanda Ripley

Black Swan by Nassim Nicholas Taleb

Dialog Mapping by Jeff Conklin

Reinventing Collapse by Dmitry Orlov

The Rise and Decline of the State by Martin van Creveld

The Starfish and the Spider by Brafman and Beckstrom

The Exploit by Galloway and Thacker

John Boyd Roundtable by Mark Safranski