Tuesday, November 30, 2010

secops superiority

$brainstorm = 1; // I reserve the right to contradict myself repeatedly and leave gaps.  you've been warned.


I want to combine my last posts, Cause and Effect and Clausewitz and DiD.  Indeed, it was originally one post that I chopped up in order to give each topic some focus.  My Cause and Effect post attempted to make the connection that offense and defense operations are simply two forms of security operations.  I also want to clarify (actually, tweak a bit) my stance on Friction and DiD.  Let's do that first:


friction : security operations :: defense in depth : security architecture

Defense in depth is a method of designing IT architecture in order to prevent bad things from happening.  I'm cool with that (though I believe it's generally practiced fairly poorly and is unrealistic).

The idea of friction applies more to security operations.  Architecture can arguably be a subset of security operations at any given moment in time.  This means friction applies at the technology level, but its design is to work to the advantage of the security operations team and how they detect and respond.

I really wanted to coin a new term for the idea of applying friction and other novel concepts to security.  Ultimately, I believe all of that falls under the security operations umbrella.

I believe people wrongly categorize operations as a component of a DiD strategy.  DiD is an architecture that ultimately outputs some degree of security.  Operations must then figure out how to lower friction for the defense and increase it for various opponent classes.  In more concise terms, friction can be seen as humans dealing with the architecture they have been dealt.  The two may overlap at a particular level, but DiD is the architecture, while operations is dealing with the output of that architecture.  Applying the idea of friction, or operational superiority if you will, is an important mental construct.

Friday, October 15, 2010

Cause and Effect

A bit of a philosophical question to the reader:  What is the relationship of security defenders to offense/criminals?  I submit the currently accepted belief sees us (the defenders) as a reaction to offensive tactics.  More precisely, security folk tend to observe "defense" as an effect of its cause, namely "attack" or possibly "vulnerability".  This rationality manifests itself through reliance on compliance programs or classical risk analysis (R = T×V×A, ALE, etc.).
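
For the uninitiated, that classical calculus looks something like the following; a minimal sketch in Python with entirely invented numbers:

# Classical Annualized Loss Expectancy (ALE); every number here is invented.
asset_value = 100_000                  # value of the asset, in dollars
exposure_factor = 0.30                 # fraction of the asset lost per incident
sle = asset_value * exposure_factor    # Single Loss Expectancy: $30,000
aro = 0.5                              # Annualized Rate of Occurrence: once every two years
ale = sle * aro                        # Annualized Loss Expectancy: $15,000
print(f"SLE=${sle:,.0f}  ALE=${ale:,.0f}")

Notice that this entire calculus treats the attacker as a static input to a formula rather than an operation resembling our own.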

I think we can do better, but first we need to rationalize security in a different light: resemblance.  We need to compare ourselves to our various adversaries and recognize we run similar operations.  Defense holds significant characteristics and qualities of "Offense".  This includes attributes such as
time, motive, ability, techniques, tools, tactics, procedures, operations security (and deception!), collaboration with peers, reputation, money, clarity of mission, infrastructure, law, enemies, competition, allies, customs/culture, politics, visibility, knowledge, skill, strategy, team size, team cohesion, maturity, experience, rapidity, incentive/reward, friction, customers, sellers, brokers, trust, primary loyalty, and more (in no particular order and not exhaustive).
I do think it's healthy to consider defense and offense as one and the same instead of polar opposites.  If we can compare a defender's attributes against those of a particular class of adversary, we could potentially derive a consistent method of finding strategic weaknesses to learn from.

So, briefly applying this idea: our five largest and most consistent gaps (as defenders) compared to attackers are skills, collaboration, money, clarity of mission, and infrastructure.  The offense is better than us.  Indeed, not only do they collaborate as a community, but they also freely lift novel techniques from legitimate security researchers.  They are not a cost center but actually turn a profit, which happily leads to a very clear mission.  All of this in combination also allows for clever and resilient infrastructures.  One fortunate item: the broad offensive community is a black market, and a black market's focal point is trust.  Folks with authority should focus on disrupting the trust reputations between members (the linkages between sellers, brokers, customers, etc.).
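
To make "disrupt the trust linkages" slightly more concrete, here is a toy sketch in Python; every actor below is invented, and the only point is that the market can be modeled as a graph whose best disruption target is the member holding the most trust relationships:

from collections import defaultdict

# Toy trust network; all actors are invented.  Edges represent established
# trust between members (completed deals, vouching, escrow).
trust_links = [
    ("seller_A", "broker_X"), ("seller_B", "broker_X"),
    ("broker_X", "customer_1"), ("broker_X", "customer_2"),
    ("seller_B", "customer_3"), ("broker_Y", "customer_3"),
]

degree = defaultdict(int)
for a, b in trust_links:
    degree[a] += 1
    degree[b] += 1

# The member whose removal severs the most trust relationships is the
# obvious disruption target -- here, the broker everyone routes through.
target = max(degree, key=degree.get)
print(target, "holds", degree[target], "trust links")   # broker_X holds 4

Real analysis would obviously want better centrality measures and real market data; the model, not the math, is the idea.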

rubbish? redundant? useful?

Wednesday, October 13, 2010

Clausewitz and Defense in Depth

I want to introduce and examine Clausewitzian ideas of friction.

In an attempt to explain why the seemingly simple concepts of warfare are actually quite complex, Clausewitz (in 1832, in a book titled "On War") suggested a mechanism called 'friction' to help distinguish 'war on paper' from 'real war'.  This idea of friction attempts to account for external factors such as chance, weather, individual will, and opponent strength, and how such variables will swiftly throw any plan out the window.  In my words: complexities in the battlefield must never be assumed to be accounted for.  When I speak of external factors, it's important to point out that your 'external factors' may overlap with the offense's 'internal factors', and vice versa.

COL John Boyd recognized that Clausewitz did not take this idea far enough.  The commander has the ability to increase this friction for the enemy as well as reduce his own.  A great Boyd quote:
"The essence of winning and losing is in learning how to shape or influence events so that we not only magnify our spirit and strength but also influence potential adversaries as well as the uncommitted so that they are drawn toward our philosophy and are empathetic towards our success." (source)
With that as a backdrop, let's talk about "Defense in Depth".  The current practice of vulnerability management is arguably thought of as a major component of "Defense in Depth".  I've been known to rant at great length about its effectiveness (or lack thereof).  This idea of friction points out its weakness as a conventionally relied-upon tactic.  Vulnerability management focuses on removing one's known weaknesses before they can cause harm.  Other conventional components of "defense in depth" include blocking, filtering, proxies, antivirus, authentication, access controls, etc.  I suggest this defense in depth methodology, either explicitly or implicitly, believes in creating a moderate-to-high deterrence environment.  That's where things stop.

"Defense in depth" has become a standard argument for security architecture costs/complexity analysis at the cost of not applying the concepts of Clausewitzian friction. But this is help stagging where the 'battle' will be held.  This is you, as commander, preparing for invasion by increasing friction to the enemy through closing doors, windows, and up-righting walls and turrets (Incidentally it's also adding a degree of friction to you: all this work takes valuable time and effort). And that's where you stop. But we can't stop there: we need to additionally throw barriers, traps and make the 'terrain' as difficult an environment as possible for the opponent through the use of deception, feigns, warning signals, intelligence, etc.

I believe a great and fundamental question that needs to be raised is: Does your implementation of "Defense in Depth" increase friction for the offense while decreasing friction for the defense?

Also, you should follow the #mircon hashtag this week.  Lots of good tweets on interesting subjects which inspired me to finish this post.

Note:  The bad guys have learned these lessons already.  Indeed, their infrastructures are far more resilient and clever than the ones they are attacking.

One more implication: using standard best practices can harm you.  The procedures of (let's say) patching or enforcing complex passwords create a certain degree of understanding between the defense and offense (aka lowering the friction for both sides) about the tactics and procedures your organization will be adhering to.  (Don't even get me started on antivirus.)

Anyone use this approach?

Tuesday, October 12, 2010

Utilizing the casebook method


I'm wrapping up Allen Dulles' book "The Craft of Intelligence".  The book focuses on the historical context of intelligence agencies; however, Dulles briefly touches on two methods used in training case officers which resonated with me.

First, he referenced the casebook method, used heavily in law school.  This method analyzes previous court arguments and rulings to generate dialog, act out the proceedings, and properly identify and understand them.  Presumably the CIA trainee is given both the evidence known at the time and what actually transpired, along with how the operator responded.  The trainee then analyzes the data to determine whether the operator missed a critical piece of data or otherwise made the best decision.  Hindsight being 20/20 can be a valuable training tool.  Second, he quickly summarized live exercises: throwing the trainee into realistic simulations.  These can be run from various perspectives to learn the underlying motives, responses, and behavior of each side.

Does your team actively leverage these concepts?  

Use the casebook methodology with junior-level staff to review previous incident case data and find weak areas of response, wrong and/or right assumptions, and, primarily, to discover how the senior-level analyst proceeded through the investigation.  You do keep historical incident datasets and assessments, right?  If not, consider using the Honeynet challenges as 'casebooks'.
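
As a minimal sketch, assuming your incidents are stored as structured data (the field names are mine, not any standard), a replayable 'casebook' record might look like:

from dataclasses import dataclass

@dataclass
class CaseRecord:
    """One historical incident, packaged for casebook-style review."""
    case_id: str
    evidence_timeline: list   # what was known, in the order it was learned
    analyst_actions: list     # what the senior analyst actually did, step by step
    outcome: str              # what actually transpired

def replay(case: CaseRecord) -> None:
    """Walk a trainee through the evidence one item at a time, forcing a
    decision before revealing what the original analyst chose to do."""
    for i, evidence in enumerate(case.evidence_timeline):
        print(f"[{case.case_id}] evidence {i}: {evidence}")
        input("your next move? > ")              # trainee commits first
        if i < len(case.analyst_actions):
            print("  the analyst did:", case.analyst_actions[i])
    print("outcome:", case.outcome)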

Also, all staff should frequently sit in on various exercises, including tabletops and live drills.  I classify these drills into two categories: training drills and preparedness drills.  Training drills are the best way to experience the emotion, uncertainty, and quick-mindedness needed, outside of an actual incident.  Smaller exercises can focus on preparedness (e.g., does the entire team have contact information for escalation points at the ready?  Are their toolsets ready for rapid deployment?).

I see the casebook method and live-fire exercises as superb tools for escalating team members' capabilities and discipline.  This is different from knowledge transfer, which is what most infosec courses and certifications stress.  It's not a replacement but a complement to such courses.  What sort of training methods have worked for you?

Friday, June 25, 2010

Your orientation is showing.

This is an incomplete thought experiment.
First, some premise stuff:
John Boyd posited that one's orientation is the significant factor when making decisions. Orientation combined with observation creates a feedback decision loop (the OODA loop). If you can act unexpectedly, faster, or more correctly than an adversary, you can 'operate inside the adversary's OODA loop'. Once you begin down that path, it will disorient the adversary, making them respond more slowly or incorrectly as the conflict changes. Continue on and it gets to a point where the adversary simply can't make decisions based on accurate observations. End game.

This is all neat stuff, but it's beyond what I want to talk about. I want to talk about orientation. Orientation (per Boyd) includes an individual's experiences, cultural heritage and traditions, and the synthesis of existing and new information. This set of attributes has a large bearing on which decisions are made. I want to talk about the defender's orientation. Incident handlers are trained to understand and use the response lifecycle:
prepare, detect, contain, eradicate, recover, lessons learned.
There are a few minor variations of this, but it's in all the coursework. My experience suggests that this lifecycle is the framework that nearly all security industry products, incident response plans, incident handlers, and organizations use. In short, it's part of our training and traditional thought process, and it has an influence on our orientation.

Every phase of the traditional response lifecycle implies an inward focus. If one is so internally focused, then one is not observing new information and unfolding circumstances. This seemed reasonable in 1998, when defensive tactics were aimed at self-propagating malware DoSing servers left and right. That is no longer the world we live in; now it's theft, extortion, and espionage. We must additionally be outward facing and better engage the adversary. Note the traditional lifecycle's lack of questions surrounding who initiated a compromise or why. These are not philosophical (or law enforcement) questions, but focus on the very real concern of uncovering "who was the operator behind this? what was she seeking? did she achieve her goals? is this part of a larger sequence of events? what can I do to break her OODA loop?".

Let's tinker with a different incident response approach.

Trigger -> Operational Response -> Tactical Response -> Prepare

Trigger. Observe, Detect, etc.
Consider the trigger as all observations. It begins at the traditional 'detect' phase of the doctrinal lifecycle. This is stuff like IDS events, AV alarms, SIEM alerts, and end-user or third-party notifications. It's also searching for unknown unknowns through both automated and manual system or network profiling. This surfaces another set of data, including network traffic, filenames, hashes, domains, and IP addresses.
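
A minimal sketch of that idea in Python (the field names and sample events are illustrative, not any product's schema): every observation, from SIEM alert to analyst hunch, is reduced to one shape before response begins.

from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Trigger:
    source: str      # "ids", "av", "siem", "user_report", "hunting", ...
    observable: str  # hash, domain, IP, filename, traffic summary, ...
    context: str     # free-form: what made this worth a second look

def observe(source: str, observable: str, context: str = "") -> Trigger:
    t = Trigger(source, observable, context)
    stamp = datetime.now(timezone.utc).isoformat()
    print(f"{stamp} trigger via {t.source}: {t.observable} ({t.context})")
    return t

# A SIEM alert and a manual hunting find enter response the same way:
observe("siem", "10.0.3.7 -> 203.0.113.5:443", "beacon-like periodicity")
observe("hunting", "svch0st.exe", "unknown unknown from host profiling")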

Operational Response. Contain -> Eradicate -> Recover -> Lessons Learned

After the trigger is initiated we move into operational response. These are the traditional response mechanisms: removing the active threat and recovering to full business operations. See NIST SP 800-61, or the CISSP coursework, or any security 101 book. Clean up your systems, get them back to operational status, and figure out if there's a way to prevent it from occurring again.

Tactical Response. Gather and Characterize Indicators -> Integrate and Analyze -> Confuse & Disorient

Next we move to tactical response. This is using threat intelligence: identifying new indicators of the compromise and rating, classifying, and assessing those indicators. Integrate and Analyze. We begin to integrate that dataset into our standard detection and protection controls. At the same time, an analysis is performed against other compromise datasets to determine any similarities and odd dissimilarities, and to draw conclusions for future use. For example, recon activity X is linked to Exploit Y, which was also seen in incident case numbers 43, 71, and 72. A more aggressive example: monitor the adversary's suspected infrastructure for changes; a new or expired IP address or domain can be interesting and suggest upcoming changes (see the sketch below). And finally, Confuse & Disorient. Feign weakness, show unexpected activity, make the adversary know that you know she knows you know what is going on, use misinformation. Maybe you should click any malicious link, lots of times from lots of locations. Do several layers of active DNS lookups; possibly query the webserver as to its version. Turn the target into a honeypot. Contact the owners of external systems affected. Plant passwords, files, fake contacts, PGP keys.
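
Here's a minimal sketch of that infrastructure-monitoring example in Python; the watched domain is a placeholder, not a real indicator. Resolve the suspected domain on a schedule and flag any change in the answer set:

import socket
import time

WATCHED = "suspected-c2.example.com"   # placeholder, not a real indicator

def resolutions(domain):
    """Return the current set of IPs the domain resolves to."""
    try:
        return {info[4][0] for info in socket.getaddrinfo(domain, None)}
    except socket.gaierror:
        return set()                   # NXDOMAIN is itself interesting

previous = resolutions(WATCHED)
while True:
    time.sleep(3600)                   # hourly; tune to taste
    current = resolutions(WATCHED)
    if current != previous:            # new or expired IPs suggest the
        print(WATCHED, previous, "->", current)   # adversary is repositioning
        previous = current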

This is what's been rattling around in my head for the last week. Ultimately, we need to analyze not just our own weaknesses, but our adversaries' weaknesses. We need to fix our orientation to allow better decisions. We need to stop crippling our capabilities.

Thursday, June 24, 2010

Charmsec

I passed up an opportunity to do a "lightning talk" at FIRST 2010 this year. I came up with the idea of talking about citysec, charmsec, and how things have progressed, but I backed out since I didn't have time to put my thoughts together into any sort of coherent talk. This post is instead a preemptive strike, so that if I do have another opportunity it will prevent me from backing out.

CitySec meetups. A simple concept: regular meetups at a bar by security geeks to talk about whatever they'd like. I'm not sure who came up with the idea or where it started. I know that Boston, Chicago, San Francisco, and NYC have all been doing citysec meetups for several years now. There was a website and forum set up several years ago that appears to be completely stagnant.

In 2005ish @reyjar started Charmsec. After two or three months it faltered; I never attended. In 2008 a friend and I agreed we should revive the Baltimore meetup. We announced our first meetup on the DC security geeks mailing list. Charmsec 4 had three attendees, including my friend and me. Charmsec 5 had three attendees. This continued for some time. We changed bars, and we had maybe 6-8 attendees. We changed bars again, time passed. We're now averaging two dozen folks attending each month.

If you live near a city look for a citysec. If none exists, think about setting one up. Here's some lessons we've learned:

  • Think of citysec as using the Open Source model. That means a few things:
    • Low level of entry. It should be easy. Don't have RSVPs, don't have membership, don't have a steering committee. Be informal and ask people to show up.
      It should be evident what citysec means to you. Charmsec is just charmsec, modeled on what we thought it should accomplish and look like. In other words: this is Baltimore's citysec. There may be many like it, but this one is ours.
    • Provide and recognize the value. Charmsec provides value by offering a chance to get out of the house/office, drink beer, and network with like-minded individuals. It's not about vendor presentations, job hunting, or gaining CPE points.
  • Twitter is a multiplier. The level of participation you can gain by announcing and leveraging Twitter royally trounces any mailing lists, forums, or websites, and it generates more word of mouth.
  • Location, Location, Location. Be central and fairly easy to get to. The bar should not be loud. It should have a decent beer and food menu. It should have table service. It should take reservations. Bonus points if it takes reservations via tweet like @slaintepub.
  • Consistency. Use the same location, and pick a day of the month and stick to it. Don't be afraid to experiment to find a better venue or time, but such changes should be rare and come with lots of reminders.
  • Expect low turnout for the first several. Charmsec didn't get any consistent level of participation past 5 attendees until at least charmsec 10.
  • Expect to lead and direct the conversation until the meetup finds its legs. Ask questions, introduce folks, and play host until you're no longer needed. Then stop.
  • Don't be harsh to vendors, hackers, govies, risk managers, auditors, college kids, etc. Avoid geek wars.
Grant contributed to this post. This is appropriate since he built charmsec into what it is today while I was busy making a family. Did I mention that Charmsec 26 is tonight, Thursday the 24th at 7:00PM?

Friday, March 12, 2010

Bazaar vs Cathedral

Damballa recently released a report entitled The Command Structure of the Aurora Botnet.  It's a good whitepaper.  I like this section:


... Botnet operators also increasingly trade or sell segments of the botnets they build. Once sold, the owner of the botnet typically deploys a new suite of malware onto compromised systems. The CnC provides the link between various campaigns run by the botnet operators and the multiple malware iterations. Since Damballa focuses on malicious, remote-controlled crimeware that depends on CnC to function, we were able to determine the evolution and sophistication of the Aurora botnet and its operators with greater detail and accuracy than other reports to-date. In general, Aurora is "just another botnet" and typifies the advanced nature of the threat and the criminal ecosystem that supports it. It is important to note, however, that botnets linked to the criminal operators behind Aurora may have been sold or traded to other botnet operators, either in sections or on an individual victim basis. This kind of transaction is increasingly popular.


This isn't really new; it's been known that both kits and botnets are sold and rented on the black market.  Admittedly, it is pretty dastardly to have a potential adversary utilizing this market, further obfuscating themselves and their goal.

This is one of the realizations of John Robb's Open Source Warfare.  In his book, Robb made a wonderful extension of esr's The Cathedral and The Bazaar into warfare and terrorism:


According to the perspective of the organized military, the problem with a bazaar is that it lacks a center of gravity -- a centralized command center that can be destroyed or a single set of motivations that can be undermined through psychological or political operations.  It is virtually immune to these approaches. [...]

Finally, OSW networks are extremely innovative.  The bazaar atmosphere makes it easy for innovations to develop and percolate among the members.  They don't need a single operational genius, just a large number of average members working together.


The disturbing undertone of the Damballa report is not the "old-school" nature of the botnet, or the seeming reliance on the black market, but the rapidity of advancement through sharing and innovation.  Targeted threats are prospering and growing in the chaos of the Bazaar, certainly when compared to the order and structure of the Cathedral.  The Cathedral, in this case, is us, the CND operators.  We innovate, but we are impaired by constraints that limit the speed of innovation, and we instinctually hoard, instead of sharing, vital information.

This threat is inside our OODA loop.

Monday, March 1, 2010

CIA Triad

Let’s start with a list:


  1. “Our new company policy must protect Confidentiality, Integrity, and Availability”
  2. “The goal of information security is the protection of the CIA Triad”
  3. “Before we design this architecture, we need to assess the Risk of Availability, Integrity and Confidentiality”


Where did the concepts of the CIA trinity come from?  So far I’ve pinpointed Confidentiality as being addressed by Bell and LaPadula in 1976 in their mandatory access control model for Honeywell Multics.  This, as you may have guessed, was to address the problem of disclosure of classified data on information systems.
Next, I found Clark and Wilson’s work on Integrity in 1987, recognizing that the commercial sector’s primary focus was on the integrity of the data on their information systems (think: accounting data).
Both of these were derived under “multilevel security” (think: Orange Book, 1983) as operating system design principles.  And the third leg that creates the triumvirate?  Availability.  I simply couldn’t find anything I could use as an authoritative source.  If I were to guess, the Morris Worm may have had an influence on Availability reaching the status it has. (Am I wrong?)

So when did we accept the wisdom that CIA is the core of information security?  When did CIA become potential Risk?  When did we make the conscious decision to apply system design principles to complex systems of systems, policy, and more?  CIA is good as an anchor while architecting a system.

I’m hesitant to say CIA is good in wider contexts.  Indeed, I cringe when it’s used outside of system design principles.  It’s an oversimplification that risks creating blind spots in thought.  For instance, CIA does not address misuse of the system, especially when that misuse does not have a functional impact.  If a system has a loss of positive control (say, it’s part of a botnet) and begins sending spam at a rate of 10 messages/minute, does that impact CIA?  See Tragedy of the Commons.

I’m also not convinced CIA can truly represent secure systems of systems (networks) in any meaningful (indeed, measurable) manner, due to the asymmetric conditions.  Even ignoring the high complexity, the pace of change in networks is too rapid to create a secure state that can be enforced.  The simple addition of one device could completely unbalance whatever CIA was perceived to be in place.