Security Basics mailing list archives

RE: Possibly a different methodology for network testing
From: "Omar A. Herrera" <omar.herrera () oissg org>
Date: Sat, 22 Jul 2006 15:03:36 +0100

Hi Steve,

My comments are below (warning: long answers because it is an interesting
topic :-) ).

-----Original Message-----
From: Steve Armstrong
Sent: Saturday, July 22, 2006 12:55 AM

I have thrown together some bits on how I believe a Vulnerability test
should be undertaken, ensuring that the risks are assessed based upon
the network configuration, data movement profile and basic design of why
the network exists at all.

I still believe it is different to the OSSTMM, OWASP and NSA based
methodologies, and if I get a confirmation from these lists that my
thinking is correct, I will develop this further, with diagrams, flow
charts and templates.


I looked at your proposed methodology and liked it. I also have a few
suggestions:
* Include decision rules.
You have set an orderly sequence of tasks to reach the goals of penetration
testing, but some definitions of the word 'methodology' mention that it
consists not only of tasks, but also of decision rules. Although this doesn't
seem to be compulsory, I think it would be useful to at least state what
someone should do if a task cannot be performed (e.g. if you can't get the
interviews, if you don't get any information from the client, as in a
zero-knowledge style pentest, or if you can't be sure that you have
identified all security barriers). Is the information gathered at each stage
required to proceed to the next stage and perform a successful pentest?
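To make the idea concrete, decision rules per stage could be encoded explicitly. This is a purely illustrative sketch; the stage names, fields and fallback strings are invented for the example and are not part of any published methodology:

```python
# Hypothetical sketch: a methodology stage carries an explicit decision
# rule for what to do when its information cannot be obtained.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Stage:
    name: str
    required: bool                  # is this stage's output needed to proceed?
    fallback: Optional[str] = None  # alternative course of action on failure

def next_action(stage: Stage, succeeded: bool) -> str:
    """Decision rule: continue, apply the stage's fallback, or abort."""
    if succeeded:
        return "proceed"
    if stage.fallback is not None:
        return stage.fallback       # e.g. fall back to zero-knowledge techniques
    return "abort" if stage.required else "proceed"

# Example: interviews are useful but not mandatory; if they fail,
# continue as a zero-knowledge engagement.
interviews = Stage("client interviews", required=False,
                   fallback="treat as zero-knowledge: rely on public sources")
```

The point is only that "what happens when this step fails" becomes part of the methodology itself, rather than an ad-hoc judgment call.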

* Use Information instead of data.
Data is a form of representation for raw information, but I believe that
isn't the best way to define your goal in a pentest; use information instead.
Information provides you with knowledge, whereas data provides you only with a
representation of information (which is meaningless without context). The
context that I suggest you focus on (to avoid diving into non-essential
things) is the business process. I.e. you don't care directly if a system
has bugs which lead it to crash; you do care indirectly if this affects
information stored or processed, as required and defined by the business
process.

Also, to get information you don't necessarily need data stored within a
system (i.e. the system you want to test). For example, to see if a system is
alive you don't necessarily look into the system and get that information
from data stored there. Instead, an attacker can perform an action and,
depending on the results, determine if the system is up or down with a
certain probability of being correct (i.e. the attacker creates new data and
derives useful information from it, if you want to see it that way).
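A minimal sketch of that idea: the attacker generates an action (here, a TCP connection attempt) and derives information (host liveness) from the outcome, without reading any data stored on the target. The function and its return values are my own illustration, not a reference to any particular tool:

```python
# Liveness inference from an action's outcome, not from stored data.
import socket

def tcp_probe(host: str, port: int, timeout: float = 2.0) -> str:
    """Return what a connection attempt reveals about the target."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return "open"               # host up, service listening
    except ConnectionRefusedError:
        return "closed"                 # host up (it answered with a RST), port closed
    except OSError:
        return "filtered-or-down"       # no answer: filtered, or host down
```

Note that even the "closed" result is information: the RST proves the host is alive, with a probability limited only by things like middleboxes spoofing responses.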

This means that an attacker can generate information from actions that do
not necessarily generate data in the system being tested (strictly speaking,
this would be data stored in the environment and not in the systems).
Another example: an attacker discovers a pattern where, each time a
certain internal system communicates with a certain external system, someone
leaves the office for the factory carrying the payroll money in a bag (with
a certain probability). It doesn't matter if the data sent in that
communication is encrypted with a robust algorithm (the message probably
says just something like "money is on its way to the factory"), what leaks
information is the pattern identified from observing the environment. Many
pentesters gather this information intuitively during a so called
"information gathering stage", but many methodologies tend to imply that
this information is always stored somewhere in the form of data.

In addition, some define information as 'processed data'. If you use that
definition you are also covering the functionality of the systems that
process business information (yes, many pentesters forget about that fact
and don't analyse the implications of modifying the functionality of the
systems). You already include processes in your model but don't specify
clearly the relationship between information, processes and technological
components.
You can always twist your definition so that you use 'data', by considering
environmental data and assuming things like all applications that process
business information have their (legal) state sequences and functionality
described in a data file stored somewhere. But as you know, that detailed
data file usually doesn't exist :-). It is much easier to work with
information since businesses usually know what information they should put
as input and what output they should get, and based on that, they identify
problems (they rarely verify to detail that the system works as expected
based on a functional analysis, many times not even the developer does).

* Include system inputs, outputs, communication channels and dependencies.
Information is stored, created, destroyed and processed in systems, so all
the possible ways to get into the system are potential attack vectors; they
are vulnerable once you confirm that there is a reasonably easy way through
these points to illegally modify information (and the most important
vulnerabilities are those affecting business information). The other places
where you can affect business information are communication channels (when
information is transmitted) and anywhere else where this information is
stored: other systems and any human beings that also process information
(here is where social engineering is important). Note that by communication
channels we mean channels of every type, not only network. Transmission through
printed or visual (e.g. monitor) media is also important, and again, many
pentesters forget this fact.

All systems receive inputs and produce outputs of some sort, and as
suggested above, you might be interested in inputs and outputs in the form
of business information (i.e. you don't care about things like mouse
movements, keyboard inputs and video output, you care about information
related to the business process that gets in and out from the system,
independently of the I/O device). In this way it is easier to identify
attack vectors. For example, unless you get a scenario like the one
described above where business information is stored in the form of audio,
and a recognizable pattern can be identified, you probably won't care about
testing I/O audio devices unless there is a dependency.

Dependencies are also critical, not only between the components of a system
but also between systems that lay in the path of business information
processing. E.g. you identify that the only possible input/output into/from
a system is through a pair of TCP ports (assume everything else in the
network is filtered and all other physical ways to access this information
have been checked already). You test the security of the system on those
potential attack vectors and find they are ok, but then you review the
dependencies and find that the input for this system is the output from
another, insecure system: Game over. It is a common error as well to test
systems independently of each other without looking at their relationships
(in respect to business information flow).
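The dependency argument can be sketched as a simple taint check over the graph of which systems feed which: a system is only as trustworthy as its upstream inputs. All of the system names below are made up for illustration:

```python
# Hypothetical sketch: a "secure" system fed by an insecure upstream
# system is still compromised with respect to business information flow.

def tainted(system, feeds, insecure, seen=None):
    """True if `system`, or anything whose output it consumes
    (directly or transitively), is in the `insecure` set."""
    seen = seen if seen is not None else set()
    if system in insecure:
        return True
    seen.add(system)
    return any(tainted(upstream, feeds, insecure, seen)
               for upstream in feeds.get(system, [])
               if upstream not in seen)

# payroll-app consumes output of hr-db, which consumes legacy-ftp
feeds = {"payroll-app": ["hr-db"], "hr-db": ["legacy-ftp"]}
```

Testing payroll-app on its two open TCP ports in isolation would report it secure; following the dependency edges flags it anyway once legacy-ftp is known to be insecure.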

* Extend the scope of your model.
As it is now, it seems to me that your methodology is limited to a
particular type of pentesting (well, you kind of said that, it is for a
particular need). But a good model can be general and still apply to
different needs. If you design your methodology on information instead of
data and consider inputs, outputs, comm. channels and dependencies you get a
general model that can be applied to network pentesting, application
pentesting, hardware pentesting, and it can be local or remote or anything
you want. Then you are only restricted by the scope of the engagement, and
not by the scope of your model.

* Quality assurance requirements
Don't just stop with the procedures to perform a pentest, guide your readers
with measurable standards so that they know that what they are doing is
being done well and is acceptable. If you have a formal model you should be
able to produce repeatable steps that can be measured, and that's what makes
a good methodology. You want something where you can compare the results,
and if 2 different pentesters assess identical networks/systems at the same
time using this methodology, the same set of tools, the same knowledge base
and have the same skills, there should be practically no difference between
their results. In another, more realistic scenario, a second pentester
reviewing the results from a previous pentester should be able to reach the
same results.
Thus, strict documentation requirements are important.

Don't base your quality suggestions only on your experience or rules of
thumb (more on this later); also use a formal and logical approach to assess
what needs to be covered and how deep you have to go. You will find that it
is nearly impossible to perform a "complete" pentest for some sizes of
infrastructure, but again, that limitation should be a problem of the
engagement scope, not of the scope of the model.

I have derived this not because I believe these methodologies are
lacking, but that I believe they fulfil different needs.

Anyway, please let me know your thoughts, public or otherwise.

Yes I understand there is not much meat on it, but I am still confirming
if my thoughts are different from other methodologies.

You are not the only one who feels that we still have to improve something
with respect to pentest methodologies, but that's also the reason why many are
still evolving and are not complete (some might never reach completion
because of designs that tie them to technological aspects and other
information that quickly becomes obsolete). Nevertheless, it is always a
good idea to contact the developers of such methodologies directly and ask
their points of view (no need to reinvent everything).

Many methodologies in information security (not only of pentesting) lack
formal definition; they usually tend to include personal experiences, rules
of thumb and so called "best practices".

A good methodology, defined formally, should allow the pentester to know that
what he is doing is the right thing in any environment (and varied
environments are the norm: with different companies, pentests usually are
radically different).
For example, instead of giving a list of instructions that depend on the
environment and technology such as: enumerate network targets, identify open
ports, identify services running on open ports, identify vulnerabilities,
execute exploits... you can say something like: Identify business processes,
identify information used within business process, identify systems where
this info. is stored and processed, identify communication channels for this
information, identify end points of these channels and the dependencies
between all elements of the business process network. This should guide any
pentester to at least identify all that needs to be assessed, provided they
get enough information. Whether they are proficient at assessing the
security at each potential attack vector is another story.
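The business-process-driven enumeration above can be sketched as a walk over a simple data model: processes hold information, information is stored on systems and moves over channels with endpoints. The structure and all names are invented for the example; a real model would carry far more detail:

```python
# Illustrative sketch: derive the set of elements that must be assessed
# from the business process description, not from a port scan.

def attack_surface(processes):
    """Collect every system, channel and endpoint touching business info."""
    surface = set()
    for proc in processes:
        for info in proc["information"]:
            surface.update(info.get("stored_on", []))      # storage points
            for chan in info.get("channels", []):
                surface.add(chan["medium"])                # the channel itself
                surface.update(chan.get("endpoints", []))  # both ends of it
    return surface

payments = {
    "name": "payment processing",
    "information": [{
        "name": "card data",
        "stored_on": ["db-server"],
        "channels": [{"medium": "tls-link",
                      "endpoints": ["web-frontend", "db-server"]}],
    }],
}
```

The output is technology-neutral: "tls-link" could just as well be a courier route or a printed report, and the same traversal would still enumerate it.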

Advantages of a non-technical based methodology:
* Obviously, you don't depend on the technology - God knows if the computers
of tomorrow will have network ports or what communication options will be
available. Also, you might not be considering other important devices such
as backup tapes and USB memory sticks as part of the communication channels
or storage. So your methodology becomes resistant against obsolescence.

* You don't restrict the scope of the methodology - What if the scope
doesn't define a remote, network-based assessment, but an assessment of a
specific application in a specific system? You should be able to apply the
same methodology so that you don't end with 10+ methodologies for different
scenarios and possibly still lack N methodologies for M unknown scenarios.

* You let the pentester know, at each stage, that every action he makes is
useful and justified - The pentester will also be able to justify, using
business terms that the client understands (i.e. referring to business
information), why he/she includes certain activities within the assessment
and disregards others, instead of saying things like "I run a port scan at
the beginning just because it is a best practice and everyone does it". 

* You don't risk assessing things where the impact is very low in terms of
the business (even if the security impact to that particular resource is
high) - E.g. a payment processing company won't care much if you identify N
ways to compromise a completely isolated web server that only displays a
static, informational webpage. But it will surely care if the
vulnerabilities are found in systems or media on which payment information
is stored, processed or transmitted. You force the pentester to gather as
much information on the business as possible, and by doing so you save money
and resources for you and your client by prioritizing the things that matter
most to your client.

* You can identify vulnerabilities based on the business process design more
easily - Not all vulnerabilities nor security solutions are technical in
nature. Many times the design of the business information flow itself is the
problem (and you won't find technical controls for every conceivable
vulnerability). In this case you don't expect that your methodology includes
all relevant technical and non-technical controls that need to be tested,
you just follow the information flow and identify all potential attack
vectors with their technical and non-technical vulnerabilities.

If you produce a formal methodology with proofs that it works with your
definitions and your scope I don't think it is a matter of whether we like
it or not. It either works or doesn't :-), and I'm not sure there is proof
that the current methodologies work perfectly (even with their stated
scopes) because there is no formal analysis (we just feel there is still
something wrong with them, and we might be right). After you propose a
verifiable, formal methodology we can discuss whether we like it or not,
whether it is easy/difficult to apply, whether it can be applied in a
reasonable amount of time and so on, i.e. all the things we tend to discuss
before discussing what matters most: How are you sure that your proposed
methodology works? Because you can formally prove it, or because some people
who have tried it so far say it is OK?

Some final thoughts: 
I don't buy that idea that a pentester must emulate the sometimes irrational
and chaotic way of thinking and acting of hackers; that's simply not
professional. Besides, there is no point having a methodology in this case;
one simply doesn't exist for hackers. For any methodology claiming to be
"the hackers' methodology", there will be some hackers who disagree with
following it; hackers are not restricted by methodologies, only by their
interests, desires and resources, but then they don't need to demonstrate
results within a limited time and with certain quality requirements ;-).

But that's not only a problem of pentesting, many other areas of information
security (with some notable exceptions, like cryptography) still promote the
"best practices" way of thinking (where there is a "one way" to do things
without even proving that it works and showing why). I don't believe that
any security professional should simply do things just because the industry
says so, or because some hackers claim to work this or that way.

As professionals we don't guess, we better know exactly what we are doing,
and even when facing uncertainty we should be able to measure it given
sufficient information. E.g. "I tested as best as I could this attack vector
for this piece of information but I identified another N attack vectors
which I couldn't test due to the limited scope of the engagement. Therefore,
I attest that, to the best of my knowledge and based on my skills and
experience, this piece of information is reasonably secure against attacks
using attack vector X, but I also stress that this only assures reasonable
security for 1 out of N possible attack vectors and cannot therefore
conclude that this information, in general, is secure".

Bruce Schneier recently published an article on security certifications that
measure "standard" knowledge
(http://www.schneier.com/blog/archives/2006/07/security_certif.html), where he
recognizes that they are valuable for "some" information security
activities. He specifically says that two of these activities, designing and
evaluating security systems, are not included in this set, and I agree.
Moreover, a good pentester will also need many non-technical skills (e.g.
communication skills).

This is the reason why I believe that both the methodologies and the training
for pentesting should be more formal, and not just based on other people's
experience and suggestions. We need professionals that can assess networks
in a reproducible, measurable and justifiable way, and that requires formal
methodologies that in turn require better prepared professionals (i.e.
people who know what they are doing and can demonstrate it in a report that
someone with non-technical background can read and understand).

This is just my opinion, and as such many can agree or disagree (or better
yet, prove me right or wrong with formal methods) ;-).


Omar Herrera


