PaulDotCom mailing list archives

Re: Test Effort Estimation
From: Ryan Dewhurst <ryandewhurst () gmail com>
Date: Fri, 16 Nov 2012 16:44:57 +0100

Thanks all for the information!

I was thinking along the lines of 'anything can be measured', as Josh
said, but Josh also made a great point that 'not everything can be
predicted'.

On Mon, Oct 29, 2012 at 6:02 PM, Josh More <jmore () starmind org> wrote:
This same issue has been cycling around in the Agile community for years.

I do believe that anything can be measured... but I'm not sure that
everything can be predicted. There are a lot of variables in the mix:
skill of tester, motivation of tester, time available, tools
available, position on the attack/defense cycle, rules of engagement,
etc.

As a result, after arguing back and forth with people for the last
five years or so, I have completely abandoned the idea of metrics with
a goal of full coverage.

Instead, I frame the discussion around the value of the data and the
value of continued service and compare that to the cost of an
assessment.  We figure out how much they want to spend and I convert
that into hours.  They then get that much analysis.
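Josh's budget-driven scoping boils down to a simple conversion. A minimal sketch, assuming a hypothetical hourly rate (the figures below are illustrative, not from the email):

```python
# Sketch of budget-driven scoping: convert the client's assessment
# budget into analysis hours at an (assumed) hourly rate, instead of
# deriving hours from a target coverage level.
def budget_to_hours(budget: float, hourly_rate: float) -> float:
    """Return the number of analysis hours a given budget buys."""
    if hourly_rate <= 0:
        raise ValueError("hourly rate must be positive")
    return budget / hourly_rate

# Example: a $12,000 budget at $150/hour buys 80 hours of analysis.
print(budget_to_hours(12000, 150))
```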

Not every client goes for it, but since most engagements I've done
turn up more findings than any organization can truly address before
it's time for the next assessment, narrowing the coverage makes a lot
more sense (assuming that known exploration gaps are documented so
they can be hit in the next cycle).

-Josh More


On Sun, Oct 28, 2012 at 6:45 PM, Ryan Dewhurst <ryandewhurst () gmail com> wrote:
Hi,

I was wondering how to make Test Effort Estimation more efficient on
my black box web app tests. I think this is easier to do in white box
tests because you have a good metric, Lines of Code (LOC), but in
black box testing a metric is harder to find.

What I normally do, and what I expect most other people do, is give an
estimate based on past experience, but in my opinion this can be time
consuming and sometimes inaccurate. Time consuming because you have to
manually view each application to be tested and mentally compare it.
Inaccurate because I'm human, and on a particular day I might be
feeling *really* motivated and under-estimate the amount of time
(effort) needed, or vice versa. This, I feel, can lead to inaccuracy
and wasted time.

Another approach is to try to find a metric, which could then be
quantified into man hours.

A reasonable metric (far from perfect) I can think of for a typical
black box web app test (using automated tools and manual interaction)
is the number of unique dynamic pages the application has. This can
normally be obtained quite easily.

Let's say it takes 1 man hour to test 10 pages (plucking these numbers
out of the air).

If an app has 100 unique pages, the Test Effort Estimation would be 10
man hours.
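The pages-per-hour heuristic above can be sketched in a few lines. The default rate of 10 pages per man hour is the made-up figure from the email, not a recommendation:

```python
# Sketch of the pages-per-hour heuristic: estimate test effort from
# the number of unique dynamic pages and an assumed testing rate.
def estimate_effort(unique_pages: int, pages_per_hour: float = 10) -> float:
    """Return estimated man hours for a black box web app test."""
    if pages_per_hour <= 0:
        raise ValueError("pages_per_hour must be positive")
    return unique_pages / pages_per_hour

# Example from the email: a 100-page app at 10 pages/hour -> 10 man hours.
print(estimate_effort(100))
```

Tuning `pages_per_hour` from records of past engagements would be one way to address the inaccuracy of gut-feel estimates mentioned earlier.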

So my questions are:

Do you think there are better metrics to use other than number of unique pages?
Do you think there are better ways to do Test Effort Estimation on
black box web application tests?
How many man hours do you think it should typically take to test 1 unique page?

I think it is an interesting topic which hasn't been discussed much as
far as I could tell.

Ryan
_______________________________________________
Pauldotcom mailing list
Pauldotcom () mail pauldotcom com
http://mail.pauldotcom.com/cgi-bin/mailman/listinfo/pauldotcom
Main Web Site: http://pauldotcom.com
