Re: SQL Injection
From: Alex Russell <alex () netWindows org>
Date: Wed, 16 Jun 2004 10:21:05 -0700
Just catching up on my list reading (which I've been pretty remiss
with lately). Hopefully I'm not jumping in too late here.
On Tuesday 15 June 2004 7:04 am, David Cameron wrote:
> Exactly. I think that Alex Russell first started talking about
> "boundary validation"  (although I think that Sverre Huseby
> was talking about the concept previously),
So I think that when I started talking about it, I always made sure to
note that it's nothing new and certainly not something I somehow
invented.
> which refers to
> making sure that content inappropriate for the "service" on the
> other side of the boundary is appropriately filtered. This could
> be done on either side of the boundary, of course, but is not
> restricted to "input filtering".
This is something that, in a perfect world, might be done on both
sides of the boundary, since each boundary side will know what the
other side "is". Regardless, each self-contained subsystem will need
to filter outgoing data to remove whatever escaping (etc.) its
filtering may have introduced on the inbound side of things, so it
does need to be done on both sides for any subsystem that adopts it.
This isn't to say that doing it only on the inbound is bad, but
rather that you might mangle the data beyond belief if you don't.
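The round-tripping point above can be sketched in a few lines (the function names here are hypothetical, not from any real framework): a subsystem that escapes on the inbound side must undo its own escaping on the outbound side, or repeated boundary crossings progressively mangle the data.

```python
# Hypothetical sketch of a two-sided boundary filter: escape entering
# data, and reverse that escaping when the data leaves the subsystem,
# so repeated crossings don't mangle it.
import html

def inbound(value: str) -> str:
    """Filter data entering the subsystem: neutralize markup."""
    return html.escape(value)

def outbound(value: str) -> str:
    """Filter data leaving the subsystem: undo our own escaping."""
    return html.unescape(value)

original = "Bob's <script> trick"
stored = inbound(original)           # safe while inside the subsystem
assert "<" not in stored             # inbound filtering applied
assert outbound(stored) == original  # round-trips without mangling
```

A subsystem that only did the `inbound` half would hand the next tier doubly-escaped text after a second crossing, which is exactly the mangling risk described above.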
> Boundary filtering is an excellent concept which aids in good
> application design. However it can add to the complexity in other
> areas.
I think there has always been some agreement on this, but defense in
depth is always more complex when viewed at large. OTOH, it
simplifies the analysis path for determining the threats against any
individual component in the system, and components can be secured on
a one-by-one basis (which is work you'd be doing to come up with a
secure whole anyway).
> As I pointed out in the referenced discussion it pushes the
> error checking further from the source of the error.
But cross-boundary error transmission is something that all web apps
struggle with today anyway. Consider the case of an ODBC exception.
What happens in most apps today? Do things get logged centrally and
in a reasonable fashion? Does the user find out about the error
first? What should my app layer DO when it gets one of these? Many
times, these interactions are completely unspecified which leads to
all kinds of information gathering vulns.
But to go one better, consider an app that DOES log this stuff
centrally and I get said ODBC exception: where does it get handled?
Not usually in the DB layer, since that's not a good place to put
catch-all exception handling. More likely, it'll happen as close to
the edges of the HTTP transaction as the app container will allow
(global or catch-all exception handling routines are very common).
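A minimal sketch of that edge-of-the-transaction pattern (all names here are hypothetical): the catch-all handler sends the full exception to the central log and hands the user only a generic status, which is what closes the information-gathering hole.

```python
# Hypothetical sketch: a catch-all handler at the edge of the HTTP
# transaction. Full exception detail goes to the central log; the
# client sees only a generic message.
import logging
import traceback

log = logging.getLogger("app.errors")

def handle_request(handler, request):
    try:
        return handler(request)
    except Exception:
        # Rich detail stays server-side, logged in one specified place.
        log.error("unhandled error for %r\n%s",
                  request, traceback.format_exc())
        # The client learns only that *something* failed.
        return "500: an internal error occurred"

def broken(request):
    # Stand-in for a DB call that blows up with a revealing message.
    raise RuntimeError("ODBC: syntax error near 'DROP TABLE'")

print(handle_request(broken, "/comments"))
```

The point is that the interaction is now specified: every unhandled exception takes the same path, rather than leaking driver-specific error text to whoever is probing the app.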
> In addition to that, in some cases the source of the error might be
> the only place where this problem may be solved. I talked about this
> as a problem only in an asynchronous environment, but I think it has
> wider application. I'll try to explain using an example.
>
> Suppose you have a web application that goes something like this:
>
>   Front End (1)        Bus Logic (2)        DB (3)
>   ASP.Net    ------->  MSMQ      ------->   SQL Server
>              <-------  COM       <-------
> Applying Boundary filtering, SQL injection and basically any bad
> data would be filtered between 2 and 3. Taking the example further,
> suppose you have a comments field which is represented at the web
> app as a textbox and in the database as a VarChar(2000) column.
> Consider the case where someone enters 2001 characters into the
> field. As far as the web application is concerned, text is text and
> how much text should be allowed is a concern for the database. If
> the update is sent to an asynchronous MSMQ process, what action
> should be taken if it fails at the boundary? The solution proposed
> by Alex would be to drop the relevant message and log the issue for
> later.
Or mangle the message to fit the accepted policy and tell the source
system about it. That would be a per-filter policy.
> However from the user perspective the comment has been updated and
> there haven't been any errors, despite the fact that the comment
> hasn't in fact been updated.
I've often considered this question and I'm still somewhat undecided
about it. On the one hand, your system knows what happened, can tell
you about it later, and you haven't mangled any data. Should your
source system be told that something really really bad happened?
What's the protocol for that? Does it open you up to information
disclosure vulns? I've thought it might be an option that your filter
could support, but then I go back on it when I start thinking about
the amount of work required just to tune your filters.
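The per-filter policy idea can be made concrete with a small sketch (policy and field names are hypothetical, chosen to match the VarChar(2000) comments example): each boundary filter carries its own decision about whether an over-long field is silently dropped and logged, or mangled to fit.

```python
# Hypothetical per-filter policies for the comments example: an
# over-long comment is either dropped (and logged for later) or
# truncated to fit the accepted policy.
MAX_COMMENT = 2000  # matches the VarChar(2000) column in the example

def drop_policy(msg, log):
    log.append(("dropped", msg["id"]))
    return None  # message never reaches the next tier

def truncate_policy(msg, log):
    log.append(("truncated", msg["id"]))
    # Mangle the message to fit the accepted policy.
    return dict(msg, comment=msg["comment"][:MAX_COMMENT])

def apply_filter(msg, policy, log):
    """Boundary filter between tiers 2 and 3 in the example."""
    if len(msg["comment"]) <= MAX_COMMENT:
        return msg  # clean data passes through untouched
    return policy(msg, log)

log = []
too_long = {"id": 7, "comment": "x" * 2001}
assert apply_filter(too_long, drop_policy, log) is None
assert len(apply_filter(too_long, truncate_policy, log)["comment"]) == MAX_COMMENT
```

Either way the user's update did not land as entered, which is exactly the tension raised above: the system knows what happened, but the source system hasn't been told.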
> Now consider that there may be further tiers between 2 and 3. The
> problem then becomes more difficult again. I believe it is better to
> ensure that "exceptions" are passed back up the tree until they
> reach a place where they can be dealt with (in the example, all the
> way back to the user). In an asynchronous environment this may need
> to be implemented using some sort of callback system (verifying
> the update). I think this error handling "chain" is a common
> structure in OO systems.
I guess these days my thinking is that exception passing is different
from (but related to) the filtering task, and much more akin to a
"dumb logger". In the situation you describe, interposing more layers
also has the implied meaning that the information that would be
encapsulated in the base exception becomes less and less pertinent
(i.e., I only really use the exception as status information since
its value decreases with distance). All the front end cares about is
whether something happened or not, not necessarily WHAT happened,
aside from broad descriptions.
My filter may report any kind of status it wants to, but to do that it
has to be able to generate arbitrary exceptions and ensure that those
don't do something nasty themselves, so they are necessarily
information-poor. In the case where I'm getting input I don't like
and/or want, the front end should have ALREADY checked the length of
the input (in its input filter for the HTTP POST/GET) and reported
the problem there. In the case where we get all the way back to the
DB layer before this becomes a problem, one might hope that the DB
will first and foremost try to keep itself whole, and then as a
secondary responsibility, tell someone about what happened. It seems
entirely reasonable to have the filtering system generate 'filter
exceptions' at each boundary, but then you have to have some protocol
for encoding/decoding these as useful entities. I've started thinking
of logging as only the most rich ("trusted") recipient of status
information from a filter, and a throw exception would be another
kind that might include much less information, but still indicate
that something went pear-shaped. Filters have to be really smart to
handle this kind of distinction, and I haven't written filters that
use this generic concept of a "listener" yet, but I intend to.
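The listener idea above might look something like this sketch (all class and function names are hypothetical, since no such filter has been written yet): the log listener is the trusted, information-rich recipient, while the exception thrown across the boundary is deliberately information-poor.

```python
# Hypothetical sketch of the filter "listener" concept: rich detail
# goes to a trusted log listener; the exception crossing the boundary
# carries no detail at all.
class FilterViolation(Exception):
    """Information-poor signal: something went pear-shaped, no detail."""

class LogListener:
    """Trusted recipient: receives the full status information."""
    def __init__(self):
        self.records = []
    def notify(self, detail):
        self.records.append(detail)

def length_filter(value, limit, listeners):
    if len(value) > limit:
        for listener in listeners:
            # Rich detail stays with the trusted listeners.
            listener.notify(f"length {len(value)} exceeds {limit}: "
                            f"{value[:20]!r}...")
        # The exception itself carries no payload across the boundary.
        raise FilterViolation()
    return value

log = LogListener()
try:
    length_filter("x" * 2001, 2000, [log])
except FilterViolation as exc:
    assert str(exc) == ""        # poor: nothing leaks with the throw
assert "2001" in log.records[0]  # rich: full detail reaches the log
```

The same filter could notify several listeners of differing trust levels, with each one receiving only as much status information as its position warrants.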
alex () burstlib org BD10 7AFC 87F6 63F9 1691 83FA 9884 3A15 AFC9 61B7
alex () netWindows org F687 1964 1EF6 453E 9BD0 5148 A15D 1D43 AB92 9A46