Nmap Development mailing list archives

Re: Ncat with ssl using 100% cpu (PATCH)
From: David Fifield <david () bamsoftware com>
Date: Fri, 19 Jun 2009 12:00:59 -0600

On Thu, Jun 04, 2009 at 05:30:46PM -0600, David Fifield wrote:
On Tue, May 19, 2009 at 04:30:32PM -0300, el draco wrote:
Hi everyone, I was testing ncat a little bit and found that under
certain conditions it uses all of my CPU.

I'm using:
Kubuntu 8.10
Kernel 2.6.27-14-generic SMP
openssl 0.9.8g-10
libssl-dev 0.9.8g-10
Nmap 4.85BETA9, svn rev. 13330

Test case 1:

a) ncat -l 8000 --ssl
b) ncat localhost 8000 --ssl

So far so good, and now we type anything on the CLIENT like 'test'

Now ncat client is using 100% of cpu.

Thanks. I can reproduce this. There used to be a similar problem for
non-SSL connections, but it was fixed in Ncat. From some investigation,
it appears that this problem is inside Nsock: in some situations, select
always reports the descriptor as ready to write.

The problem is this code:

    /* Decrement the count of waiting writes on this IOD. When it hits 0 we
       remove it from the descriptor lists. */
    assert(iod->writesd_count >= 0);
    if (!iod->ssl && iod->writesd_count == 0) {
      FD_CLR(iod->sd, &ms->mioi.fds_master_w);
      FD_CLR(iod->sd, &ms->mioi.fds_results_w);
    } else if (iod->ssl && iod->events_pending <= 1) {
      /* Exception: If this is an SSL socket and there is another
         pending event (such as a read), it might actually be waiting
         on a write so we can't clear in that case */
      FD_CLR(iod->sd, &ms->mioi.fds_master_r);
      FD_CLR(iod->sd, &ms->mioi.fds_results_r);
      FD_CLR(iod->sd, &ms->mioi.fds_master_w);
      FD_CLR(iod->sd, &ms->mioi.fds_results_w);
    }

When the write is finished, Nsock normally checks if there are any other
writes pending, and if not, it removes the descriptor from the set of
descriptors it is watching. If we were to keep watching the descriptor
it would always be ready to write and use up all the CPU, being
repeatedly selected but not handled.

In the SSL case, it is possible that a read event can require a network
write, because of how the protocol works. The code correctly notes this
and avoids clearing the descriptor bit if a read is pending. However, it
should be more careful and leave the bit set only when a read event
actually requires a network write at that moment. Ncat in client mode
always has a pending read event from the network, so write bits never
get cleared. (The exception is immediately after a successful network
read, when the count of reads momentarily drops to zero before the next
read is scheduled. Then it's possible to clear the write bit, and that's
why it stopped using 100% CPU after reading from the server.)

I attached a patch that works for me but I want others to review it. It
provides functions (socket_count_*) that automatically keep the file
descriptor sets in sync with the number of pending reads and writes.
They make sure that when a count is zero the bit is unset, and when a
count is nonzero the corresponding bit is set. When an SSL read requires
a write, the read count is decremented and the write count is
incremented:

       socket_count_read_dec(iod, ms);
       socket_count_write_inc(iod, ms);
But when the write count is zero, the corresponding socket descriptor is
removed from the select set so it won't be repeatedly selected.

Any thoughts on the concept or implementation?

David Fifield

Attachment: nsock-count.diff

Sent through the nmap-dev mailing list
Archived at http://SecLists.Org
