oss-sec mailing list archives

Re: CVE Request: OFED 1.5.2 /proc/net/sdpstats reading local denial of service/crash
From: Marcus Meissner <meissner () suse de>
Date: Wed, 7 Sep 2011 10:23:01 +0200

On Wed, Sep 07, 2011 at 09:39:21AM +0200, Petr Matousek wrote:
On Tue, Sep 06, 2011 at 11:40:43PM +0200, Marcus Meissner wrote:
One of our customers reported an issue in the "ib_sdp" module in the
ofa_kernel package of the OpenFabrics OFED InfiniBand driver stack, version
1.5.2 (and potentially older versions; I did not check in detail, but at
least 1.4.2 does not have it).

Module is drivers/infiniband/ulp/sdp/ib_sdp.ko

/proc/net/sdpstats is world readable and writable (S_IRUGO | S_IWUGO), so
the issue can be triggered by unprivileged users on machines with the
InfiniBand stack loaded.

While there is a report of stack corruption and overflow on exit of the
reading process (cat /proc/net/sdpstats), with the kernel message "Thread
overran stack, or stack corrupted", I can't see where it actually comes
from; perhaps the per-cpu vs. single-variable printing does something to
the stack and does not just read past the end of arrays.

#define __sdpstats_seq_hist_pcpu(seq, msg, hist) ({             \
        u32 h[NR_CPUS];                                         \
        unsigned int __i;                                       \
        memset(h, 0, sizeof(h));                                \

NR_CPUS can be big (4096 on RHEL6 x86_64) and the array is located on
the stack.
 
OFED 1.5.3.2 has a different stat-printing algorithm according to our
developer, so it is no longer affected.

The array ^^^ is no longer allocated from the stack but via vmalloc().

Patch below. Please assign a CVE.

Please use CVE-2011-3345.

Thanks!

So the issue is not actually a wrong array iterator, but that the helper
implicitly uses far too much stack.

Ciao, Marcus

