Date: Fri, 30 Apr 1999 11:07:20 +0300
From: Sergey V. Kolychev
To: BUGTRAQ@netspace.org
Subject: Buffer overflow in ftpd and locate bug

Hi.

I had a problem with locate from findutils-4.1.24.rpm on Red Hat 5.1.
It segfaults if we have a huge directory under incoming ftp, created by
exploits for the ftpd realpath hole.  My ftpd is patched, so those
exploits should not worry me, but if updatedb puts that directory into
the locate database, then locate segfaults (getline.c line 104,
according to gdb).  I guess this could be used to run arbitrary
commands if root runs locate.

I had a look at the latest Red Hat 6.0 findutils-4.1.31.rpm, but it is
still based on findutils-4.1, just like findutils-4.1.24, and carries
no Red Hat patches concerning this, so I am sure it is still
vulnerable.

----------------------Alchevsk Linux User Group-----------------------
I don't call, I don't cry, I don't sorry.
All will gone like a white appletreeses's smoke... (S.Esenin)
http://www.ic.al.lg.ua/~ksv | e-mail: ksv@gw.al.lg.ua
PGP key & Geekcode: finger ksv@gw.al.lg.ua
----------------------------------------------------------------------------------------

[ NearZ ASCII-art banner ]                 f a c t o r y  99

-=[ Date: Tue Feb 23 19:30:32 1999
-=[ Text: Long pathnames on Linux (...and maybe others)

There are other programs vulnerable to long file/directory names.

We found another buffer overflow with pathnames, this one in `locate'.
Create a very long directory with subdirectories (about 9k of total
path length) and run the `updatedb' script so that the long directory
ends up in the locate database; then just type "locate AAAAAA" and you
will get a segmentation fault.  Debugging:

$ gdb locate
 .
 .
 .
(gdb) r AAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAA...etc...

Program received signal SIGSEGV, Segmentation fault.
0x80497b9 in getstr (lineptr=0xbffffac8, n=0xbffffac4, stream=0x804ba08,
    terminator=0 '\000', offset=65551) at getline.c:104
104             *read_pos++ = c;
(gdb)

So the culprit is the `offset' variable.  The function `getstr' reads
from a stream until `terminator' is found and writes the data it reads
into the buffer at lineptr+offset.  The initial buffer size is 1026,
and getstr realloc()s it when necessary, but it never checks whether
buffer+offset is out of bounds.  So an offset of 65551 points outside
the buffer.

We don't know whether it is possible to overflow the stack this way,
but if it is, a regular user could create a directory whose name
contains shellcode that copies a setuid shell into their home
directory, and then wait for `updatedb' to run (it is normally called
periodically from crontab).  Then, if root runs locate and the
shellcode executes, you have uid 0.

Workaround: updatedb is normally run as nobody.nogroup (by crontab) and
the permissions of `locatedb' are 644 (rw-r--r--), so you can change
the owner and permissions of `locate' to that user:

chown nobody.nogroup /usr/bin/locate
chmod 4711 /usr/bin/locate

That way, if a user tries to exploit this, they only get a shell setuid
to nobody.
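Conceptually, the missing piece in getstr() is a bounds check tying
`offset' to the current allocation before anything is written through
read_pos.  The sketch below only illustrates that pattern; it is a
simplified stand-in, not the actual findutils getline.c (the parameter
list matches the gdb frame above, but the growth strategy and sizes are
assumptions):

#include <stdio.h>
#include <stdlib.h>

int getstr(char **lineptr, size_t *n, FILE *stream,
           char terminator, size_t offset)
{
    char *read_pos;
    int c;

    /* The check described as missing above: grow the buffer so that
     * `offset' is guaranteed to lie inside it before anything is
     * written through read_pos. */
    if (*lineptr == NULL || *n < offset + 2) {
        size_t newsize = offset + 256;
        char *newbuf = realloc(*lineptr, newsize);

        if (newbuf == NULL)
            return -1;
        *lineptr = newbuf;
        *n = newsize;
    }
    read_pos = *lineptr + offset;

    while ((c = getc(stream)) != EOF && c != terminator) {
        size_t used = (size_t)(read_pos - *lineptr);

        if (used + 2 > *n) {            /* room for this byte plus a NUL */
            char *newbuf = realloc(*lineptr, *n * 2);

            if (newbuf == NULL)
                return -1;
            *lineptr = newbuf;
            *n *= 2;
            read_pos = *lineptr + used;
        }
        *read_pos++ = c;                /* getline.c:104 in the gdb output */
    }
    *read_pos = '\0';
    return (int)(read_pos - *lineptr);
}

With the first check in place, a database entry carrying a huge
front-compression offset costs memory at worst instead of writing
outside the buffer.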
That's not the end of it: we found another problem, in `rm -r', when we
tried to delete the big directory (created while testing wu-ftpd and
locate).

`rm' appears to recurse into the subdirectories so that it can start
rmdir'ing from the end of the path (rmdir does not remove non-empty
directories).  It seems that `rm' has a fixed-size buffer for the
current directory; when that buffer fills up, it stops recursing, tries
to delete the next directory, and fails with "Directory not empty".
`mc' (Midnight Commander) has the same problem, but it doesn't show
anything -- it simply fails to remove the directory.  The only way I
could remove the long pathname was to cd to a directory in the middle
of the path, run `rm -r' from there, and then go back towards the
original path and remove the rest.

Other programs we tested that have problems:

mc:      When you try to enter such a directory (hitting ENTER) it
         shows a warning, but you can still hit ENTER on the directory
         and mc segfaults.
mcedit:  Segmentation fault if a long pathname is given as an argument.
joe:     Opens the argument passed, but segfaults on ^C to exit.
vche:    (Hex editor) Shows "Can't open AAAA...AAA -> Press any
         key..."; when any key is pressed: segmentation fault.
pico:    Segmentation fault if a long pathname is given as an argument.
pine -F: Segmentation fault if a long pathname is given as an argument.
pine -f: Shows 'Problem detected: "Received abort signal".' if a long
         pathname is given as an argument.
vi:      Strange result if passed a long path: it shows "Error reading
         back from tmp file!" and, after pressing ^C, "//foo:
         unrecoverable -- header trashed".
which:   Just a segmentation fault if a long pathname is given as an
         argument.
ed:      Segmentation fault if a long pathname is given as an argument.

Sure, these programs are NOT set-uid; it's only paranoia >8)

Sorry if the English of this message isn't good.  We don't speak
English :)

Tested on: Linux 2.2.1-i486 / Slackware 3.6 / findutils 4.1
Tests by: tgo, sh1.

-----------------------------------------------------------------------
tgo@nearz.org                 psych0byte@nearz.org
www.nearz.org                 drkraptor@nearz.org
sh1@nearz.org                 revenge@nearz.org
bahamas@anti-ms.uground.org
-----------------------------------------------------------------------
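A plausible common thread in these failures is a fixed PATH_MAX-sized
path buffer somewhere in each tool: the trees described above are far
deeper than PATH_MAX, so anything that tries to hold or print the full
path in one piece breaks.  As a minimal illustration (hypothetical, not
taken from any of the programs above), even getcwd() into a PATH_MAX
buffer fails once you are deep enough inside such a tree:

#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <limits.h>
#include <unistd.h>

int main(void)
{
    char buf[PATH_MAX];

    /* Run from inside one of the deep trees: getcwd() into a
     * PATH_MAX-sized buffer fails (typically with ERANGE) as soon as
     * the working directory no longer fits. */
    if (getcwd(buf, sizeof(buf)) == NULL)
        fprintf(stderr, "getcwd: %s\n", strerror(errno));
    else
        printf("cwd still fits in PATH_MAX: %lu bytes used\n",
               (unsigned long)strlen(buf));
    return 0;
}

Tools that instead work one component at a time (chdir() down, operate,
chdir("..") back up) never hit this particular limit.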
----------------------------------------------------------------------------------------
Date: Sun, 2 May 1999 20:37:35 CEST
From: Przemyslaw Frasunek
To: BUGTRAQ@netspace.org
Subject: Re: Buffer overflow in ftpd and locate bug

> I had a problem with locate from findutils-4.1.24.rpm on Red Hat 5.1.
> It segfaults if we have a huge directory under incoming ftp, created by
> exploits for the ftpd realpath hole.  My ftpd is patched, so those
> exploits should not worry me, but if updatedb puts that directory into
> the locate database, then locate segfaults (getline.c line 104,
> according to gdb).  I guess this could be used to run arbitrary
> commands if root runs locate.

I've noticed a similar problem with /usr/bin/find on FreeBSD.  By
creating a _very_ long and deep directory structure it's possible to
segfault /usr/bin/find (which is also used in the /etc/periodic
scripts, which run as root).

Example:

I'm creating a directory structure with 300 subdirectories, each 255
characters long (source in attachment; it's also possible to do it via
ftpd, because the program only calls mkdir() and chdir()).

lagoon:venglin:/tmp/jc> find example > /dev/null
Segmentation fault (core dumped)

Gdb shows that puts() was overflowed when it tried to print a very long
path.  Other system tools (rm, ls) also have big problems with such
directory structures.

--
* Fido: 2:480/124 ** WWW: lagoon.freebsd.org.pl/~venglin ** GSM: 48-601-383657 *
* Inet: venglin@lagoon.freebsd.org.pl ** PGP: D48684904685DF43EA93AFA13BE170BF *

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/stat.h>

#define DUMP 0x41       /* 'A' */

int main(int argc, char *argv[])
{
    char buf[256];
    int i = 0;

    if (argc < 3) {
        fprintf(stderr, "usage: %s <dir> <count>\n", argv[0]);
        exit(1);
    }
    if (chdir(argv[1])) {
        fprintf(stderr, "error in chdir(): %s\n", strerror(errno));
        exit(1);
    }
    memset(buf, DUMP, 255);     /* 255-character directory name */
    buf[255] = '\0';
    for (i = 0; i < atoi(argv[2]) - 1; i++) {
        if (mkdir(buf, (S_IRWXU | S_IRWXG | S_IRWXO))) {
            fprintf(stderr, "error in mkdir() after %d iterations: %s\n",
                    i, strerror(errno));
            exit(1);
        }
        if (chdir(buf)) {
            fprintf(stderr, "error in chdir() after %d iterations: %s\n",
                    i, strerror(errno));
            exit(1);
        }
    }
    exit(0);
}

----------------------------------------------------------------------------------------
Date: Mon, 3 May 1999 22:41:09 +1000
From: Neale Banks
To: BUGTRAQ@netspace.org
Subject: Re: Possible Linuxconf Vulnerability

On Sat, 1 May 1999, Desync wrote:

[...]
> Obviously, someone would have to remove clock for this to occur, which
> would mean that either A) you had incorrect permissions on clock, or
> B) they had already used some other genuine exploit to make another
> program misbehave.

No, this is not "obvious".  Maybe OpenLinux, like Debian, doesn't have
a /sbin/clock?  Debian has /sbin/hwclock, which I suspect has the
functionality Linuxconf is looking for.  The "problem" may well be
Linuxconf _presuming_ the existence of /sbin/clock.

> If someone really wanted to do some damage with physical access to a
> machine, popping a rescue disk set into the drive and rebooting with
> the reset switch would do fine.

Agreed: there is much to be said for the assertion "physical access ==
game over".

Regards,
Neale.

----------------------------------------------------------------------------------------
Date: Mon, 3 May 1999 16:27:34 -0700
From: Crispin Cowan
To: BUGTRAQ@netspace.org
Subject: Re: Buffer overflow in ftpd and locate bug

"[tgo]" wrote:
> On 23 February I sent bugtraq a comment about this problem
> (ignored by aleph1?  hehe :)
> http://www.nearz.org/new/lynx/text/1999/FEB-Pathnames

Probably because I posted this "locate" vulnerability to Bugtraq in
September 1998:

   * http://www.geek-girl.com/bugtraq/1998_3/0867.html
   * http://www.geek-girl.com/bugtraq/1998_3/0873.html

However, the "rm" problem on tgo's page is new to me.

Crispin
-----
Crispin Cowan, Research Assistant Professor of Computer Science, OGI
NEW: Protect Your Linux Host with StackGuard'd Programs: FREE
     http://www.cse.ogi.edu/DISC/projects/immunix/StackGuard/
Support Justice: Boycott Windows 98

----------------------------------------------------------------------------------------
Date: Tue, 4 May 1999 11:43:25 +0700
From: Eugeny Kuzakov
To: BUGTRAQ@netspace.org
Subject: Re: Buffer overflow in ftpd and locate bug

On Sun, 2 May 1999, Przemyslaw Frasunek wrote:

> Example:
>
> I'm creating a directory structure with 300 subdirectories, each 255
> characters long (source in attachment; it's also possible to do it via
> ftpd, because the program only calls mkdir() and chdir()).

I tried it under 2.2-stable.
/usr/bin/find -- yes, it core dumps.
/bin/rm cannot delete this tree... 8-[ ]
I don't know how to remove it....
--
Best wishes,
Eugeny Kuzakov
Laboratory 321 ( Omsk, Russia )
kev@lab321.ru
ICQ#: 5885106

----------------------------------------------------------------------------------------
Date: Fri, 7 May 1999 01:31:22 -0400
From: Andrew Pitman
To: BUGTRAQ@netspace.org
Subject: Re: Buffer overflow in ftpd and locate bug

Eugeny,

Don't panic!!  I'm CC'ing this to Bugtraq in case some aren't aware of
this (very simple) solution: use rm -d (as the superuser) on the
top-level directory, then run fsck to free the unreferenced inodes
below it in the 'tree'.

Andrew
--
"The wonderful thing about standards is that there are so many to
choose from."  (Andrew S. Tanenbaum)
------------------------------+----------------------------------
Andrew Pitman                 | Management Information Systems,
Unix System Administrator/    | Technology Operations Support
Webmaster                     | at Rowan University
------------------------------+----------------------------------

On Tue, 4 May 1999, Eugeny Kuzakov wrote:

> On Sun, 2 May 1999, Przemyslaw Frasunek wrote:
>
> > Example:
> >
> > I'm creating a directory structure with 300 subdirectories, each 255
> > characters long (source in attachment; it's also possible to do it via
> > ftpd, because the program only calls mkdir() and chdir()).
>
> I tried it under 2.2-stable.
> /usr/bin/find -- yes, it core dumps.
> /bin/rm cannot delete this tree... 8-[ ]
> I don't know how to remove it....
>
> --
> Best wishes,
> Eugeny Kuzakov
> Laboratory 321 ( Omsk, Russia )
> kev@lab321.ru
> ICQ#: 5885106
>
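If rm -d is not available, such a tree can also be dismantled without
ever handling more than one path component at a time: chdir() down to
the bottom, then rmdir() each now-empty directory on the way back up.
The following is only a rough sketch, written for the single chain of
directories created by the attached program (the find_subdir() helper
and the 256-byte name buffer are assumptions; a tree containing files
or multiple branches would need a real depth-first walk):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <dirent.h>
#include <sys/types.h>
#include <sys/stat.h>

/* Find one subdirectory of the current directory and copy its name
 * into buf.  Returns 1 if one was found, 0 otherwise. */
int find_subdir(char *buf, size_t len)
{
    DIR *d = opendir(".");
    struct dirent *e;
    struct stat st;
    int found = 0;

    if (d == NULL)
        return 0;
    while (!found && (e = readdir(d)) != NULL) {
        if (strcmp(e->d_name, ".") == 0 || strcmp(e->d_name, "..") == 0)
            continue;
        if (lstat(e->d_name, &st) == 0 && S_ISDIR(st.st_mode)) {
            strncpy(buf, e->d_name, len - 1);
            buf[len - 1] = '\0';
            found = 1;
        }
    }
    closedir(d);
    return found;
}

int main(int argc, char *argv[])
{
    char name[256];     /* one path component at a time, never a full path */
    long depth = 0;

    if (argc < 2) {
        fprintf(stderr, "usage: %s <top of the deep tree>\n", argv[0]);
        exit(1);
    }
    if (chdir(argv[1])) {
        perror(argv[1]);
        exit(1);
    }
    /* Descend one component at a time; PATH_MAX never comes into play. */
    while (find_subdir(name, sizeof(name))) {
        if (chdir(name)) {
            perror(name);
            exit(1);
        }
        depth++;
    }
    /* Climb back up, removing the now-empty child at each level. */
    while (depth-- > 0) {
        if (chdir("..")) {
            perror("chdir ..");
            exit(1);
        }
        if (!find_subdir(name, sizeof(name))) {
            fprintf(stderr, "no subdirectory found on the way up\n");
            exit(1);
        }
        if (rmdir(name)) {
            perror(name);
            exit(1);
        }
    }
    return 0;
}

When it finishes, the top-level directory given on the command line is
empty and can be removed with an ordinary rmdir.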