
Re: LINUX is obsolete



 In article <12595@star.cs.vu.nl> ast@cs.vu.nl (Andy Tanenbaum) writes:
 >
 >I was in the U.S. for a couple of weeks, so I haven't commented much on
 >LINUX (not that I would have said much had I been around), but for what 
 >it is worth, I have a couple of comments now.
 >
 >As most of you know, for me MINIX is a hobby, something that I do in the
 >evening when I get bored writing books and there are no major wars,
 >revolutions, or senate hearings being televised live on CNN.  My real
 >job is a professor and researcher in the area of operating systems.
 >
 >As a result of my occupation, I think I know a bit about where operating
 >systems are going in the next decade or so.  Two aspects stand out:
 >
 >1. MICROKERNEL VS MONOLITHIC SYSTEM
 >   Most older operating systems are monolithic, that is, the whole operating
 >   system is a single a.out file that runs in 'kernel mode.'  This binary
 >   contains the process management, memory management, file system and the
 >   rest. Examples of such systems are UNIX, MS-DOS, VMS, MVS, OS/360, 
 >   MULTICS, and many more.
 >
 >   The alternative is a microkernel-based system, in which most of the OS
 >   runs as separate processes, mostly outside the kernel.  They communicate
 >   by message passing.  The kernel's job is to handle the message passing,
 >   interrupt handling, low-level process management, and possibly the I/O.
 >   Examples of this design are the RC4000, Amoeba, Chorus, Mach, and the
 >   not-yet-released Windows/NT.
 >
 >   While I could go into a long story here about the relative merits of the
 >   two designs, suffice it to say that among the people who actually design
 >   operating systems, the debate is essentially over.  Microkernels have won.
 >   The only real argument for monolithic systems was performance, and there
 >   is now enough evidence showing that microkernel systems can be just as
 >   fast as monolithic systems (e.g., Rick Rashid has published papers comparing
 >   Mach 3.0 to monolithic systems) that it is now all over but the shoutin'.
 
 Of course, there are some things that are best left to the kernel, be it
 micro or monolithic.  Like things that require playing with the process'
 stack, e.g. signal handling.  Like memory allocation.  Things like that.
 
 The microkernel design is probably a win, all in all, over a monolithic
 design, but it depends on what you put in the kernel and what you leave
 out.
 
 >   MINIX is a microkernel-based system.  The file system and memory management
 >   are separate processes, running outside the kernel.  The I/O drivers are
 >   also separate processes (in the kernel, but only because the brain-dead
 >   nature of the Intel CPUs makes that difficult to do otherwise).  
 
 Minix is a microkernel design, of sorts.  The problem is that it gives special
 privileges to mm and fs, when there shouldn't be any (at least for fs).  It
 also fails to integrate most of the functionality of mm into the kernel itself,
 and this makes things like signal handling and memory allocation *really*
 ugly.  If you did these things in the kernel itself, then signal handling
 would be as simple as setting a virtual interrupt vector and causing the
 signalled process to receive that interrupt.  The complication is that
 system calls might have to be terminated, which means a message would have
 to be sent to every process servicing the signalled process' system call,
 if any.  It's considerations like these that make the monolithic kernel
 design appealing.
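
 To make that concrete, here's a rough sketch in C of what I mean by a
 virtual interrupt vector.  The structures and names here are mine (this is
 not Minix or Linux code), and the context layout is simplified down to what
 fits in a news posting:

    #include <stdio.h>

    #define NSIG 32

    /* Saved user-mode context, as the kernel holds it at trap entry. */
    struct context {
        unsigned long pc;   /* program counter to resume at */
        unsigned long *sp;  /* user stack pointer */
    };

    struct proc {
        struct context ctx;
        void (*sigvec[NSIG])(int);  /* the "virtual interrupt vector" */
        int in_syscall;             /* set while a server handles our call */
    };

    /* Deliver 'sig' by rewriting the saved context: push the interrupted
     * pc and vector off to the handler, as a hardware interrupt would. */
    void deliver_signal(struct proc *p, int sig)
    {
        if (p->sigvec[sig] == 0)
            return;                 /* default disposition elided */
        if (p->in_syscall) {
            /* the complication from above: any server still working on
             * this process' system call must be told to terminate it */
        }
        *--p->ctx.sp = p->ctx.pc;                   /* push old pc */
        p->ctx.pc = (unsigned long)p->sigvec[sig];  /* take the "interrupt" */
    }

    void handler(int sig) { (void)sig; }            /* pretend user handler */

    int main(void)
    {
        unsigned long stack[64];
        struct proc p = { { 0x1000, &stack[64] }, { 0 }, 0 };

        p.sigvec[2] = handler;
        deliver_signal(&p, 2);
        printf("pc now 0x%lx, old pc 0x%lx saved on the stack\n",
               p.ctx.pc, *p.ctx.sp);
        return 0;
    }

 The point is that delivery becomes a simple context rewrite.  All the pain
 is in the in_syscall case, which is exactly the part a microkernel smears
 across several server processes.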
 
 The *entire* system call interface in Minix needs to be rethought.  As it
 stands right now, the file system is not just a file system, it's also a
 system-call server.  That functionality needs to be separated out in order
 to facilitate a multiple file system architecture.  Message passing is
 probably the right way to go about making the call and waiting for it, but
 the message should go to a system call server, not the file system itself.
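
 In other words, something along these lines, where the mount table and the
 server functions are stand-ins of my own invention, not anybody's real code:

    #include <stdio.h>
    #include <string.h>

    enum call { CALL_OPEN, CALL_READ, CALL_GETPID };

    struct message {
        int source;         /* sending process */
        enum call type;
        char path[64];      /* used by the file calls */
        int result;
    };

    /* Two hypothetical file system servers; in a real system each would
     * be a separate process reached by message passing. */
    int minixfs_server(struct message *m)
    { printf("minixfs handles %s\n", m->path); return 3; }

    int ffs_server(struct message *m)
    { printf("ffs handles %s\n", m->path); return 4; }

    /* The system call server: the only public entry point.  It knows
     * which server owns which mount; the callers never do. */
    void syscall_server(struct message *m)
    {
        switch (m->type) {
        case CALL_OPEN:
        case CALL_READ:
            if (strncmp(m->path, "/usr", 4) == 0)  /* toy mount table */
                m->result = ffs_server(m);
            else
                m->result = minixfs_server(m);
            break;
        case CALL_GETPID:
            m->result = m->source;  /* no file system involved at all */
            break;
        }
    }

    int main(void)
    {
        struct message m = { 42, CALL_OPEN, "/usr/bin/cc", 0 };
        syscall_server(&m);
        printf("fd = %d\n", m.result);
        return 0;
    }

 With the dispatch pulled out of fs, adding a second file system server is
 just another entry in the routing table instead of surgery on fs itself.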
 
 In order to handle all the special caveats of the Unix API, you end up writing
 a monolithic "kernel" even if you're using a microkernel base.  You end up
 with something called a "server", and an example is the BSD server that runs
 under Mach.
 
 And, in any case, the message-passing in Minix needs to be completely redone.
 As it is, it's a kludge.  I've been giving this some thought, but I haven't
 had time to do anything with what I've thought of so far.  Suffice it to say
 that the proper way to do message-passing is probably with message ports
 (both public and private), with the various visible parts of the operating
 system having public message ports.  Chances are, that ends up being the
 system call server only, though this will, of course, depend on the goals
 of the design.
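
 To give you an idea of what I have in mind, here's a toy version of such a
 port interface.  None of this is real code from any system; the names and
 queue sizes are arbitrary:

    #include <stdio.h>
    #include <string.h>

    #define MAXPORTS 16
    #define QDEPTH   8

    struct message { int sender; int type; long arg; };

    struct port {
        char name[16];              /* empty name => private port */
        struct message queue[QDEPTH];
        int head, count;
    };

    struct port ports[MAXPORTS];
    int nports;

    /* Create a port; "" makes it private, anything else is a public name. */
    int port_create(const char *name)
    {
        if (nports == MAXPORTS) return -1;
        strncpy(ports[nports].name, name, sizeof ports[nports].name - 1);
        return nports++;
    }

    /* Look up a public port; private ports are unreachable this way. */
    int port_lookup(const char *name)
    {
        for (int i = 0; i < nports; i++)
            if (ports[i].name[0] != '\0' && strcmp(ports[i].name, name) == 0)
                return i;
        return -1;
    }

    int port_send(int id, const struct message *m)
    {
        struct port *p = &ports[id];
        if (p->count == QDEPTH) return -1;      /* would block here */
        p->queue[(p->head + p->count++) % QDEPTH] = *m;
        return 0;
    }

    int port_receive(int id, struct message *m)
    {
        struct port *p = &ports[id];
        if (p->count == 0) return -1;           /* would block here */
        *m = p->queue[p->head];
        p->head = (p->head + 1) % QDEPTH;
        p->count--;
        return 0;
    }

    int main(void)
    {
        int sys = port_create("syscall");   /* the one public port */
        int reply = port_create("");        /* caller's private reply port */
        struct message m = { reply, 1, 0 }, r;

        port_send(port_lookup("syscall"), &m);  /* the "system call" */
        port_receive(sys, &r);                  /* server picks it up... */
        port_send(r.sender, &(struct message){ sys, 1, 0 });  /* ...replies */
        port_receive(reply, &r);
        printf("got reply from port %d\n", r.sender);
        return 0;
    }

 A private reply port per client keeps servers from having to know anything
 about their callers, and only the public names (probably just the system
 call server) are visible to the world.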
 
 >   LINUX is
 >   a monolithic style system.  This is a giant step back into the 1970s.
 >   That is like taking an existing, working C program and rewriting it in
 >   BASIC.  To me, writing a monolithic system in 1991 is a truly poor idea.
 
 Depends on the design criteria, as you should know.  If your goal is to
 design a Unix workalike that is relatively simple and relatively small,
 then a monolithic design is probably the right approach for the job, because
 unless you're designing for really backwards hardware, the problems of
 things like interrupted system calls, memory allocation within the kernel
 (so you don't have to statically allocate *everything* in your OS), signal
 handling, etc. all go away (or are at least minimized) if you use a
 monolithic design.  If you want the ability to bring up and take down
 file systems, add and remove device drivers, etc., all at runtime, then
 a microkernel approach is the right solution.
 
 Frankly, I happen to like the idea of removable device drivers and such,
 so I tend to favor the microkernel approach as a general rule.
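
 For illustration, a driver table along these lines is all it takes at the
 conceptual level (the interface here is invented for the example, and a
 real kernel obviously has locking and reference counting to worry about):

    #include <stdio.h>
    #include <string.h>

    struct driver {
        const char *name;           /* NULL name marks a free slot */
        int (*read)(char *buf, int n);
    };

    #define MAXDRV 8
    struct driver drivers[MAXDRV];

    /* Attach a driver at runtime: claim a free slot in the table. */
    int driver_attach(const struct driver *d)
    {
        for (int i = 0; i < MAXDRV; i++)
            if (drivers[i].name == NULL) { drivers[i] = *d; return 0; }
        return -1;
    }

    /* Detach by name: the slot frees up, no reboot required. */
    int driver_detach(const char *name)
    {
        for (int i = 0; i < MAXDRV; i++)
            if (drivers[i].name != NULL && strcmp(drivers[i].name, name) == 0) {
                drivers[i].name = NULL;
                return 0;
            }
        return -1;
    }

    int null_read(char *buf, int n) { (void)buf; (void)n; return 0; }

    int main(void)
    {
        struct driver null_drv = { "null", null_read };

        driver_attach(&null_drv);   /* brought up at runtime... */
        printf("attached: %s\n", drivers[0].name);
        driver_detach("null");      /* ...and taken down the same way */
        return 0;
    }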
 
 >2. PORTABILITY
 >   Once upon a time there was the 4004 CPU.  When it grew up it became an
 >   8008.  Then it underwent plastic surgery and became the 8080.  It begat
 >   the 8086, which begat the 8088, which begat the 80286, which begat the
 >   80386, which begat the 80486, and so on unto the N-th generation.  In
 >   the meantime, RISC chips happened, and some of them are running at over
 >   100 MIPS.  Speeds of 200 MIPS and more are likely in the coming years.
 >   These things are not going to suddenly vanish.  What is going to happen
 >   is that they will gradually take over from the 80x86 line.  They will
 >   run old MS-DOS programs by interpreting the 80386 in software.  (I even
 >   wrote my own IBM PC simulator in C, which you can get by FTP from
 >   ftp.cs.vu.nl =  192.31.231.42 in dir minix/simulator.)  I think it is a
 >   gross error to design an OS for any specific architecture, since that is
 >   not going to be around all that long.
 
 Again, look at the design criteria.  If portability isn't an issue, then
 why worry about it?  While LINUX suffers from lack of portability, portability
 was obviously never much of a consideration for its author, who explicitly
 stated that it was written as an exercise in learning about the 386
 architecture.
 
 And, in any case, while MINIX is portable in the sense that most of the code
 can be ported to other platforms, it *still* suffers from the limitations of
 the original target machine that drove the walk down the design decision tree.
 The message passing is a kludge because the 8088 is slow.  The kernel doesn't
 do memory allocation (thus not allowing FS and the drivers to get away with
 using a malloc library or some such, and thus forcing everyone to statically
 allocate everything), probably due to some other limitation of the 8088.
 The very idea of using "clicks" is obviously the result of the segmented
 architecture of the 8088.  The file system size is too limited (theoretically
 fixed in 1.6, but now you have *two* file system formats to contend with).
 If having the file system as a separate process is such a big win, then why
 don't we have two file system servers, eh?  Why simply extend the existing
 Minix file system instead of implementing BSD's FFS or some other
 high-performance file system?  It's not that I'm greedy or anything... :-)
 
 >   MINIX was designed to be reasonably portable, and has been ported from the
 >   Intel line to the 680x0 (Atari, Amiga, Macintosh), SPARC, and NS32016.
 >   LINUX is tied fairly closely to the 80x86.  Not the way to go.
 
 All in all, I tend to agree.
 
 >Don't get me wrong, I am not unhappy with LINUX.  It will get all the people
 >who want to turn MINIX into BSD UNIX off my back.  But in all honesty, I would
 >suggest that people who want a **MODERN** "free" OS look around for a 
 >microkernel-based, portable OS, like maybe GNU or something like that.
 
 Yeah, right.  Point me someplace where I can get a free "modern" OS and I'll
 gladly investigate.  But the GNU OS is currently vaporware, and as far as I'm
 concerned it will be for a LOOOOONG time to come.
 
 Any other players?  BSD 4.4 is a monolithic architecture, so by your
 definition it's out.  Mach is free, but the BSD server isn't (AT&T code,
 you know), and in any case, isn't the BSD server something you'd consider
 to be a monolithic design???
 
 Really.  Why do you think LINUX is as popular as it is?  The answer is
 simple, of course: because it's the *only* free Unix workalike OS in
 existence.  BSD doesn't qualify (yet).  Minix doesn't qualify.  XINU
 isn't even in the running.  GNU's OS is vaporware, and probably will
 be for a long time, so *by definition* it's not in the running.  Any
 other players?  I haven't heard of any...
 
 >Andy Tanenbaum (ast@cs.vu.nl)
 
 Minix is an excellent piece of work.  A good starting point for anyone who
 wants to learn about operating systems.  But it needs rewriting to make it
 truly elegant and functional.  As it is, there are too many kludges and
 hacks (e.g., the message passing).
 
 				Kevin Brown