Free For All - How Linux and the Free Software Movement Undercut the High Tech Titans
Peter Wayner (2002-12-22)

8. Outsider

The battle between the University of California at Berkeley's computer science department and AT&T did not reach the court system until 1992, but the friction between the department's devotion to sharing and the corporation's insistence on control started long before.

While the BSD team struggled with lawyers, a free man in Finland began to write his own operating system without any of the legal or institutional encumbrance. At the beginning he said it was a project that probably wouldn't amount to much, but only a few years later people began to joke about “Total World Domination.” A few years after that, they started using the phrase seriously.

In April 1991, Linus Torvalds had a problem. He was a relatively poor university student in Finland who wanted to hack in the guts of a computer operating system. Microsoft's machines at the time were the cheapest around, but they weren't very interesting. The basic Disk Operating System (DOS) essentially let one program control the computer. Windows 3.1 was not much more than a graphical front end to DOS featuring pretty pictures--icons--to represent the files. Torvalds wanted to experiment with a real OS, and that meant UNIX or something that was UNIX-like. These real OSs juggled hundreds of programs at one time and often kept dozens of users happy. Playing with DOS was like practicing basketball shots by yourself. Playing with UNIX was like playing with a team that had 5, 10, maybe as many as 100 people moving around the court in complicated, clockwork patterns.

But UNIX machines cost a relative fortune. The high-end customers requested the OS, so generally only high-end machines came with it. A poor university student in Finland didn't have the money for a top-notch Sun SPARCstation. He could only afford a basic PC, which came with the 386 processor. This was a top-of-the-line PC at the time, but it still wasn't particularly exciting. A few companies made a version of UNIX for this low-end machine, but they charged for it.

In June 1991, soon after Torvalds[3] started his little science project, the Computer Systems Research Group at Berkeley released what they thought was their completely unencumbered version of BSD UNIX known as Network Release 2. Several projects emerged to port this to the 386, and these evolved to become the FreeBSD and NetBSD versions of today. Torvalds has often said that he might never have started Linux if he had known that he could just download a more complete OS from Berkeley.

But Torvalds didn't know about BSD at the time, and he's lucky he didn't. Berkeley was soon snowed under by the lawsuit with AT&T claiming that the university was somehow shipping AT&T's intellectual property. Development of the BSD system came to a screeching halt as programmers realized that AT&T could shut them down at any time if Berkeley was found guilty of giving away source code that AT&T owned.

If he couldn't afford to buy a UNIX machine, he would write his own version. He would make it compatible with POSIX, a standard for UNIX designers, so others would be able to use it. Minix was another UNIX-like OS that a professor, Andrew Tanenbaum, wrote so students could experiment with the guts of an OS. Torvalds initially considered using Minix as a platform. Tanenbaum included the source code to his project, but he charged for the package. It was like a textbook for students around the world.

Torvalds looked at the price of Minix ($150) and thought it was too much. Richard Stallman's GNU General Public License had taken root in Torvalds's brain, and he saw the limitations in charging for software. GNU had also produced a wide variety of tools and utility programs that he could use on his machine. Minix was controlled by Tanenbaum, albeit with a much looser hand than many of the other companies at the time.

People could add their own features to Minix and some did. They did get a copy of the source code for $150. But few changes made their way back into Minix. Tanenbaum wanted to keep it simple and grew frustrated with the many people who, as he wrote back then, “want to turn Minix into BSD UNIX.”

So Torvalds started writing his own tiny operating system for this 386. It wasn't going to be anything special. It wasn't going to topple AT&T or the burgeoning Microsoft. It was just going to be a fun experiment in writing a computer operating system that was all his. He wrote in January 1992, “Many things should have been done more portably if it would have been a real project. I'm not making overly many excuses about it though: it was a design decision, and last April when I started the thing, I didn't think anybody would actually want to use it.”

Still, Torvalds had high ambitions. He was writing a toy, but he wanted it to have many, if not all, of the features found in full-strength UNIX versions on the market. On July 3, he started wondering how to accomplish this and placed a posting on the USENET newsgroup comp.os.minix, writing:

Hello netlanders, Due to a project I'm working on (in minix), I'm interested in the posix standard definition. Could somebody please point me to a (preferably) machine-readable format of the latest posix rules? Ftp-sites would be nice.

Torvalds's question was pretty simple. When he wrote the message in 1991, UNIX was one of the major operating systems in the world. The project that started at AT&T and Berkeley was shipping on computers from IBM, Sun, Apple, and most manufacturers of higher-powered machines known as workstations. Wall Street banks and scientists loved the more powerful machines, and they loved the simplicity and hackability of UNIX machines. In an attempt to unify the marketplace, computer manufacturers created a way to standardize UNIX and called it POSIX. POSIX ensured that each UNIX machine would behave in a standardized way.

Torvalds worked quickly. By September he was posting notes to the group with the subject line “What would you like to see most in Minix?” He was adding features to his clone, and he wanted to take a poll about where he should add next.

Torvalds already had some good news to report. “I've currently ported bash(1.08) and GCC(1.40), and things seem to work. This implies that I'll get something practical within a few months,” he said.

At first glance, he was making astounding progress. He created a working system with a compiler in less than half a year. But he also had the advantage of borrowing from the GNU project. Stallman's GNU project group had already written a compiler (GCC) and a nice text user interface (bash). Torvalds just grabbed these because he could. He was standing on the shoulders of the giants who had come before him.

The core of an OS is often called the “kernel,” which is one of the strange words floating around the world of computers. When people are being proper, they note that Linus Torvalds was creating the Linux kernel in 1991. Most of the other software, like the desktop, the utilities, the editors, the web browsers, the games, the compilers, and practically everything else, was written by other folks. If you measure this in disk space, more than 95 percent of the code in an average distribution lies outside the kernel. If you measure it by user interaction, most people using Linux or BSD don't even know that there's a kernel in there. The buttons they click, the websites they visit, and the printing they do are all controlled by other programs that do the work.

Of course, measuring the importance of the kernel this way is stupid. The kernel is sort of the combination of the mail room, boiler room, kitchen, and laundry room for a computer. It's responsible for keeping the data flowing between the hard drives, the memory, the printers, the video screen, and any other part that happens to be attached to the computer.

In many respects, a well-written kernel is like a fine hotel. The guests check in, they're given a room, and then they can order whatever they need from room service and a smoothly oiled concierge staff. Is this new job going to take an extra 10 megabytes of disk space? No problem, sir. Right away, sir. We'll be right up with it. Ideally, the software won't even know that other software is running in a separate room. If that other program is a loud rock-and-roll MP3 playing tool, the other software won't realize that when it crashes and burns up its own room. The hotel just cruises right along, taking care of business.

In 1991, Torvalds had a short list of features he wanted to add to the kernel. The Internet was still a small network linking universities and some advanced labs, and so networking was a small concern. He was only aiming at the 386, so he could rely on some of the special features that weren't available on other chips. High-end graphics hardware cards were still pretty expensive, so he concentrated on a text-only interface. He would later fix all of these problems with the help of the people on the Linux kernel mailing list, but for now he could avoid them.

Still, hacking the kernel means anticipating what other programmers might do to ruin things. You don't know if someone's going to try to snag all 128 megabytes of RAM available. You don't know if someone's going to hook up a strange old daisy-wheel printer and try to dump a PostScript file down its throat. You don't know if someone's going to create an endless loop that's going to write random numbers all over the memory. Stupid programmers and dumb users do these things every day, and you've got to be ready for it. The kernel of the OS has to keep things flowing smoothly between all the different parts of the system. If one goes bad because of a sloppy bit of code, the kernel needs to cut it off like a limb that's getting gangrene. If one job starts heating up, the kernel needs to try to give it all the resources it can so the user will be happy. The kernel hacker needs to keep all of these things straight.

Creating an operating system like this is no easy job. Many of the commercial systems crash frequently for no perceptible reason, and most of the public just takes it.[4] Many people somehow assume that it must be their fault that the program failed. In reality, it's probably the kernel's. Or more precisely, it's the kernel designer's fault for not anticipating what could go wrong.

By the mid-1970s, companies and computer scientists were already experimenting with many different ways to create workable operating systems. While the computers of the day weren't very powerful by modern standards, the programmers created operating systems that let tens if not hundreds of people use a machine simultaneously. The OS would keep the different tasks straight and make sure that no user could interfere with another.

As people designed more and more operating systems, they quickly realized that there was one tough question: how big should it be? Some people argued that the OS should be as big as possible and come complete with all the features that someone might want to use. Others countered with stripped-down designs that came with a small core of the OS surrounded by thousands of little programs that did the same thing.

To some extent, the debate is more about semantics than reality. A user wants the computer to be able to list the different files stored in one directory. It doesn't matter if the question is answered by a big operating system that handles everything or a little operating system that uses a program to find the answer. The job still needs to be done, and many of the instructions are the same. It's just a question of whether the instructions are labeled the “operating system” or an ancillary program.

But the debate is also one about design. Programmers, teachers, and the Lego company all love to believe that any problem can be solved by breaking it down into small parts that can be assembled to create the whole. Every programmer wants to turn the design of an operating system into thousands of little problems that can be solved individually. This dream usually lasts until someone begins to assemble the parts and discovers that they don't work together as perfectly as they should.

When Torvalds started crafting the Linux kernel, he decided he was going to create a bigger, more integrated version that he called a “monolithic kernel.” This was something of a bold move because the academic community was entranced with what they called “microkernels.” The difference is partly semantic and partly real, but it can be summarized by analogy with businesses. Some companies try to build large, smoothly integrated operations where one company controls all the steps of production. Others try to create smaller operations that subcontract much of the production work to other companies. One is big, monolithic, and all-encompassing, while the other is smaller, fragmented, and heterogeneous. It's not uncommon to find two companies in the same industry taking different approaches and thinking they're doing the right thing.

The design of an operating system often boils down to the same decision. Do we want to build a monolithic core that handles all the juggling internally, or do we want a smaller, more fragmented model that should be more flexible as long as the parts interact correctly?

In time, the OS world started referring to this core as the kernel of the operating system. People who wanted to create big OSs with many features wrote monolithic kernels. Their ideological enemies who wanted to break the OS into hundreds of small programs running on a small core wrote microkernels. Some of the most extreme folks labeled their work a nanokernel because they thought it did even less and thus was even more pure than those bloated microkernels.
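To make the two camps concrete, here is a toy sketch in Python, far simpler than any real kernel and using invented names (MonolithicKernel, FileServer, and the rest are illustrations, not Linux or Mach interfaces). The monolithic design answers every request with a direct call into one big body of code, while the microkernel keeps only a tiny message router and pushes the real work out to separate servers.

    # A toy sketch, not real kernel code.  All class and method names here
    # are invented for illustration.

    # Monolithic design: one big kernel answers every request directly.
    class MonolithicKernel:
        def read_file(self, path):
            return "contents of " + path        # file-system code lives inside the kernel
        def send_packet(self, data):
            return "sent %d bytes" % len(data)  # networking code lives inside the kernel

    # Microkernel design: a tiny core that only passes messages to servers.
    class FileServer:
        def handle(self, msg):
            return "contents of " + msg["path"]

    class NetServer:
        def handle(self, msg):
            return "sent %d bytes" % len(msg["data"])

    class Microkernel:
        def __init__(self):
            self.servers = {"file": FileServer(), "net": NetServer()}
        def send(self, server, msg):
            # The kernel itself stays small; the work happens elsewhere.
            return self.servers[server].handle(msg)

    mono = MonolithicKernel()
    micro = Microkernel()
    print(mono.read_file("/etc/passwd"))                  # direct call
    print(micro.send("file", {"path": "/etc/passwd"}))    # message passed to a server

The microkernel version looks cleaner on paper, which is part of why the academics were entranced with it, but every request now pays for an extra hop through the message router.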

The word “kernel” is a bit confusing for most people because they often use it to mean a fragment of an object or a small fraction. An extreme argument may have a kernel of truth to it. A disaster movie always gives the characters and the audience a kernel of hope to which to cling.

Mathematicians use the word a bit differently and emphasize the word's ability to let a small part define a larger concept. Technically, the kernel of a function f is the set of values x_1, x_2, . . . , x_n such that f(x_i) = 1, or whatever the identity element happens to be. The action of the kernel of a function does a good job of defining how the function behaves with all the other elements. The algebraists study the kernel of a function because it reveals the overall behavior.[5]
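Written out in the algebraists' notation, a sketch of the standard definition (with the identity element written here as e) looks like this:

    % Kernel of a map f into a structure with identity element e:
    \ker f = \{\, x \mid f(x) = e \,\}
    % The example in footnote 5: for f(x) = x^2 with e = 1,
    % \ker f = \{-1,\ 1\}

Squaring sends both -1 and 1 to the identity, so those two points make up the whole kernel, yet they are enough to tell you that the function folds the number line in half.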

The OS designers use the word in the same way. If they define the kernel correctly, then the behavior of the rest of the OS will follow. The small part of the code defines the behavior of the entire computer. If the kernel does one thing well, the entire computer will do it well. If it does one thing badly, then everything will suffer.

Many computer users often notice this effect without realizing why it exists. Most Macintosh computers, for instance, can be sluggish at times because the OS does not do a good job juggling the workload between processes. The kernel of the OS has not been completely overhauled since the early days when the machines ran one program at a time. This sluggishness will persist for a bit longer until Apple releases a new version known as MacOS X. This will be based on the Mach kernel, a version developed at Carnegie-Mellon University and released as open source software. Steve Jobs adopted it when he went to NeXT, a company that was eventually folded back into Apple. This kernel does a much better job of juggling different tasks because it uses preemptive multitasking instead of cooperative multitasking. The original version of the MacOS let each program decide when and if it was going to give up control of the computer to let other programs run. This low-rent version of juggling was called cooperative multitasking, but it failed when some program in the hotel failed to cooperate. Most software developers obeyed the rules, but mistakes would still occur. Bad programs would lock up the machine. Preemptive multitasking takes this power away from the individual programs. It swaps control from program to program without asking permission. One pig of a program can't slow down the entire machine. When the new MacOS X kernel starts offering preemptive multitasking, the users should notice less sluggish behavior and more consistent performance.
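The difference between the two schemes shows up in a toy simulation (hypothetical Python, nothing like real scheduler code; the task names and numbers are made up). Each program asks for a "burst" of processor time before it is willing to give the machine back. A cooperative scheduler has to sit through every burst; a preemptive one snatches the processor back after a fixed slice.

    # A toy simulation of cooperative versus preemptive multitasking.
    # Each task is a list of bursts: how many units of work it wants to
    # run before voluntarily giving up the processor.

    def cooperative(tasks):
        # The scheduler only gets control back when a task finishes its burst.
        timeline = []
        while any(tasks.values()):
            for name, bursts in tasks.items():
                if bursts:
                    units = bursts.pop(0)       # the task decides how long it runs
                    timeline.extend([name] * units)
        return timeline

    def preemptive(tasks, quantum=2):
        # The kernel forcibly takes the processor back every `quantum` units.
        timeline = []
        while any(tasks.values()):
            for name, bursts in tasks.items():
                if bursts:
                    run = min(bursts[0], quantum)
                    timeline.extend([name] * run)
                    bursts[0] -= run
                    if bursts[0] == 0:
                        bursts.pop(0)
        return timeline

    # A polite editor yields after short bursts; a greedy MP3 player wants
    # ten units in a row.
    jobs = {"editor": [1, 1, 1], "mp3": [10]}
    print("cooperative:", cooperative({k: list(v) for k, v in jobs.items()}))
    print("preemptive: ", preemptive({k: list(v) for k, v in jobs.items()}))

In the cooperative run, the editor stalls while the MP3 player burns through all ten of its units; in the preemptive run, the two are interleaved, which is the improvement MacOS X promised.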

Torvalds plunged in and created a monolithic kernel. This made it easier to tweak all the strange interactions between the programs. Sure, a microkernel built around a clean, message-passing architecture was an elegant way to construct the guts of an OS, but it had its problems. There was no easy way to deal with special exceptions. Let's say you want a web server to run very quickly on your machine. That means you need to treat messages coming into the computer from the Internet with exceptional speed. You need to ship them with the equivalent of special delivery or FedEx. You need to create a special exception for them. Tacking these exceptions onto a clean microkernel starts to make it look bad. The design starts to get cluttered and less elegant. After a few special exceptions are added, the microkernel can start to get confused.

Torvalds's monolithic kernel did not have the elegance or the simplicity of a microkernel OS like Minix or Mach, but it was easier to hack. New tweaks to speed up certain features were relatively easy to add. There was no need to come up with an entirely new architecture for the message-passing system. The downside was that the guts could grow remarkably byzantine, like the bureaucracy of a big company.

In the past, this complexity hurt the success of proprietary operating systems. The complexity produced bugs because no one could understand it. Torvalds's system, however, came with all the source code, making it much easier for application programmers to find out what was causing their glitch. To carry the corporate bureaucracy metaphor a bit further, the source code acted like the omniscient secretary who is able to explain everything to a harried executive. This perfect knowledge reduced the cost of complexity.

By the beginning of 1992, Linux was no longer a Finnish student's part-time hobby. Several influential programmers became interested in the code. It was free and relatively usable. It ran much of the GNU code, and that made it a neat, inexpensive way to experiment with some excellent tools. More and more people downloaded the system, and a significant fraction started reporting bugs and suggestions to Torvalds. He rolled them back in and the project snowballed.

8.1. A Hobby Begets a Project that Begets a Movement

On the face of it, Torvalds's decision to create an OS wasn't extraordinary. Millions of college-age students decide that they can do anything if they just put in a bit more elbow grease. The college theater departments, newspapers, and humor magazines all started with this impulse, and the notion isn't limited to college students. Millions of adults run Little League teams, build model railroads, lobby the local government to create parks, and take on thousands of projects big and small in their spare time.

Every great idea has a leader who can produce a system to sustain it. Every small-town lot had kids playing baseball, but a few guys organized a Little League program that standardized the rules and the competition. Every small town had people campaigning for parks, but one small group created the Sierra Club, which fights for parks throughout the world.

This talent for organizing the work of others is a rare commodity, and Torvalds had a knack for it. He was gracious about sharing his system with the world and he never lorded it over anyone. His messages were filled with jokes and self-deprecating humor, most of which were carefully marked with smiley faces (:-)) to make sure that the message was clear. If he wrote something pointed, he would apologize for being a “hothead.” He was always gracious in giving credit to others and noted that much of Linux was just a clone of UNIX. All of this made him easy to read and thus influential.

His greatest trick, though, was his decision to avoid the mantle of power. He wrote in 1992, “Here's my standing on 'keeping control,' in 2 words (three?): I won't. The only control I've effectively been keeping on Linux is that I know it better than anybody else.”

He pointed out that his control was only an illusion that was caused by the fact that he did a good job maintaining the system. “I've made my changes available to ftp-sites etc. Those have become effectively official releases, and I don't expect this to change for some time: not because I feel I have some moral right to it, but because I haven't heard too many complaints.”

As he added new features to his OS, he shipped new copies frequently. The Internet made this easy to do. He would just pop a new version up on a server and post a notice for all to read: come download the latest version.

He made it clear that people could vote to depose him at any time. “If people feel I do a bad job, they can do it themselves.” They could just take all of his Linux code and start their own version using Torvalds's work as a foundation.

Anyone could break off from Torvalds's project because Torvalds decided to ship the source code to his project under Richard Stallman's GNU General Public License, or GPL. In the beginning, he issued it with a more restrictive license that prohibited any “commercial” use, but he eventually moved to the GNU license. This was a crucial decision because it cemented a promise with anyone who spent a few minutes playing with his toy operating system for the 386. It stated that all of the source code that Torvalds or anyone else wrote would be freely accessible and shared with everyone. This decision was a double-edged sword for the community. Everyone could take the software for free, but if they started circulating some new software built with the code, they would have to donate their changes back to the project. It was like flypaper. Anyone who started working with the project grew attached to it. They couldn't run off into their own corner. Some programmers joke that this flypaper license is like sex. If you make one mistake by hooking up with a project protected by the GPL, you pay for it forever. If you ever ship a version of the project, you must include all of the source code. It can be distributed freely forever.

While some people complained about the sticky nature of the GPL, enough saw it as a virtue. They liked Torvalds's source code, and they liked the fact that the GPL made them full partners in the project. Anyone could donate their time and be sure it wasn't going to disappear. The source code became a body of work held in common trust for everyone. No one could rope it off, fence it in, or take control.

In time, Torvalds's pet science project and hacking hobby grew as more people got interested in playing with the guts of machines. The price was right, and idle curiosity could be powerful. Some wondered what a guy in Finland could do with a 386 machine. Others wondered if it was really as usable as the big machines from commercial companies. Others wondered if it was powerful enough to solve some problems in the lab. Still others just wanted to tinker. All of these folks gave it a try, and some even began to contribute to the project.

Torvalds's burgeoning kernel dovetailed nicely with the tools that the GNU project created. All of the work by Stallman and his disciples could be easily ported to work with the operating system core that Torvalds was now calling Linux. This was the power of freely distributable source code. Anyone could make a connection, and someone invariably did. Soon, much of the GNU code began running on Linux. These tools made it easier to create more new programs, and the snowball began to roll.

Many people feel that Linus Torvalds's true act of genius was in coming up with a flexible and responsive system for letting his toy OS grow and change. He released new versions often, and he encouraged everyone to test them with him. In the past, many open source developers using the GNU GPL had only shipped new versions at major landmarks in development, acting a bit like the commercial developers. After they released version 1.0, they would hole up in their basements until they had added enough new features to justify version 2.0.

Torvalds avoided this perfectionism and shared frequently. If he fixed a bug on Monday, then he would roll out a new version that afternoon. It was not strange for two or three new versions to hit the Internet in a single week. This was a bit more work for Torvalds, but it also made it much easier for others to become involved. They could watch what he was doing and make their own suggestions.

This freedom also attracted others to the party. They knew that Linux would always be theirs, too. They could write neat features and plug them into the Linux kernel without worrying that Torvalds would yank the rug out from under them. The GPL was a contract that lasted long into the future. It was a promise that bound them together.

The Linux kernel also succeeded because it was written from the ground up for the PC platform. When the Berkeley UNIX hackers were porting BSD to the PC platform, they weren't able to make it fit perfectly. They were taking a piece of software crafted for older computers like the VAX, and shaving off corners and rewriting sections until it ran on the PC.

Alan Cox pointed out to me, "The early BSD stuff was by UNIX people for UNIX people. You needed a calculator and familiarity with BSD UNIX on big machines (or a lot of reading) to install it. You also couldn't share a disk between DOS/Windows and 386BSD or the early branches off it.

“Nowadays FreeBSD understands DOS partitions and can share a disk, but at the time BSD was scary to install,” he continued.

The BSD code also took certain pieces of hardware for granted. Early versions of BSD required a 387, a numerical coprocessor that sped up floating-point arithmetic. Cox remembers that the price (about $100) was just too much for his budget. At that time, the free software world was a very lean organization.

Torvalds's operating system plugged a crucial hole in the world of free source software and made it possible for someone to run a computer without paying anyone for a license. Richard Stallman had dreamed of this day, and Torvalds came up with the last major piece of the puzzle.

8.2. A Different Kind of Trial

During the early months of Torvalds's work, the BSD group was stuck in a legal swamp. While the BSD team was involved with secret settlement talks and secret depositions, Linus Torvalds was happily writing code and sharing it with the world on the Net. His life wasn't all peaches and cream, but all of his hassles were open. Professor Andy Tanenbaum, a fairly well-respected and famous computer scientist, got into a long, extended debate with Torvalds over the structure of Linux. He looked down at Linux and claimed that it would have been worth two F's in his class because of its design. This led to a big flame war that was every bit as nasty as the fight between Berkeley and AT&T's USL. In fact, to the average observer it was even nastier. Torvalds returned Tanenbaum's fire with strong words like “fiasco,” “brain-damages,” and “suck.” He brushed off the bad grades by pointing out that Albert Einstein supposedly got bad grades in math and physics. The high-priced lawyers working for AT&T and Berkeley probably used very expensive and polite words to try to hide the shivs they were trying to stick in each other's backs. Torvalds and Tanenbaum pulled out each other's virtual hair like a squawkfest on the Jerry Springer show.

But Torvalds's flame war with Tanenbaum occurred in the open in an Internet newsgroup. Other folks could read it, think about it, add their two cents' worth, and even take sides. It was a wide-open debate that uncovered many flaws in the original versions of Linux and Tanenbaum's Minix. They forced Torvalds to think deeply about what he wanted to do with Linux and consider its flaws. He had to listen to the arguments of a critic and a number of his peers on the Net and then come up with arguments as to why his Linux kernel didn't suck too badly.

This open fight had a very different effect from the one going on in the legal system. Developers and UNIX hackers avoided the various free versions of BSD because of the legal cloud. If a judge decided that AT&T and USL were right, everyone would have to abandon their work on the platform. While the CSRG worked hard to get free, judges don't always make the choices we want.

The fight between Torvalds and Tanenbaum, however, drew people into the project. Other programmers like David Miller, Ted Ts'o, and Peter da Silva chimed in with their opinions. At the time, they were just interested bystanders. In time, they became part of the Linux brain trust. Soon they were contributing source code that ran on Linux. The argument's excitement forced them to look at Torvalds's toy OS and try to decide whether his defense made any sense. Today, David Miller is one of the biggest contributors to the Linux kernel. Many of the original debaters became major contributors to the foundations of Linux.

This fight drew folks in and kept them involved. It showed that Torvalds was serious about the project and willing to think about its limitations. More important, it exposed these limitations and inspired other folks on the Net to step forward and try to fix them. Everyone could read the arguments and jump in. Even now, you can dig up the archives of this battle and read in excruciating detail what people were thinking and doing. The AT&T/USL-versus-Berkeley fight is still sealed.

To this day, all of the devotees of the various BSDs grit their teeth when they hear about Linux. They think that FreeBSD, NetBSD, and OpenBSD are better, and they have good reasons for these beliefs. They know they were out the door first with a complete running system. But Linux is on the cover of the magazines. All of the great technically unwashed are now starting to use “Linux” as a synonym for free software. If AT&T had never sued, the BSD teams would be the ones reaping the glory. They would be the ones to whom Microsoft turned when it needed a plausible competitor. They would be more famous.

But that's crying over spilled milk. The Berkeley CSRG lived a life of relative luxury in their world made fat with big corporate and government donations. They took the cash, and it was only a matter of time before someone called them on it. Yes, they won in the end, but it came too late. Torvalds was already out of the gate and attracting more disciples.

McKusick says, “If you plot the installation base of Linux and BSD over the last five years, you'll see that they're both in exponential growth. But BSD's about eighteen to twenty months behind. That's about how long it took between Net Release 2 and the unencumbered 4.4BSD-Lite. That's about how long it took for the court system to do its job.”

 3. Everyone in the community, including many who don't know him, refers to him by his first name. The rules of style prevent me from using that in something as proper as a book.

 4. “Microsoft now acknowledges the existence of a bug in the tens of millions of copies of Windows 95 and Windows 98 that will cause your computer to 'stop responding (hang)'--you know, what you call crash--after exactly 49 days, 17 hours, 2 minutes, and 47.296 seconds of continuous operation. . . . Why 49.7 days? Because computers aren't counting the days. They're counting the milliseconds. One counter begins when Windows starts up; when it gets to 2^32 milliseconds--which happens to be 49.7 days--well, that's the biggest number this counter can handle. And instead of gracefully rolling over and starting again at zero, it manages to bring the entire operating system to a halt.”--James Gleick in the New York Times.

 5. The kernel of f(x) = x^2 is {-1, 1}, and it illustrates how the function has two branches.



License: Free For All is Licensed under a Creative Commons License. This License permits non-commercial use of this work, so long as attribution is given. For more information about the license, visit https://creativecommons.org/licenses/by-nc/1.0/

