Introduction to Operating Systems: History
The 1960s
The systems of the 1960s were also batch-processing systems, but they used the computer’s resources more efficiently by running several jobs at once. Systems included many peripheral devices such as card readers, card punches, printers, tape drives and disk drives. Any one job rarely used all the system’s resources efficiently. A typical job would use the processor for a certain period of time before performing an input/output (I/O) operation on one of the system’s peripheral devices. At this point, the processor would remain idle while the job waited for the I/O operation to complete.
The systems of the 1960s improved resource utilization by allowing one job to use the processor while other jobs used peripheral devices. In fact, running a mixture of diverse jobs—some jobs that mainly used the processor (called processor-bound jobs or compute-bound jobs) and some jobs that mainly used peripheral devices (called I/O-bound jobs)—appeared to be the best way to optimize resource utilization. With these observations in mind, operating systems designers developed multiprogramming systems that managed several jobs at once.10, 11, 12 In a multiprogramming environment, the operating system rapidly switches the processor from job to job, keeping several jobs advancing while also keeping peripheral devices in use. A system’s degree of multiprogramming (also called its level of multiprogramming) indicates how many jobs can be managed at once. Thus, operating systems evolved from managing one job to managing several jobs at a time.
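To make the benefit concrete, the following sketch (our own illustration, not from the original text) uses Java threads as a modern stand-in for multiprogrammed jobs: one "job" is I/O bound (its waits are simulated with Thread.sleep), the other is compute bound, and running them concurrently takes roughly as long as the longer job alone rather than the sum of the two. The class name MultiprogrammingSketch, the iteration counts and the sleep durations are illustrative assumptions only.

// Hypothetical sketch: overlapping a compute-bound "job" with an I/O-bound "job"
// using Java threads as a stand-in for multiprogrammed jobs. Thread.sleep()
// simulates waiting on a peripheral device; the busy loop simulates CPU work.
public class MultiprogrammingSketch {
    public static void main(String[] args) throws InterruptedException {
        long start = System.currentTimeMillis();

        Thread ioBoundJob = new Thread(() -> {
            try {
                for (int i = 0; i < 5; i++) {
                    Thread.sleep(200);   // "waiting" on an I/O device
                }
            } catch (InterruptedException ignored) { }
        });

        Thread computeBoundJob = new Thread(() -> {
            double x = 0;
            for (long i = 0; i < 50_000_000L; i++) {   // pure CPU work
                x += Math.sqrt(i);
            }
        });

        ioBoundJob.start();
        computeBoundJob.start();
        ioBoundJob.join();
        computeBoundJob.join();

        // Elapsed time is close to the longer of the two jobs, not their sum,
        // because the processor stays busy while the other job waits on "I/O".
        System.out.println("Elapsed ms: " + (System.currentTimeMillis() - start));
    }
}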
In multiprogrammed computing systems, resource sharing is one of the primary goals. When resources are shared among a set of processes, each process maintaining exclusive control over particular resources allocated to it, a process may be made to wait for a resource that never becomes available. If this occurs, that process will be unable to complete its task, perhaps requiring the user to restart it, losing all work that the process had accomplished to that point. In Chapter 7, Deadlock and Indefinite Postponement, we discuss how operating systems can deal with such problems.
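As a minimal, hypothetical illustration of the kind of problem examined in Chapter 7, the Java sketch below shows two threads that each hold one resource (represented by a lock) while waiting indefinitely for the resource held by the other; with the sleeps in place, neither thread can proceed. The names DeadlockSketch, tapeDrive and printer are illustrative, not from the text.

// Hypothetical sketch of the deadlock scenario described above: each of two
// threads holds one resource (a lock) and waits forever for the other's.
public class DeadlockSketch {
    private static final Object tapeDrive = new Object();
    private static final Object printer   = new Object();

    public static void main(String[] args) {
        Thread jobA = new Thread(() -> {
            synchronized (tapeDrive) {
                pause(100);                       // hold the tape drive...
                synchronized (printer) {          // ...then wait for the printer
                    System.out.println("Job A finished");
                }
            }
        });

        Thread jobB = new Thread(() -> {
            synchronized (printer) {
                pause(100);                       // hold the printer...
                synchronized (tapeDrive) {        // ...then wait for the tape drive
                    System.out.println("Job B finished");
                }
            }
        });

        jobA.start();
        jobB.start();   // with the pauses, neither job ever finishes: deadlock
    }

    private static void pause(long ms) {
        try { Thread.sleep(ms); } catch (InterruptedException ignored) { }
    }
}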
Normally, users of the 1960s were not present at the computing facility when their jobs were run. Jobs were submitted on punched cards or computer tapes and remained on input tables until the system’s human operator could load them into the computer for execution. Often, a user’s job would sit for hours or even days before it could be processed. The slightest error in a program, even a missing period or comma, would “bomb” the job, at which point the (often frustrated) user would correct the error, resubmit the job and once again wait hours or days for the next attempt at execution. Software development in that environment was painstakingly slow.
In 1964, IBM announced its System/360 family of computers (“360” refers to all points on a compass to denote universal applicability).13, 14, 15, 16 The various 360 computer models were designed to be hardware compatible, to use the OS/360 operating system and to offer greater computer power as the user moved upward in the series.17 Over the years, IBM evolved its 360 architecture to the 370 series18, 19 and, more recently, the 390 series20 and the zSeries.21
More advanced operating systems were developed to service multiple interactive users at once. Interactive users communicate with their jobs during execution. In the 1960s, users interacted with the computer via “dumb terminals” (i.e., devices that supplied a user interface but no processor power) which were online (i.e., directly attached to the computer via an active connection). Because the user was present and interacting with it, the computer system needed to respond quickly to user requests; otherwise, user productivity could suffer. As we discuss in the Operating Systems Thinking feature, Relative Value of Human and Computer Resources, increased productivity has become an important goal for computers because human resources are extremely expensive compared to computer resources. Timesharing systems were developed to support simultaneous interactive users.22 Many of the timesharing systems of the 1960s were multimode systems that supported batch-processing as well as real-time applications (such as industrial process control systems).23 Real-time systems attempt to supply a response within a certain bounded time period. For example, a measurement from a petroleum refinery indicating that temperatures are too high might demand immediate attention to avert an explosion.
Operating Systems Thinking
Relative Value of Human and Computer Resources
In 1965, reasonably experienced programmers were earning about $4 per hour. Computer time on mainframe computers (which were far less powerful than today's desktop machines) was commonly rented for $500 or more per hour—and that was in 1965 dollars which, because of inflation, would be comparable to thousands of dollars in today's currency! Today, you can buy a top-of-the-line, enormously powerful desktop computer for what it cost to rent a far less powerful mainframe computer for one hour 40 years ago! As the cost of computing has plummeted, the cost of man-hours has risen to the point that today human resources are far more expensive than computing resources.
Computer hardware, operating systems and software applications are all designed to leverage people’s time, to help improve efficiency and productivity. A classic example of this was the advent of timesharing systems in the 1960s: these interactive systems, with almost immediate response times, often enabled programmers to become far more productive than was possible with batch-processing systems, whose response times were measured in hours or even days. Another classic example was the advent of the graphical user interface (GUI), originally developed at the Xerox Palo Alto Research Center (PARC) in the 1970s. With cheaper and more powerful computing, and with the relative cost of people-time rising rapidly compared to that of computing, operating systems designers must provide capabilities that favor the human over the machine, exactly the opposite of what early operating systems did.
The resources of a real-time system are often heavily underutilized—it is more important for such systems to respond quickly than it is for them to use their resources efficiently. Servicing both batch and real-time jobs meant that operating systems had to distinguish between types of users and provide each with an appropriate level of service. Batch-processing jobs could suffer reasonable delays, whereas interactive applications demanded a higher level of service and real-time systems demanded extremely high levels of service.
The key timesharing systems development efforts of this period included the CTSS (Compatible Time-Sharing System)24, 25 developed by MIT, the TSS (Time Sharing System)26 developed by IBM, the Multics system27 developed at MIT, GE and Bell Laboratories as the successor to CTSS and the CP/CMS (Control Program/Conversational Monitor System)—which eventually evolved into IBM’s VM (Virtual Machine) operating system—developed by IBM’s Cambridge Scientific Center.28, 29 These systems were designed to perform basic interactive computing tasks for individuals, but their real value proved to be the manner in which they shared programs and data and demonstrated the value of interactive computing in program development environments.
The designers of the Multics system were the first to use the term process to describe a program in execution in the context of operating systems. In many cases, users submitted jobs containing multiple processes that could execute concurrently. In Chapter 3, Process Concepts, we discuss how multiprogrammed operating systems manage multiple processes at once.
In general, concurrent processes execute independently, but multiprogrammed systems enable multiple processes to cooperate to perform a common task. In Chapter 5, Asynchronous Concurrent Execution, and Chapter 6, Concurrent Programming, we discuss how processes coordinate and synchronize activities and how operating systems support this capability. We show many examples of concurrent programs, some expressed generally in pseudocode and some in the popular Java™ programming language.
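In the spirit of the Java examples just mentioned, here is a minimal sketch (our own illustration, not from the text) of two concurrent threads cooperating on a common task: both increment a shared counter, and synchronization coordinates their access so that no updates are lost. The class name CooperationSketch and the iteration count are illustrative assumptions.

// Hypothetical sketch of two concurrent "processes" (Java threads) cooperating
// on a common task: each increments a shared counter, and synchronization
// ensures the updates do not interfere with one another.
public class CooperationSketch {
    private static int counter = 0;
    private static final Object lock = new Object();

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) {
                synchronized (lock) {   // coordinate access to the shared counter
                    counter++;
                }
            }
        };

        Thread t1 = new Thread(work);
        Thread t2 = new Thread(work);
        t1.start();
        t2.start();
        t1.join();
        t2.join();

        // With synchronization the result is always 200000; without it,
        // interleaved updates could be lost and the total would vary.
        System.out.println("Counter: " + counter);
    }
}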
Turnaround time—the time between submission of a job and the return of its results—was reduced to minutes or even seconds. The programmer no longer needed to wait hours or days to correct even the simplest errors. The programmer could enter a program, compile it, receive a list of syntax errors, correct them immediately, recompile and continue this cycle until the program was free of syntax errors. Then the program could be executed, debugged, corrected and completed with similar time savings.
The value of timesharing systems in support of program development was demonstrated when MIT, GE and Bell Laboratories used the CTSS system to develop its successor, Multics. Multics was notable for being the first major operating system written primarily in a high-level language (EPL—modeled after IBM’s PL/I) instead of an assembly language. The designers of UNIX learned from this experience; they created the high-level language C specifically to implement UNIX. A family of UNIX-based operating systems, including Linux and Berkeley Software Distribution (BSD) UNIX, has evolved from the original system created by Dennis Ritchie and Ken Thompson at Bell Laboratories in the late 1960s (see the Biographical Note, Ken Thompson and Dennis Ritchie).
TSS, Multics and CP/CMS all incorporated virtual memory, which we discuss in detail in Chapter 10, Virtual Memory Organization, and Chapter 11, Virtual Memory Management. In systems with virtual memory, programs are able to address more memory locations than are actually provided in main memory, also called real memory or physical memory.30, 31 (Real memory is discussed in Chapter 9, Real Memory Organization and Management.) Virtual memory systems help remove much of the burden of memory management from programmers, freeing them to concentrate on application development.
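The following sketch (an illustrative assumption, not from the text and not how any of these historical systems were implemented) shows the essence of the idea: a program issues addresses in a large virtual space, and a page table maps only the resident pages onto frames of a much smaller real memory; a reference to a nonresident page causes a page fault that the operating system must service. The class name VirtualMemorySketch, the 4096-byte page size and the frame numbers are illustrative only.

// Hypothetical sketch of the idea behind virtual memory: a program addresses a
// large virtual space, and a page table maps only the pages currently resident
// in a much smaller "real" memory. Sizes here are illustrative, not historical.
import java.util.HashMap;
import java.util.Map;

public class VirtualMemorySketch {
    static final int PAGE_SIZE = 4096;
    // Page table: virtual page number -> physical frame number (if resident)
    static final Map<Integer, Integer> pageTable = new HashMap<>();

    static long translate(long virtualAddress) {
        int pageNumber = (int) (virtualAddress / PAGE_SIZE);
        int offset     = (int) (virtualAddress % PAGE_SIZE);
        Integer frame  = pageTable.get(pageNumber);
        if (frame == null) {
            // Page fault: the OS would fetch the page from secondary storage
            // and update the page table before retrying the access.
            throw new IllegalStateException("Page fault at page " + pageNumber);
        }
        return (long) frame * PAGE_SIZE + offset;
    }

    public static void main(String[] args) {
        pageTable.put(0, 7);    // virtual page 0 resides in physical frame 7
        pageTable.put(1, 2);    // virtual page 1 resides in physical frame 2
        System.out.println(translate(5000));   // maps into frame 2
    }
}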
Biographical Note
Ken Thompson and Dennis Ritchie
Ken Thompson and Dennis Ritchie are well known in the field of operating systems for their development of the UNIX operating system and the C programming language. They have received several awards and recognition for their accomplishments, including the ACM Turing Award, the National Medal of Technology, the NEC C&C Prize, the IEEE Emmanuel Piore Award, the IEEE Hamming Medal, induction into the United States National Academy of Engineering and the Bell Labs National Fellowship.32

Ken Thompson attended the University of California at Berkeley, where he earned a B.S. and M.S. in Computer Science, graduating in 1966.33 After college Thompson worked at Bell Labs, where he eventually joined Dennis Ritchie on the Multics project.34 While working on that project, Thompson created the B language that led to Ritchie’s C language.35 The Multics project eventually led to the creation of the UNIX operating system in 1969. Thompson continued to develop UNIX through the early 1970s, rewriting it in Ritchie’s C programming language.36 After Thompson completed UNIX, he made news again in 1980 with Belle, a chess-playing computer designed by Thompson and Joe Condon that won the World Computing Chess Championship. Thompson worked as a professor at the University of California at Berkeley and at the University of Sydney, Australia. He continued to work at Bell Labs until he retired in 2000.37

Dennis Ritchie attended Harvard University, earning a Bachelor’s degree in Physics and a Ph.D. in Mathematics. Ritchie went on to work at Bell Labs, where he joined Thompson on the Multics project in 1968. Ritchie is most recognized for his C language, which he completed in 1972.38 Ritchie added some extra capabilities to Thompson’s B language and changed the syntax to make it easier to use. Ritchie still works for Bell Labs and continues to work with operating systems.39 Within the past 10 years he has created two new operating systems, Plan 9 and Inferno.40 The Plan 9 system is designed for communication and production quality.41 Inferno is a system intended for advanced networking.42
Once loaded into main memory, programs could execute quickly; however, main memory was far too expensive to contain large numbers of programs at once. Before the 1960s, jobs were largely loaded into memory using punched cards or tape, a tedious and time-consuming task, during which the system could not be used to execute jobs. The systems of the 1960s incorporated devices that reduced system idle time by storing large amounts of rewritable data on relatively inexpensive magnetic storage media such as tapes, disks and drums. Although hard disks enabled relatively fast access to programs and data compared to tape, they were significantly slower than main memory. In Chapter 12, Disk Performance Optimization, we discuss how operating systems can manage disk input/output requests to improve performance. In Chapter 13, File and Database Systems, we discuss how operating systems organize data into named collections called files and manage space on storage devices such as disks. We also discuss how operating systems protect data from access by unauthorized users and prevent data from being lost when system failures or other catastrophic events occur.
Self Review
1. How did interactive computing and its improvement in turnaround time increase programmer productivity?
2. What new concept did TSS, Multics and CP/CMS all incorporate? Why was it so helpful for programmers?
Ans: 1) The time between submission of a job and the return of its results was reduced from hours or days to minutes or even seconds. This enabled programmers to interactively enter, compile and edit programs until their syntax errors were eliminated, then use a similar cycle to test and debug their programs. 2) TSS, Multics, and CP/CMS all incorporated virtual memory. Virtual memory allows applications access to more memory than is physically available on the system. This allows programmers to develop larger, more powerful applications. Also, virtual memory systems remove much of the memory management burden from the programmer.