The concept of the user had been developing for some time. Initially it was obvious who was doing what, since only one person had access to the computer at a time. With operators, and then monitor and batch systems, the system had to keep more and more information about users.
System programs were already monitoring and recording such information. It was kept safe by the security developments in memory and in the structure of processors, whereby the processor could operate at different privilege levels.
However, over the period of ten years or so during which these developments had been taking place, the cost of hardware had been falling dramatically. Instead of processing being the single largest cost, the work of the humans using or programming the computer came to be seen as the more expensive item.
Remember that in our wonderful, frighteningly efficient batch processing system the machine is working as fast as it can, at least for a good chunk of the day. But how has life changed for the poor programmer?
Well, it hasn't really changed that much since he or she was kicked off the machine by the introduction of the computer operators.
By the 1970s computers were relatively cheap. Now the people were the most expensive part of our computing facility, which means we had to make them more efficient (hopefully not quite in the same way we made the processor more efficient).
Programmers used to dream about being able to type directly into a computer, call the compiler and the linker, and then run their program. Just like the good old days.
The peculiar thing was that basically the technology to do this was already there. There was a bit of a hitch for a while when programmers had to be attached to computers via teletypewriters - sort of automatic typewriters. But even this was so preferable to waiting for the output from the batch system that the users didn't care.
In fact even in batch systems some work was being done interactively. The operators for example controlled the machine directly. The operators' terminal was usually referred to as the console (a term you still occasionally hear).
Even ordinary users were sometimes allowed to do some interactive work. In the Computer Centre of the University of Auckland, there was a strange terminal near where people submitted their jobs. The users could issue requests for information on their work and the computer would display where their jobs were and the state they were in.
You can see that the progression from this to more advanced interaction was one of degree more than one of kind.
A terminal to type at and receive output from the computer, obviously. What should this terminal look like to the computer? Well, it is going to supply input in a very similar way to a card reader, except that it will be a bit slower. Similarly, from the output side it looks like a printer (originally it was a printer).
Even the early cathode ray terminals looked like printers to the computer. They could only add characters on to the right of characters already present on the line and only scroll text up off the top of the screen.
Well, our operating systems certainly know how to handle card readers and printers. There are some slight differences, already alluded to, with the business of placing characters at arbitrary positions on the screen. An input difference was being able to correct lines of input before they were processed; we don't always type what we mean to type (a sketch of this line editing follows below). Eventually these differences, amongst others, were going to mean that terminals would have to be handled in special ways (unlike most other peripheral devices).
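To make that input difference concrete, here is a minimal sketch in C (not any particular system's terminal driver) of the kind of line editing a terminal handler performs before a program ever sees the input: keystrokes are collected in a buffer, an erase character (backspace here) removes the previous character, and only the corrected line is handed over.

    /* Sketch of terminal-style line editing: illustrative only. */
    #include <stdio.h>

    #define LINE_MAX_LEN 256

    /* Build the corrected line from the raw keystrokes. */
    static void cook_line(const char *raw, char *line)
    {
        size_t len = 0;
        for (const char *p = raw; *p != '\0'; p++) {
            if (*p == '\b') {                 /* erase character: drop the last character */
                if (len > 0)
                    len--;
            } else if (len < LINE_MAX_LEN - 1) {
                line[len++] = *p;             /* ordinary character: keep it */
            }
        }
        line[len] = '\0';
    }

    int main(void)
    {
        char line[LINE_MAX_LEN];
        /* The user types "mkae", erases three characters, then retypes "ake". */
        cook_line("mkae\b\b\bake", line);
        printf("the program receives: %s\n", line);   /* prints "make" */
        return 0;
    }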
The concept of logging on to the computer system was new. The operating system had already developed the concept of the user. Now the user had to identify himself or herself to the operating system, via the terminal. A program was associated with each terminal (some systems only allowed the one program, others allowed multiple programs). This was different to the limited form of interaction operators had with batch systems. The operating system did not need to verify the rights or privileges of the operators; only authorised people were supposed to have access in the first place.
The most fundamental change lay in how to decide which job should run next. In our nice orderly batch system the operating system had plenty of information provided by the users as to what resources each job would require. This information was then used to schedule the jobs according to the principle of maximising the use of the processor.
With an interactive system, we could still demand that people estimate their required resources. However we have no control over when these people come along and want immediate computing resources.
The number of processes in the system at any point in time increased dramatically. The reason for this is that we still want to be able to provide computing resources for about the same number of people who were using the batch system. Most of the time, when users are dealing with a computer interactively, the computer does not have to do much work for each user. A perfect example of this is word processing. From the processor's point of view, nothing much is happening. Even when the user is typing frantically, the speed of data being processed is minimal compared to the speed of the processor.
What is the system supposed to do when a user requests a resource which is unavailable, either because it is physically unavailable or because some other user is using it? This is a new problem. With batch systems, the operating system always knew in advance what resources a program required, so a program would only be scheduled when its resources were available.
So the operating system had to have a way of indicating to the user that the resource was unavailable. Of course for some resources, e.g. the printers, output had to be queued anyway: data for printers was spooled. Users then required some way of interrogating such queues in order to see how their work had progressed.
It was common for some jobs to be submitted to be processed whenever the resources became available. Thus some of the concepts of batch processing were carried over into the new environments. Some computer installations had batch processing machines attached to the interactive machines; jobs were then submitted to the batch machine from the interactive one. There are remnants of this even in systems such as UNIX: the at command and crontab.
If someone wanted to do something they shouldn't on a batch system, they didn't know until their job came back to them whether it had been successful or not. The chances of getting caught were high.
Under an interactive system, criminals could tell immediately whether they had been successful. The operating system had to be more vigilant because if some attack didn't work, the criminal could immediately try something else (all in an unpredictable way).
With several people using the computer at once, the idea of being able to communicate from one user to another became a reality. This took different forms.
If two people are working on the computer at the same time, it must be possible for messages to be passed from one to the other. The system must provide this, because we don't want someone else to have unprotected access to our terminal; they could mess up what appears on our screen. Obviously we need a way of starting and terminating such "conversations". We also need a way of being able to say "leave me alone, I don't want to be interrupted". UNIX has the talk command. (I have never used this.)
Extra requirements are ways of finding out who else is using the computer at this time, in order to find out if the person I want to talk to is around. Since each person has to have some system identification, such as a login name, this can be used to indicate who the messages are to go to. We sometimes also want to find out who else is using the system when something goes wrong: who is eating up all the processing time, for example.
As with phone calls, it is very likely that the person you want to talk to is unavailable at the moment. In this case we would like to be able to leave them messages which they can read, either when they return, or when they decide to read their messages. Of course this is the start of electronic mail.
As the ideas of networking were just starting to be developed, mail was only between users on the same machine.
The timesharing systems were not originally designed to be easy to use. This trend has continued in this type of system for a long time, largely due to the helpful influence of UNIX (irony?).
Understandably, computer experts tend to have a different view from other users of what makes a system useful. Computer experts tend to carry large amounts of information about the system around in their heads. This means they like to do clever things; they want the system to be flexible. In fact I think many experts regard a system as good if it gives them maximum control. UNIX is the perfect example of this. If you take any of the management commands and read its manual page you see multiple options. An interesting exercise is to look at early manual pages of the same command. Instead of each command getting easier and simpler to use, the opposite has happened as UNIX has matured.
From the user's perspective it could look as though changing from a timesharing interactive environment to a personal computer was hardly a change at all. After all, the timesharing system was designed to give the user the illusion of being the only person using the machine. This was taken to its absolute extreme by IBM's VM system, whereby it really does look as though you have your own machine and you have absolute control over it. But of course it wasn't a minimal change from the point of view of operating systems.
Because the first microcomputers were very small and slow, no one thought of trying to put scaled-down versions of real operating systems on them. This was a severe lack of foresight (well, some tried, but unfortunately they weren't very successful).
What we had for the first 10 years of personal computers were monitor systems once again, augmented with direct control from the user.
In particular, the whole concept of security was missing from the first few generations of operating systems for personal computers. The personal computer was seen as one user's isolated computing system. If the user wanted to trash all the files on her machine, that was her choice. A corollary of this was that if something went wrong with a program and it caused damage, it was only damaging the work of one person.
There were differences, mostly in the application programs which ran on the systems, rather than in the operating systems. The programs were easier to use, and gave greater assistance to the users. They tried to catch errors and return helpful messages. The application programs took over some of the tasks which had previously migrated to the operating system.
Microcomputers then followed a course very similar to the one larger computers had gone through. When it was decided that even one user could benefit from running more than one program at a time, multiprogramming was added to microcomputer systems. Protected memory then had to be added to ensure that maverick programs were restricted in the damage they could do.
Some developments had been made in the area of the user interface. These didn't coincide with the arrival of personal computers; rather they came about with the arrival of high-resolution graphical display devices. Until this time, all user interaction was carried out using fixed-font text character sets. The fonts were really fixed - in display device ROMs, for example.
The Macintosh wasn't the first computer with a Graphical User Interface, but it was the first such computer intended to be sold to individuals, rather than to companies. It is interesting to note that both the Macintosh and X-windows were released (or developed) in 1984.
Some people would argue about whether GUIs are really part of the operating system. Certainly on a UNIX system the X-windows client/server architecture is all provided at the level of user programs; it is not part of the kernel. The MacOS, on the other hand, doesn't make sense without the user interface: it is the defining feature of the system.
In order to get more work done, our computers had more than one processor placed inside them. Now true simultaneous processing could be done on several jobs. This had profound effects on the design of operating systems. Nothing particularly new had to be added, apart from keeping track of which jobs were running on which processors. However, the internals of the operating systems had to be completely rethought.
Taking UNIX as an example, large chunks of the traditional UNIX kernel were non-reentrant. This meant that only one process could be running in kernel mode at a time. This was fine if you only had one processor; however, it was an impossible design if you had several.
So systems had to be redesigned to incorporate ways of ensuring that several processes could safely run in the kernel at the same time, not only without interfering with each other but also without slowing each other down unnecessarily (see the sketch below).
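As a rough illustration, here is a small user-level sketch in C using POSIX threads, where the threads stand in for processors and the mutex stands in for an in-kernel lock; the run-queue counter is invented purely for this example. Without the lock the two "processors" could corrupt the shared value; with it every update is safe. A real multiprocessor kernel goes further and uses many fine-grained locks so that processors do not all queue up behind a single one.

    /* Two "processors" updating shared "kernel" data under a lock.
       Compile with: cc -pthread example.c */
    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t runq_lock = PTHREAD_MUTEX_INITIALIZER;
    static int runq_length = 0;                /* shared kernel-style data */

    static void *cpu(void *arg)
    {
        (void)arg;
        for (int i = 0; i < 100000; i++) {
            pthread_mutex_lock(&runq_lock);    /* only one "processor" at a time in here */
            runq_length++;                     /* safe update of the shared value */
            pthread_mutex_unlock(&runq_lock);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t a, b;
        pthread_create(&a, NULL, cpu, NULL);
        pthread_create(&b, NULL, cpu, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        printf("runq_length = %d\n", runq_length);   /* 200000 every time, thanks to the lock */
        return 0;
    }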
Probably the most significant change in computer systems (and hence operating systems) over the last decade has been the growth of networking, originally LANs and now the Internet.
Part of the problem with personal computers was that they were isolated from each other. With a traditional time-sharing system, the same data could be referred to by different people. LANs and file servers provided this to personal computers. Connecting computers raises all sorts of interesting problems from the operating system point of view.
Originally, network connections were tack-ons. The network just looked like another device. Later, the network was the major device (all files were stored somewhere else) and true network operating systems were developed.
In some ways things haven't changed that much since the first time-sharing systems. Operating systems have to manage memory, processes (possibly processors), files, devices, and users. But arising out of the demands of software engineering and a trend to "small is beautiful" (similar to the temporary trend to RISC processors), the design of the operating systems themselves has changed. In the first three decades of operating system history, more and more functions seemed to be getting thrown into the central part of the operating system, known as the kernel. Such a design is now known as a "monolithic kernel". There are now two different trends away from this.
Probably the most significant trend is towards smaller kernels, known as "microkernels". In such systems a lot of the functionality traditionally assigned to the kernel is carried out by user-level processes. For example, the file system doesn't have to be part of the kernel: all requests can be made via messages to the file system, which is a user-level program with permission to talk to the disk devices (a sketch of this follows below). This way different file systems can be experimented with on the same operating system. It is even possible to provide memory management via server processes.
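To give a flavour of the message-passing style, here is a small sketch in C. The message layout and the send_request() routine are invented for illustration (real microkernels such as Mach or Minix each have their own IPC primitives), but the shape is the same: a client builds a request, the kernel merely delivers the message, and a user-level file server does the actual work and replies.

    /* Invented message format for talking to a user-level file server. */
    #include <stdio.h>

    enum fs_op { FS_OPEN, FS_READ, FS_WRITE, FS_CLOSE };

    struct fs_request {
        enum fs_op op;
        int        handle;        /* open-file handle for READ/WRITE/CLOSE */
        char       path[128];     /* pathname for OPEN */
        unsigned   count;         /* number of bytes requested */
    };

    struct fs_reply {
        int      status;          /* 0 on success, negative error code otherwise */
        unsigned count;           /* bytes actually transferred */
    };

    /* Stand-in for the kernel's IPC primitive: in a real microkernel this
       would copy the message to the file server process and block until
       the reply came back. */
    static struct fs_reply send_request(const struct fs_request *req)
    {
        struct fs_reply rep = { 0, req->count };   /* pretend the server succeeded */
        return rep;
    }

    int main(void)
    {
        struct fs_request req = { FS_READ, 3, "", 512 };
        struct fs_reply   rep = send_request(&req);
        printf("read returned status %d, %u bytes\n", rep.status, rep.count);
        return 0;
    }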
The other trend goes the other way - into the hardware. Many other operating system tasks, such as dispatching processes, are migrating into microcode.
The complexity of today's programming environments, such as GUIs, means that different programs contain vast amounts of identical code, and we would like to be able to share it. In order for these "libraries" to be sharable, the operating system must know about them.
Such libraries form a middle ground between kernel code and ordinary unshared user level code. This ties in with the move towards smaller kernels.
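As a small, concrete taste of shared code at work, the POSIX dlopen/dlsym interface lets a running program map a shared library and call into it; only one copy of the library's code needs to be in memory no matter how many processes use it. The library name below is the Linux maths library and will differ on other systems.

    /* Mapping a shared library at run time (compile with: cc example.c -ldl). */
    #include <dlfcn.h>
    #include <stdio.h>

    int main(void)
    {
        void *lib = dlopen("libm.so.6", RTLD_LAZY);       /* map the shared maths library */
        if (lib == NULL) {
            fprintf(stderr, "dlopen failed: %s\n", dlerror());
            return 1;
        }

        /* Look up a function that lives in the shared code. */
        double (*cosine)(double) = (double (*)(double))dlsym(lib, "cos");
        if (cosine != NULL)
            printf("cos(0.0) = %f\n", cosine(0.0));

        dlclose(lib);
        return 0;
    }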
There is another point of view which should be mentioned when looking at the history of operating systems. When a company got larger, its computing requirements got larger and bigger computers were needed. Until the mid 1960s this meant rewriting programs and learning all about the bigger machine's different operating system. OS/360 from IBM changed all that. The idea was to have one operating system (in fact one machine architecture) available over a wide range of computers. This became very popular for obvious reasons. These days we still see the importance of this as manufacturers of computers and operating systems try to have open systems, whereby programs can be moved to different machines with different underlying operating system facilities but the interface to the programs is uniform. This is where things like the POSIX standards come in.
It is a good idea. However, if we take UNIX as our example, there are several different competing "standards" pushed for by different manufacturers. There will always be better ways of doing things; why can't I do them that way?
Standards are confining. The problem is weighing up the possible improvements against the potential advantage of software which works over a vast variety of machines.
However, standards are essential. We really do want our machines talking to each other. We really do want to be able to read and edit documents created by someone else on a different type of machine, with a different program.
Things such as Apple's OpenDoc, whereby documents rather than specific application programs become the basic operating system unit, are powerful steps in a new direction.
Java as a programming environment and virtual machine provides another way of coping with different machines. Not only does this give us a platform-independent programming language, there is also JavaOS, an operating system which provides the bare functionality to run Java programs on a wide range of hardware. It will be interesting to see how this catches on.