Protection and security - part 3

Capabilities

If we look along the rows of the access control matrix we see the access rights associated with each protection domain. This information is very hard to collect if we use access control lists, because it is spread all over the system. If we consider every process as running within its own protection domain, we can think of storing a list of access rights with the process. On the face of things this may seem rather dangerous, since the process could alter its list of rights. But as long as the rights can only be altered or moved around by the protection system, this method works quite nicely. In this case the rights are referred to as "capabilities".

Each subject carries its own list of capabilities and when it wants to access an object it passes the required capability to the object (or the system). A capability looks like a password, except that it is one the user can't change. Another way of thinking about a capability is as the hidden name of, or pointer to, the routine which performs the access on the object.

Capabilities carry the information as to what access rights they permit, e.g. "I am a capability to open this file and read it (but not modify it)". If a subject has a capability it can pass that capability to another subject (sharing an advantage of passwords). However, unlike passwords, capabilities can't be guessed or forged.
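
As a concrete picture, here is a minimal sketch of what a capability might look like as a data structure. The structure and the rights names are hypothetical, purely for illustration; real systems pack these fields much more carefully.

    #include <stdio.h>
    #include <stdint.h>

    /* Hypothetical rights bits -- the names are illustrative only. */
    #define CAP_READ  0x01
    #define CAP_WRITE 0x02

    /* A capability names an object and says what its holder may do to it. */
    struct capability {
        uint32_t object_id; /* which object this capability refers to */
        uint8_t  rights;    /* bitmask of permitted operations */
    };

    int main(void)
    {
        /* "I am a capability to open this file and read it (but not modify it)." */
        struct capability cap = { 42, CAP_READ };

        printf("read allowed:  %s\n", (cap.rights & CAP_READ)  ? "yes" : "no");
        printf("write allowed: %s\n", (cap.rights & CAP_WRITE) ? "yes" : "no");
        return 0;
    }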

Copying

There are several questions which we need to ask in order to understand capabilities properly. First off, how do we stop someone else copying our capabilities? The simple answer is that on a non-networked machine we can rely on ordinary memory protection to stop others reading our capabilities. On a networked system we will have to do better: if someone can tap into our connection they will be able to get copies of our capabilities. Encryption plays an important role here, as we will see.

Forging

To stop anyone forging a capability we give only the operating system the means to create capabilities. When a subject creates a new object the system creates an owner capability and gives it to the subject. Possessing an owner capability usually means you can do anything you want with the object.

When a subject wants to access an object it must pass the capability to the system. What is stopping another program from making up its own capability and passing it on?

The system can use special hardware, whereby capabilities have an extra bit indicating that they are capabilities, and user level programs cannot set this bit. We must also insist that such capabilities cannot be altered in such a way as to add access rights. One way to do this is to have any user level attempt to manipulate a word always clear the capability bit. Hardware which provides an extra bit to indicate capabilities can usually tag a wider variety of data types by adding extra bits.
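
A rough software picture of the tag bit idea (the real mechanism is in hardware; this sketch only mimics it): every user-level store clears the tag, so nothing a user program writes can ever pass as a capability.

    #include <stdio.h>
    #include <stdint.h>

    /* Software mimicry of a tagged memory word: the hardware keeps one
       extra bit per word saying "this word is a capability". */
    struct tagged_word {
        uint64_t value;
        int      tag;   /* the capability bit; only the system sets it */
    };

    /* Every user-level store clears the tag -- the crucial rule. */
    void user_store(struct tagged_word *w, uint64_t v)
    {
        w->value = v;
        w->tag = 0;
    }

    int main(void)
    {
        struct tagged_word w = { 0x1234, 1 };   /* a genuine capability */
        user_store(&w, w.value | 0x1);          /* try to add a right */
        printf("still a capability? %s\n", w.tag ? "yes" : "no");
        return 0;
    }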

Another approach is to not allow subjects to look after their own capabilities at all. Instead the system keeps a table for every subject with that subject's capabilities stored in it. Think of it as a system protected area of the process's address space; the table can only be modified by the system. When the subject wants to access some object the system checks this table to see if it holds the required capability.
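
A sketch of that idea, with hypothetical names: the table (often called a capability list, or C-list) lives in system space, and a subject's access is granted only through a check like this.

    #include <stdio.h>
    #include <stdint.h>

    #define CAP_READ  0x01
    #define CAP_WRITE 0x02
    #define MAX_CAPS  16

    struct capability { uint32_t object_id; uint8_t rights; };

    /* One capability list per subject, stored in system space where the
       subject itself cannot touch it. */
    struct subject {
        struct capability caps[MAX_CAPS];
        int ncaps;
    };

    /* The system's check on every access: does this subject hold a
       capability for the object with the requested right? */
    int system_check(const struct subject *s, uint32_t object, uint8_t right)
    {
        for (int i = 0; i < s->ncaps; i++)
            if (s->caps[i].object_id == object && (s->caps[i].rights & right))
                return 1;   /* access permitted */
        return 0;           /* no matching capability: access denied */
    }

    int main(void)
    {
        struct subject alice = { { { 7, CAP_READ | CAP_WRITE } }, 1 };
        printf("write object 7: %s\n", system_check(&alice, 7, CAP_WRITE) ? "ok" : "denied");
        printf("read object 9:  %s\n", system_check(&alice, 9, CAP_READ)  ? "ok" : "denied");
        return 0;
    }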

Capabilities may still be passed around from one subject to another but only via the mediating operating system.

This model of capabilities means that the users don't need to know about capabilities at all. It also means that a user can't directly give a capability to someone else; all such actions have to go through the operating system.

Encryption

This is great, except for networked environments where the object we want to use is on a different machine and our request has to go over the wire. In this case we have to be more careful, because a process could try to forge a capability. We have to use an encryption method such as the one employed by Andy Tanenbaum's Amoeba system.

The capability consists of 72 bits of object specification (server + object, actually), 8 bits of access control (what the capability lets you do), and the whole 80 bits is passed through a secret encryption algorithm to give an extra 48 bits of check.

When the operating system performs a task it checks the encrypted bits to make sure the capability is authentic.
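
A sketch of the scheme, following the field sizes in the lecture. The mixing function below is a toy stand-in for Amoeba's secret one-way function, and the 72-bit object specification is truncated to 64 bits here; the point is only that the server, knowing the secret, can recompute the check field and spot any tampering.

    #include <stdio.h>
    #include <stdint.h>

    struct amoeba_cap {
        uint64_t object;   /* server + object specification */
        uint8_t  rights;   /* 8 bits of access control */
        uint64_t check;    /* 48-bit check field */
    };

    static const uint64_t SECRET = 0x5DEECE66DULL;   /* known only to the server */

    /* Toy keyed hash -- NOT cryptographically sound, for illustration only. */
    static uint64_t mix(uint64_t object, uint8_t rights)
    {
        uint64_t h = SECRET ^ object ^ ((uint64_t)rights << 56);
        h *= 0x9E3779B97F4A7C15ULL;
        h ^= h >> 29;
        return h & 0xFFFFFFFFFFFFULL;   /* keep 48 bits */
    }

    struct amoeba_cap mint(uint64_t object, uint8_t rights)
    {
        struct amoeba_cap c = { object, rights, mix(object, rights) };
        return c;
    }

    /* The server recomputes the check; a forged or altered capability fails. */
    int verify(const struct amoeba_cap *c)
    {
        return c->check == mix(c->object, c->rights);
    }

    int main(void)
    {
        struct amoeba_cap c = mint(1234, 0x01);   /* a read-only capability */
        printf("genuine: %s\n", verify(&c) ? "ok" : "rejected");
        c.rights |= 0x02;                         /* try to add write rights */
        printf("forged:  %s\n", verify(&c) ? "ok" : "rejected");
        return 0;
    }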

Passing and reducing

With any of these methods the user is still able to pass capabilities on (just like passwords) but with added security. It is also possible to pass on a reduced capability: you may give someone the capability to read one of your files but not to change it, for example.

Removing

One of the major problems with capabilities is removing already given privileges. The only effective way to do this is to change the required capability so that old ones no longer work. This is not completely satisfactory, because legitimate subjects are locked out of the object until they receive the new capability.

Other methods include periodically revoking the capabilities of all subjects; a new capability is issued when the next access is requested.

See the text book for more methods.

A similar problem occurs with old capabilities hanging around unused on a system for a long time. The object they refer to may have been substantially altered over time, but the old capabilities still give access to it.

Locks and keys

With access control lists revocation of rights is trivial, but information about the rights of each protection domain is scattered throughout the system.

With capabilities revocation is a bother but the information for each protection domain is kept conveniently together.

A middle approach uses locks and keys. Each protection domain has a list of keys (like capabilities). Each object has a list of locks. When a process wants to access a resource, the correct key has to be sent to the object.

The protection information for each protection domain is stored together, as the list of keys, and it is easy to revoke privileges by deleting the corresponding lock. The same access rights can have multiple locks, corresponding to different protection domains. This is commonly used with databases, where different fields have different locks.
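
A minimal sketch of the mechanism, with hypothetical names: a domain holds keys, an object holds locks, and access is granted when some key matches some lock. Revocation is just deleting a lock.

    #include <stdio.h>
    #include <stdint.h>

    #define MAX 8

    struct domain { uint64_t keys[MAX];  int nkeys;  };
    struct object { uint64_t locks[MAX]; int nlocks; };

    /* Access is allowed when one of the domain's keys matches a lock. */
    int may_access(const struct domain *d, const struct object *o)
    {
        for (int i = 0; i < d->nkeys; i++)
            for (int j = 0; j < o->nlocks; j++)
                if (d->keys[i] == o->locks[j])
                    return 1;
        return 0;
    }

    /* Revocation is easy: delete the lock and every matching key goes dead. */
    void revoke(struct object *o, uint64_t lock)
    {
        for (int j = 0; j < o->nlocks; j++)
            if (o->locks[j] == lock)
                o->locks[j] = o->locks[--o->nlocks];
    }

    int main(void)
    {
        struct domain d = { { 0xBEEF }, 1 };
        struct object o = { { 0xBEEF, 0xCAFE }, 2 };
        printf("before revoke: %s\n", may_access(&d, &o) ? "ok" : "denied");
        revoke(&o, 0xBEEF);
        printf("after revoke:  %s\n", may_access(&d, &o) ? "ok" : "denied");
        return 0;
    }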

Hierarchical protection

We have already discussed a completely different type of security. When we looked at the history of operating systems we talked about system calls, and how the invention of processor states made the memory of the operating system and of other programs safe from unwanted alteration or access.

By the way, the area around system calls has to be carefully checked; many security breaches have occurred because system call parameters haven't been checked thoroughly. More about this later.

A generalisation of this scheme is sometimes known as hierarchical protection.

Of course it is possible to have more than 2 such domains (user and system). Many processors actually devote hardware to provide 4 or more protection levels. If the only overlap between levels is complete coverage, i.e. more privileged levels always include all privileges of less privileged levels, then we have a hierarchical protection system.
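
The check itself is then nothing more than a comparison of two numbers. A minimal sketch, using the common convention that level 0 is the most privileged:

    #include <stdio.h>

    /* Hierarchical protection: an access is allowed whenever the caller's
       level is at least as privileged as the level the resource demands. */
    int ring_allows(int caller_level, int required_level)
    {
        return caller_level <= required_level;
    }

    int main(void)
    {
        printf("kernel (0) touching user data (3): %d\n", ring_allows(0, 3));
        printf("user (3) touching kernel data (0): %d\n", ring_allows(3, 0));
        return 0;
    }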

Since this onion skin approach is similar to the way some operating systems are designed this method is particularly appropriate to those systems.

A disadvantage with such a hierarchical method is that it gives more privileges than necessary to programs, i.e. it breaks the "need to know" rule.

This is partly because it solves another problem. When writing programs for others to use, the program sometimes needs to be able to do things which the person using it would not be allowed to do themselves, e.g. a program which allows someone to access a special group of files in a protected way. By temporarily changing the protection level of the running program, the required access can be allowed. Of course this is not the only way this access can be provided.

UNIX memory security

You can think of UNIX memory management security as being a trivial form of hierarchical security. The kernel is part of the address space of all processes. While running in the kernel, all memory can be accessed. When running in user mode (a less privileged protection domain) only user memory within the process can be accessed.

Security kernels

As I have already mentioned designing security into a system from the beginning is much better than adding it on later.

Many secure operating systems have been designed with lots of emphasis on the kernel. The kernels are kept small, making it easier to verify the design and the intended security.

Functions which are focussed on are:

Hacking

The term "hacking" means different things to different people. Originally (and preferably, in my opinion) a "hack" was a particularly clever piece of programming. The "hacker's" club was a prestiguous group of designers and programmers formed in the late 70's.

Unfortunately the term has since come to mean attempting to break into computer systems, usually via modem. People who do this are known as "hackers"; I prefer to call such people "illegal intruders", to remove any vestiges of glamour from the pursuit.

Worms and viruses

Some security problems are caused not by someone wanting to read or surreptitiously alter files, but by someone merely wanting to cause problems for the legitimate users.

Worms and viruses fit in this category. These programs spread across networks and via infected files. Worms and viruses are different from other programs which attack systems because they make copies of themselves and try to move onto other systems.

We can try to prevent viruses and worms via a variety of methods: not allowing unauthorised changes to programs, or only installing software which we are sure is virus free. Or we can detect viruses, either at work, trying to add themselves to bits of code which will be run at a later stage, or at rest, when code has been modified. For the latter we need to know what the code should be. Virus detection programs commonly use checksums such as CRCs to see if a program has been altered.
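
A sketch of detection "at rest", assuming we recorded a checksum while the program was known to be clean. The CRC-32 routine below is the standard bit-by-bit version; everything else is illustrative.

    #include <stdio.h>
    #include <stdint.h>

    /* Standard (reflected) CRC-32 over a block of bytes. */
    uint32_t crc32(const uint8_t *data, size_t len)
    {
        uint32_t crc = 0xFFFFFFFF;
        for (size_t i = 0; i < len; i++) {
            crc ^= data[i];
            for (int k = 0; k < 8; k++)
                crc = (crc & 1) ? (crc >> 1) ^ 0xEDB88320 : crc >> 1;
        }
        return ~crc;
    }

    int main(void)
    {
        uint8_t program[] = "pretend this is the code of some utility";
        uint32_t clean = crc32(program, sizeof program);   /* stored when clean */

        program[3] ^= 1;                                   /* a virus flips one bit */
        uint32_t now = crc32(program, sizeof program);

        printf(now == clean ? "program unmodified\n" : "program has been altered!\n");
        return 0;
    }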

The famous Internet Worm of 1988 employed a collection of techniques we have already seen in order to propagate over UNIX systems: use of poor passwords (it had a table of common ones, and did simple transformations on known user information); bugs in software, such as forcing a program to leave data on the stack by overrunning an input buffer; and "sendmail" installed with the debug option on, allowing commands to be sent to other machines.

Trojan horses

We have already mentioned one of the best known Trojan horses, replacing the usual login screen with a dummy screen to catch other people's passwords. A Trojan horse is usually regarded as a program which purports to do something useful (or fun, or novel) but, when it is run, does something else. It may in fact do what the user thinks it is supposed to do, but unbeknownst to the user it does something else as well, such as copying, corrupting or deleting files or other information.

A sure way of fostering Trojan horses is to give utility programs more rights than they really need. In a UNIX system (as in many others) when you invoke a program written by someone else, that program takes on your privileges. That means you have given it access to all your files. Do you really trust all the commands on your UNIX system?

UNIX real and effective users

Probably the second biggest security problem maker on UNIX systems (after poor passwords) is the setuid system. To provide access to privileged information in a controlled way, UNIX permits programmers to allow their programs to have the programmer's privileges rather than the privileges of the user who runs them.

The way this is done is by setting one of the permission bits, known as the setuid bit. In the same way there is a setgid (set group id) bit.

Every process running under UNIX has two users (and two groups): the real and the effective user. The real user is the person who invoked the process, directly or indirectly. The original effective user is the person (frequently root) who owns the program (and turned the setuid bit on). While the process is running it has access to the privileges of the program's owner. In fact it is possible for a process to change its privileges back and forth between the owner of the program and the real user via the setuid system call. Of course root can set the effective user id to anyone.
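
A small sketch that makes the two identities visible. (On modern systems the call that switches just the effective user is usually seteuid; the lecture's setuid covers the root case.) If this binary were owned by another user with its setuid bit on, the two uids printed first would differ.

    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        printf("real uid:      %d\n", (int)getuid());
        printf("effective uid: %d\n", (int)geteuid());

        /* Give up the owner's privileges for the rest of the run. */
        if (seteuid(getuid()) == -1)
            perror("seteuid");

        printf("effective uid is now %d\n", (int)geteuid());
        return 0;
    }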

This system is used at login. The login process is owned by root. After the program has checked the login name and password, the effective user is changed to the new person and the shell is started up under their ownership.

Similarly, in early versions of UNIX mere mortals did not have the privileges to create directories (since they are special types of files). Instead the mkdir program was setuid with root permission. It created the directory and then changed the owner to be the caller.

Why is this such a security problem?

If a setuid program is left without being write protected, another user can write over the top of it with a program of their liking, and that program then has all of the privileges of the original program's owner.

Less glaring holes can also lead to problems.

For example, a setuid program may not take enough care over what is allowed while the process is acting under the owner's privileges. Imagine a program which has a shell escape mechanism, whereby the user can invoke a shell command from within the program. If the setuid program forgets to change the effective user back to the real user before invoking the shell command, the real user gets the owner's privileges. If the owner happened to be root, the real user has full privileges. Don't laugh, this was a real bug in early versions of some UNIX utilities.
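
A sketch of the fix, with hypothetical names: become the real user before running anything on their behalf. (A setuid-root program which never needs its privileges again would drop them permanently with setuid(getuid()) instead.)

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    /* A shell escape in a setuid program.  The vulnerable version calls
       system() while still holding the owner's privileges; the fix is
       the one line before it. */
    void shell_escape(const char *cmd)
    {
        /* THE FIX: take on the real user's identity first. */
        if (seteuid(getuid()) == -1) {
            perror("seteuid");
            return;
        }
        if (system(cmd) == -1)   /* now runs with the user's own privileges */
            perror("system");
    }

    int main(void)
    {
        shell_escape("id");      /* prints the uid the command actually ran as */
        return 0;
    }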

Going the other way, instead of getting the access rights of other users from their setuid programs, it is also possible to get other users' access rights by creating your own setuid programs in a roundabout way. If a program owned by you can be saved in a directory from which someone else will execute it, such as in their search path, under the name of a common command, then when they execute the program it takes on their privileges. If this program then saves a setuid program in a directory accessible to you, you have a setuid program with someone else's privileges.

Of course if you can make the system run such a program you can get super user privileges. One way to make the system run one of your programs is to replace an occasionally called system program. Of course to do this requires another security lapse (usually of the first setuid kind).

Other system functional flaws

In a dialup or network situation if the line is lost then the session must be terminated. Otherwise it may be possible for someone else to take over a "floating" session.

Not clearing up junk can lead to security breaches. A common one is with file systems which don't zero deleted files. When someone is later allocated the blocks, they can scan them for information which previously belonged to someone else.
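
The cure is cheap. A sketch (names hypothetical): clear each block as the file system frees it, before the block can be handed to anyone else.

    #include <stdio.h>
    #include <string.h>

    #define BLOCK_SIZE 4096

    /* Zero a block on deletion so the next owner finds no junk in it. */
    void free_block(unsigned char block[BLOCK_SIZE])
    {
        memset(block, 0, BLOCK_SIZE);
        /* ... then return the block to the free list ... */
    }

    int main(void)
    {
        unsigned char block[BLOCK_SIZE] = "someone else's secrets";
        free_block(block);
        printf("first byte after free: %d\n", block[0]);   /* prints 0 */
        return 0;
    }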

System calls and parameter checking

When talking about system calls I mentioned that you have to be careful about checking parameters because of possible security breaches. We will spend a little time looking at this.

Let's look at a "write" file system call. What do we need to ensure no security breaches occur with a write? For the sake of discussion we will say the write call looks like this: write(filename, data_address, length).

The obvious answer is that we must check that the program has write access privileges to the file filename. 3 out of 10 for checking this.

A check many novice programmers would forget to make is on the data and the length parameters. What do we need to check about the data field for example?

The program must have read access to the memory from data_address up to data_address + length - 1. At first this seems a little peculiar; after all, what damage can be done if they don't have read access?

When the system call is being executed the processor is more than likely going to be in kernel or supervisor mode, which means that it will have access to whatever part of memory it wants. If there were no check on whether the requesting process could read the memory it is trying to write to a file, it could read an area of restricted memory, carefully storing the restricted information into a file it can read later.
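
A kernel-side sketch of these two checks, with everything simplified: hypothetically, one contiguous readable region per process, and the file permission check stubbed out. Note the overflow test, so that data_address + length cannot wrap around the address space.

    #include <stdio.h>
    #include <stdint.h>

    struct process { uintptr_t mem_start, mem_end; /* [start, end) readable */ };

    /* Can this process read every byte in [addr, addr + len)? */
    int user_can_read(const struct process *p, uintptr_t addr, size_t len)
    {
        return len > 0 && addr + len > addr          /* no wrap-around */
            && addr >= p->mem_start && addr + len <= p->mem_end;
    }

    long sys_write(const struct process *p, const char *filename,
                   uintptr_t data_address, size_t length)
    {
        /* 1. the obvious check: write permission on the file (stubbed out) */
        (void)filename;
        /* 2. the one novices forget: the caller must be able to READ the
              source range, or the call leaks protected memory into a file */
        if (!user_can_read(p, data_address, length))
            return -1;
        return (long)length;   /* pretend we copied the data out */
    }

    int main(void)
    {
        struct process p = { 0x1000, 0x9000 };
        printf("in range:   %ld\n", sys_write(&p, "f", 0x2000, 256));
        printf("kernel mem: %ld\n", sys_write(&p, "f", 0xFFFF0000, 256));
        return 0;
    }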

A similar scenario is true for reading a file. In this case the destination of the read has to be checked to make sure that the program is not trying to deposit false information into an area it is not allowed to touch.

There are other potential problems with system calls, e.g. when the parameters are checked by the system call but not used immediately, or are left in a place where they can be changed after they have been checked. This flaw has been exploited in several security breaches. So has leaving junk on the stack by passing more parameters than the system call requires; this junk might force the system call to return somewhere it shouldn't, for example.
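
A sketch of the defence against the check-then-use trick (often called a time-of-check-to-time-of-use, or TOCTOU, flaw): copy the parameters into kernel space first, then check and use only the private copy, which the user program cannot reach. Names are hypothetical.

    #include <stdio.h>
    #include <string.h>

    struct params { size_t length; };

    long syscall_entry(const struct params *user_ptr)
    {
        struct params kcopy;
        memcpy(&kcopy, user_ptr, sizeof kcopy);   /* snapshot in kernel space */

        if (kcopy.length > 4096)                  /* check the copy ... */
            return -1;
        return (long)kcopy.length;                /* ... and use the copy */
    }

    int main(void)
    {
        struct params p = { 100 };
        printf("%ld\n", syscall_entry(&p));
        return 0;
    }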

These are all examples of "implicit trust" security flaws.

Super users

One of the major security problems is allowing some person to have complete control over the system, the so called "super user". Of course if an intruder gets super user access, they can do what they like (until they are caught).

Is it really necessary to have super user access to an operating system? If not, devise a method which allows you to get rid of it and still maintain the system.
