Next: 10 References Up: Unix Communication Facilities Previous: 8 Making Unreliable Communication

9 Conclusion

This chapter
  • summarizes the results derived in this document
  • briefly discusses the additional facilities that exist
  • describes possible improvements
  • gives an outlook on the future of UNIX communication facilities

9.1 Results

The UNIX operating system provides several interprocess communication facilities. Some IPC facilities are restricted to processes running on the same computer system. Most of the facilities described are designed to transport data (pipes, FIFOs, message queues, and shared memory); others synchronize processes (signals and semaphores). Except for shared memory, all local IPC facilities that transport data are used in a very similar way. Two interfaces to networking allow processes to cross system boundaries and communicate via networks, although networking adds considerable complexity to interprocess communication.

The performance measurements were the most labor-intensive part of this document. The performance of the more important IPC facilities was measured and compared on three different systems. The results allow recommendations on which IPC facilities to use for which purposes.

The rest of this chapter gives an overview of the IPC facilities not covered in detail, describes shortcomings of this document, and looks ahead to the future of UNIX communication.

9.2 Additional IPC Facilities

Due to space and time constraints, not all available UNIX IPC facilities could be investigated in detail. The most important remaining facilities are covered briefly below.

9.2.1 Memory Mapped I/O

Strictly speaking, memory mapped I/O is not an interprocess communication facility; it is described here for completeness. With memory mapped I/O, a file on disk is mapped into a buffer in memory. When bytes in the buffer are read, the corresponding bytes of the file are read, and when bytes in the buffer are modified, the corresponding bytes of the file are automatically written. Table 27 shows the system calls for memory mapped I/O.

System Call Description
mmap() map pages of memory
munmap() unmap pages of memory
Table 27: System Calls for Memory Mapped I/O

9.2.2 STREAMS

STREAMS were introduced into UNIX by System V Release 3. They are a general way to interface communication drivers in the kernel. A STREAM provides a full-duplex path between a user process and a device driver, which can either talk to hardware or be a ``pseudo device driver''. A STREAM consists of a stream head, possibly some processing modules, and a device driver. Processing modules can be pushed onto a STREAM, and data is exchanged in the form of messages. Figure 49 shows a STREAM with one processing module. Table 28 describes the system calls that can be used with STREAMS in addition to the normal read(), write(), open(), and close() system calls.

figure1921
Figure 49: A STREAM with one processing Module

System Call Description
ioctl() performs various operations on STREAMS
getmsg() read messages from a STREAM head
getpmsg() same as getmsg(), but with additional priority
putmsg() write messages to a STREAM head
putpmsg() same as putmsg(), but with additional priority
poll() returns number of ready file descriptors
Table 28: System Calls for STREAMS

STREAMS provide a more general and flexible way than normal UNIX I/O for user processes to combine and use device drivers. For example, STREAMS are used to implement the Transport Layer Interface. Additional information about STREAMS can be found in [Bach86, Chapter 10.4] and [Stev92, Chapter 12.4].

9.2.3 OSF DCE

OSF's Distributed Computing Environment is becoming more and more popular. DCE support can be added to a large number of operating systems, including UNIX. DCE targets business applications: it provides services and tools that support the creation, use, and maintenance of distributed applications in a heterogeneous computing environment [OSF DCE]. The benefits of DCE can be categorized into its support of distributed applications, the integration of its components with each other, DCE's relationship to its platforms, its support for data sharing, and DCE's interaction with the world outside of DCE [OSF DCE]. From the programming point of view, the support for threads and RPC is the most interesting aspect. The services include a distributed file service, a time service, a security service, and a directory service.

A good overview of DCE is given in [OSF DCE].

9.3 Possible Improvements

Although this document covers a wide range of UNIX communication facilities, the following improvements are possible:

9.4 The Future of UNIX Communication Facilities

With the development of so-called microkernels (kernels which try to be as small as possible) like Mach, Spring, Grasshopper, and others, the demand for fast and effective local IPC facilities is increasing. Some results of IPC performance measurements on microkernels are given in [Lied95].

More and more computers are being connected via networks, so the need for fast and effective distributed IPC facilities is growing rapidly. The development and standardization of a reliable connectionless protocol or a ``leaner'' TCP would be desirable. T/TCP (TCP with extensions for transactions, described in [Stev94] and [RFC 1379]) is a step in the right direction, especially given the increasing number of small data transfers in the Internet due to the growing popularity of the World Wide Web.

9.4.1 IPng/IPv6

The Internet Protocol IPv4 has one serious problem: its address space is too small. Therefore a new version 6 of the Internet Protocol, called IPng for Internet Protocol next generation, is currently being developed. Major changes from IPv4 to IPng are (after [Hin95] and [RFC 1883], see also Figure 50):

figure1974
Figure 50: The basic IPng Header

The conversion from IPv4 to IPv6 is designed to be very smooth. IPng provides a platform for new Internet functionality that will be required in the near future.

IPng is specified in [RFC 1883]. A more detailed overview of IPng can be found in [Hin95].

9.4.2 C++ Classes for IPC

The use of C++ classes can reduce the complexity of interprocess communication from the programmer's point of view and provide a consistent interface to different IPC facilities. If appropriate implementations exist, such classes can also make it easier to port applications between platforms.

Two freely available collections of classes are:

  • Socket++ provides 32 classes which make the use of sockets and pipes easier. Communication follows the C++ iostream model. Socket++ supports the UNIX platform.
  • ACE is a large and very powerful package. It provides more than 50 C++ classes for IPC which support sockets, TLI, pipes, Sun RPC, and CORBA. ACE implementations exist for most UNIX versions, Windows 3.1, and Windows NT.

9.5 A Final Word

This document has given an overview of the more important UNIX communication facilities, especially interprocess communication. All facilities described are illustrated with example programs. The theory of networking was explained, and TCP/IP, the most important protocol suite for UNIX interprocessor communication, was described as an example. The performance of all discussed interprocess communication facilities was measured and compared on three different computer systems. Finally, the overhead of making communication safe in spite of unreliable services was discussed.

The author therefore believes that the original aim of this individual project, ``investigate the different mechanisms for inter-process and inter-processor communication offered by the UNIX operating system'', has been achieved.

The author hopes that the reader has enjoyed this document. For the author, its creation was a substantial and interesting piece of work.



Gerhard Müller