Interprocess communication in UNIX systems

Based on appearance, a UNIX application has sole command of the underlying host. It has ready and free access to the processor, its memory is sacrosanct, and attached devices serve the application's every whim. But true to the maxim "Appearances can be deceiving," such sovereignty is a clever illusion. A UNIX system runs any number of applications simultaneously, sharing its finite physical resources judiciously among all. Processor capacity is doled out in slices, application images are constantly shuffled in and out of real memory, and device access is driven by demand and policed by access rights. Although your shell prompt blinks attentively, a UNIX machine teems with activity.

UNIX offers several interprocess communication techniques, each with its own scope and typical uses:
  • File: Data is written to and read from a typical UNIX file; any number of processes can interoperate. Scope: local. Typical use: sharing large data sets.

  • Pipe: Data is transferred between two processes using dedicated file descriptors; communication occurs only between a parent and child process. Scope: local. Typical use: simple data sharing, such as producer and consumer.

  • Named pipe: Data is exchanged between processes via dedicated file descriptors; communication can occur between any two peer processes on the same host. Scope: local. Typical use: producer and consumer, or command-and-control, as demonstrated by the MySQL server and its command-line query utility.

  • Signal: An interrupt alerts the application to a specific condition. Scope: local. Typical use: process management; a signal cannot transfer data.

  • Shared memory: Information is shared by reading and writing a common segment of memory. Scope: local. Typical use: cooperative work of any kind, especially if security is required.

  • Socket: After special setup, data is transferred using common input/output operations. Scope: local or remote. Typical use: network services, such as FTP, ssh, and the Apache Web server.

As mentioned above, each technique suits a particular need. Assuming the coordination logic among processes is roughly equally intricate in each case, every approach has advantages and disadvantages:

  • Sharing data via a common UNIX file is simple, because it uses familiar file operations. However, sharing data via the file system is inherently slow, because disk input and output operations cannot match the expediency of memory. Further, it is difficult to coordinate reads and writes via a file only. Ultimately, saving sensitive data in a file is not secure, because root and other privileged users can access the information. In a sense, files are best used when viewed as read-only or write-only.

  • The pipe and named pipe are also simple mechanisms. Both use two standard file descriptors on each end of the connection—one exclusive to read and another exclusive to write operations. A pipe, though, can only be used between a parent and child process, not between two arbitrary processes. The named pipe addresses the latter shortcoming and is an excellent choice for data exchange on the same system. However, neither a pipe nor a named pipe provides random access, because each operates as a first-in, first-out (FIFO) device.

  • A signal cannot transfer data from one process to another. In general, signals should only be used to communicate exceptional conditions between one process and another.

  • Shared memory is well suited to larger collections of data and, because it uses memory, grants fast, random access. Shared memory is slightly more complicated to implement but is otherwise an excellent choice for intrahost collaboration among multiple processes.

  • A socket functions much like a named pipe but can span hosts. Local sockets (also called UNIX sockets) are restricted to local (same host) connectivity. Inet and Inet6 sockets, which use the IPv4 and IPv6 protocols, respectively, accept remote connections (and local connections via the local machine's Internet addressing). The socket is the obvious choice for any networking application, such as distributed processing or a web browser. Coding is a little more complicated than with named pipes, but the pattern is well established and well documented in any UNIX network programming book.
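To make the second bullet concrete, here is a minimal sketch of the pipe technique: the parent creates a pipe with pipe(), forks, closes the descriptor it does not use, and reads a message the child writes. The message text is an arbitrary choice for illustration.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
  int fds[2];   /* fds[0] is the read end, fds[1] is the write end */
  char buf[32];

  if (pipe(fds) != 0) {
    perror("pipe");
    exit(EXIT_FAILURE);
  }

  pid_t pid = fork();
  if (pid == 0) {
    /* Child (producer): close the unused read end, write, and exit. */
    close(fds[0]);
    const char *msg = "hello from child";
    write(fds[1], msg, strlen(msg) + 1);
    exit(0);
  }

  /* Parent (consumer): close the unused write end, then read. */
  close(fds[1]);
  ssize_t n = read(fds[0], buf, sizeof(buf));
  if (n > 0)
    printf("parent read: %s\n", buf);

  waitpid(pid, NULL, 0);
  return 0;
}

Closing the unused end in each process matters: if the parent left the write end open, read() would never see end-of-file once the child exited.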

Setting aside inter-host applications for now, let's look at shared memory for interprocess communication on the same host.

How shared memory works

As its name implies, shared memory makes a segment of memory accessible to more than one process. Special system calls, or requests to the UNIX kernel, allocate and free the memory and set permissions; common read and write operations put and get data from the region.

A sample application

The listing below shows a small shared memory example. (The code is derived from John Fusco's book, The Linux Programmer's Toolbox, ISBN 0132198576, published by Prentice Hall Professional, March 2007, and used with the permission of the publisher.) The code implements a parent and child process that communicate via a POSIX shared memory segment. On Linux, compile it with cc shm.c -o shm; older versions of the GNU C library also require linking with -lrt for shm_open() and shm_unlink().


#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <fcntl.h>
#include <unistd.h>

void error_and_die(const char *msg) {
  perror(msg);
  exit(EXIT_FAILURE);
}

int main(int argc, char *argv[]) {
  int r;

  const char *memname = "sample";
  const size_t region_size = sysconf(_SC_PAGE_SIZE);

  /* Create the shared memory object and size it to one page. */
  int fd = shm_open(memname, O_CREAT | O_TRUNC | O_RDWR, 0666);
  if (fd == -1)
    error_and_die("shm_open");

  r = ftruncate(fd, region_size);
  if (r != 0)
    error_and_die("ftruncate");

  /* Map the object into the address space; MAP_SHARED makes the
     child's writes visible to the parent. */
  void *ptr = mmap(0, region_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
  if (ptr == MAP_FAILED)
    error_and_die("mmap");

  pid_t pid = fork();

  if (pid == 0) {
    /* Child: write a value into the shared region, then exit. */
    u_long *d = (u_long *) ptr;
    *d = 0xdbeebee;
    exit(0);
  }
  else {
    /* Parent: wait for the child, then read what it wrote. */
    int status;
    waitpid(pid, &status, 0);
    printf("child wrote %#lx\n", *(u_long *) ptr);
  }

  r = munmap(ptr, region_size);
  if (r != 0)
    error_and_die("munmap");

  r = shm_unlink(memname);
  if (r != 0)
    error_and_die("shm_unlink");

  return 0;
}