Tuesday, January 15, 2008

EXERCISE 1-6

1. A deadlock is a situation wherein two or more competing actions are each waiting for the other to finish, and thus neither ever does.
Starvation is similar in effect to deadlock. Two or more programs become deadlocked together when each of them waits for a resource occupied by another program in the same set. By contrast, one or more programs are starving when each of them is waiting for resources that are occupied by programs that may or may not be in the starving set. Moreover, in a deadlock, no program in the set ever changes its state.
A race condition is a synchronization problem in which the outcome depends on the order in which two processes vying for the same resource happen to run.

2. A real-life example of deadlock is a traffic gridlock, where each car blocks the others and none can move.
A real-life example of starvation is a staircase that can accommodate only one person at a time, when two people keep meeting from opposite ends.
A real-life example of a race is when two people arrive at the same door at the same time.

3. Four Necessary Conditions for Deadlock:
The presence of deadlock in a system is characterized by these four necessary conditions. The term necessary means that if there is deadlock, then all four must be present (a minimal example of two threads satisfying all four is sketched after this list).
a. Mutually exclusive resource access - A resource, once acquired, is held exclusively, i.e., it is not shared by other processes.
b. No preemption - A process' resources cannot be taken away from it. Only the process can give up its resources.
c. Hold and Wait - A process has some resources and is blocked requesting more.
d. Circularity - This means that there is a circular chain of two or more processes in which the resources needed by one process are held by the next process.
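
A minimal sketch of all four conditions at once, in Java (the class and lock names are illustrative): each thread holds one lock exclusively, the locks cannot be taken away, each thread blocks while holding what it has, and together the two threads form a circular chain.

public class DeadlockDemo {
    private static final Object lockA = new Object();
    private static final Object lockB = new Object();

    public static void main(String[] args) {
        Thread t1 = new Thread(() -> {
            synchronized (lockA) {          // hold lockA exclusively ...
                pause();
                synchronized (lockB) { }    // ... and block waiting for lockB
            }
        });
        Thread t2 = new Thread(() -> {
            synchronized (lockB) {          // hold lockB exclusively ...
                pause();
                synchronized (lockA) { }    // ... and block waiting for lockA
            }
        });
        t1.start();
        t2.start();   // with the pauses, both threads usually block forever
    }

    private static void pause() {
        try { Thread.sleep(100); } catch (InterruptedException ignored) { }
    }
}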

4. Algorithm for prevention of deadlock and starvation:

public boolean tryAcquire( int n0, int n1, ... ) {
    if ( for all i: ni ≤ availi ) {   // successful acquisition
        availi -= ni for all i;
        return true;                  // indicate success
    } else
        return false;                 // indicate failure
}

init) Semaphore s = new Semaphore(1,1);

Thread A            Thread B
--------            --------
s.acquire(1,0);     s.acquire(0,1);
s.acquire(0,1);     s.acquire(1,0);

Thread B
--------
while (true) {
    s.acquire(0,1);
    if ( s.tryAcquire(1,0) )   // if second acquisition succeeds
        break;                 // leave the loop
    else {
        s.release(0,1);        // release what is held
        sleep( SOME_AMOUNT );  // pause a bit before trying again
    }
}

run   action              s.value
---   ------              -------
                          (1,1)
A     s.acquire(1,0)      (0,1)
B     s.acquire(0,1)      (0,0)
A     s.acquire(0,1)      A blocks on second
B     s.tryAcquire(1,0)   => false
B     s.release(0,1)      (0,1)
A     s.acquire(0,1)      (0,0)  A succeeds on second
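
The same back-off idea can be written with two ordinary java.util.concurrent.Semaphore objects in place of the two-valued semaphore above; the class name, method name, and sleep interval here are illustrative assumptions, not part of the original exercise.

import java.util.concurrent.Semaphore;

public class BackOffDemo {
    static final Semaphore r1 = new Semaphore(1);   // first resource
    static final Semaphore r0 = new Semaphore(1);   // second resource

    // Thread B's loop: hold r1, then *try* r0, and back off on failure.
    static void acquireBothWithBackOff() throws InterruptedException {
        while (true) {
            r1.acquire();              // take the first resource
            if (r0.tryAcquire()) {     // non-blocking attempt on the second
                break;                 // got both: leave the loop
            }
            r1.release();              // release what is held
            Thread.sleep(50);          // pause a bit before trying again
        }
        // ... use both resources ...
        r0.release();
        r1.release();
    }

    public static void main(String[] args) throws InterruptedException {
        acquireBothWithBackOff();
        System.out.println("acquired and released both resources");
    }
}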

5. a. Deadlock can occur, for example, when the bridge is destroyed.
b. It can also occur when there are no traffic lights.
c. To prevent deadlock, make all the roads one-way.

6. a. This is not a deadlock.
b. There are no blocked processes.
c. P2 can freely request R1 and R2.
d. P1 can freely request R1 and R2.
e. Both P1 and P2 have requested R2:
   1. P1 will wait after the request of P2.
   2. P2 will wait after the request of P1.

Thursday, December 13, 2007

Chapter 3: Memory Management

Linux

The Linux kernel is a Unix-like operating system kernel. It is the namesake of the Linux family of operating systems. Released under the GNU General Public License (GPL) and developed by contributors worldwide, Linux is one of the most prominent examples of free software / open source.[3]

The Linux Kernel was initially conceived and assembled by Linus Torvalds in 1991. Early on, the Minix community contributed code and ideas to the Linux kernel. At the time, the GNU Project had created many of the components required for a free software operating system, but its own kernel, GNU Hurd, was incomplete and unavailable. The BSD operating system had not yet freed itself from legal encumbrances. This meant that despite the limited functionality of the early versions, Linux rapidly accumulated developers and users who adopted code from those projects for use with the new operating system.[4] Today the Linux kernel has received contributions from thousands of programmers.

Linux Memory Management

The Linux memory manager implements demand paging with a copy-on-write strategy relying on the 386's paging support. A process acquires its page tables from its parent (during a fork()) with the entries marked as read-only or swapped. Then, if the process tries to write to that memory space, and the page is a copy-on-write page, it is copied, and the page is marked read-write. An exec() results in the reading in of a page or so from the executable. The process then faults in any other pages it needs.

Each process has a page directory, which gives it access to 1K page tables pointing to a total of 1M 4 KB pages, i.e. 1024 × 1024 × 4 KB = 4 GB of memory. A process' page directory is initialized during a fork by copy_page_tables(). The idle process has its page directory initialized during the initialization sequence.
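
As a rough reference, the sketch below (illustrative class and method names; the field widths are the standard i386 two-level paging layout) shows how a 32-bit linear address splits into a page directory index, a page table index, and an offset, which is why 1024 × 1024 × 4 KB = 4 GB.

public class LinearAddress {
    public static int dirIndex(int addr)   { return (addr >>> 22) & 0x3FF; }  // bits 31-22: page directory entry
    public static int tableIndex(int addr) { return (addr >>> 12) & 0x3FF; }  // bits 21-12: page table entry
    public static int pageOffset(int addr) { return addr & 0xFFF; }           // bits 11-0: offset within a 4 KB page

    public static void main(String[] args) {
        int addr = 0xC0001234;   // a linear address above 3 GB (kernel space)
        System.out.printf("dir=%d table=%d offset=0x%x%n",
                dirIndex(addr), tableIndex(addr), pageOffset(addr));   // dir=768 table=1 offset=0x234
    }
}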

Each user process has a local descriptor table that contains a code segment and data-stack segment. These user segments extend from 0 to 3 GB (0xc0000000). In user space, linear addresses and logical addresses are identical.

On the 80386, linear addresses run from 0 GB to 4 GB. A linear address points to a particular memory location within this space. A linear address is not a physical address--it is a virtual address. A logical address consists of a selector and an offset. The selector points to a segment and the offset tells how far into that segment the address is located.
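
A tiny sketch of that logical-to-linear step (the segment bases below come from the surrounding text; the class and method names are illustrative): the linear address is just the segment's base plus the offset, which is why user-space logical and linear addresses coincide when the base is 0.

public class SegmentTranslation {
    // Logical address = (selector, offset); the selected segment has a base.
    // Linear address = segment base + offset, wrapped within the 4 GB linear space.
    public static long toLinear(long segmentBase, long offset) {
        return (segmentBase + offset) & 0xFFFFFFFFL;
    }

    public static void main(String[] args) {
        long userBase   = 0x00000000L;   // user segments start at 0
        long kernelBase = 0xC0000000L;   // kernel segments start at 3 GB
        System.out.printf("user:   0x%08x%n", toLinear(userBase, 0x1234));    // 0x00001234
        System.out.printf("kernel: 0x%08x%n", toLinear(kernelBase, 0x1234));  // 0xc0001234
    }
}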

The kernel code and data segments are privileged segments defined in the global descriptor table and extend from 3 GB to 4 GB. The swapper page directory (swapper_pg_dir) is set up so that logical addresses and physical addresses are identical in kernel space.

The space above 3 GB appears in a process' page directory as pointers to kernel page tables. This space is invisible to the process in user mode, but the mapping becomes relevant when privileged mode is entered, for example, to handle a system call. Supervisor mode is entered within the context of the current process, so address translation occurs with respect to the process' page directory but using kernel segments. This is identical to the mapping produced by using swapper_pg_dir and kernel segments, as both page directories use the same page tables in this space. Only task[0] (the idle task, sometimes called the swapper task for historical reasons, even though it has nothing to do with swapping in the Linux implementation) uses swapper_pg_dir directly.

  • The user process' segment_base = 0x00, and its page_dir is private to the process.
  • When a user process makes a system call: segment_base = 0xc0000000, page_dir = the same user page_dir.
  • swapper_pg_dir contains a mapping for all physical pages from 0xc0000000 to 0xc0000000 + end_mem, so the first 768 entries in swapper_pg_dir are 0's, and then there are 4 or more that point to kernel page tables.
  • The user page directories have the same entries as swapper_pg_dir above 768. The first 768 entries map the user space.
The upshot is that whenever the linear address is above 0xc0000000 everything uses the same kernel page tables.

The user stack sits at the top of the user data segment and grows down. The kernel stack is not a pretty data structure or segment that I can point to with a "yon lies the kernel stack." A kernel_stack_frame (a page) is associated with each newly created process and is used whenever the kernel operates within the context of that process. Bad things would happen if the kernel stack were to grow below its current stack frame.

User pages can be stolen or swapped. A user page is one that is mapped below 3 GB in a user page table. This region does not contain page directories or page tables. Only dirty pages are swapped.

Minor alterations are needed in some places (tests for process memory limits come to mind) to provide support for programmer-defined segments.

Wednesday, November 21, 2007

Research Topics #1

Operating system

From Wikipedia, the free encyclopedia


An operating system (OS) is the software that manages the sharing of the resources of a computer and provides programmers with an interface used to access those resources. An operating system processes system data and user input, and responds by allocating and managing tasks and internal system resources as a service to users and programs of the system. At the foundation of all system software, an operating system performs basic tasks such as controlling and allocating memory, prioritizing system requests, controlling input and output devices, facilitating networking and managing file systems. Most operating systems come with an application that provides a user interface for managing the operating system, such as a command line interpreter or graphical user interface. The operating system forms a platform for other system software and for application software.


Process management

Every program running on a computer, be it a service or an application, is a process. As long as a von Neumann architecture is used to build computers, only one process per CPU can be run at a time. Older microcomputer OSes such as MS-DOS did not attempt to bypass this limit, with the exception of interrupt processing, and only one process could be run under them (although DOS itself featured TSR as a very partial and not too easy to use solution). Mainframe operating systems have had multitasking capabilities since the early 1960s. Modern operating systems enable concurrent execution of many processes at once via multitasking even with one CPU. Process management is an operating system's way of dealing with running multiple processes. On the most fundamental of computers (those containing one processor with one core) multitasking is done by simply switching processes quickly. Depending on the operating system, as more processes run, either each time slice will become smaller or there will be a longer delay before each process is given a chance to run. Process management involves computing and distributing CPU time as well as other resources. Most operating systems allow a process to be assigned a priority which affects its allocation of CPU time. Interactive operating systems also employ some level of feedback in which the task with which the user is working receives higher priority. Interrupt driven processes will normally run at a very high priority. In many systems there is a background process, such as the System Idle Process in Windows, which will run when no other process is waiting for the CPU.
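
A toy illustration of single-CPU time slicing (the process names and slice length are made up): processes cycle through a ready queue, so as more of them join, each waits longer between its turns.

import java.util.ArrayDeque;
import java.util.Queue;

public class RoundRobinSketch {
    public static void main(String[] args) {
        Queue<String> readyQueue = new ArrayDeque<>();
        readyQueue.add("editor");
        readyQueue.add("browser");
        readyQueue.add("background-indexer");

        int timeSliceMs = 10;
        for (int tick = 0; tick < 6; tick++) {
            String current = readyQueue.poll();    // pick the next ready process
            System.out.printf("t=%3d ms: run %s%n", tick * timeSliceMs, current);
            readyQueue.add(current);               // put it back at the tail of the queue
        }
        // With N processes, each one runs roughly once every N * timeSliceMs.
    }
}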

Memory management

Current computer architectures arrange the computer's memory in a hierarchical manner, starting from the fastest registers, CPU cache, random access memory and disk storage. An operating system's memory manager coordinates the use of these various types of memory by tracking which one is available, which is to be allocated or deallocated and how to move data between them. This activity, usually referred to as virtual memory management, increases the amount of memory available for each process by making the disk storage seem like main memory. There is a speed penalty associated with using disks or other slower storage as memory – if running processes require significantly more RAM than is available, the system may start thrashing. This can happen either because one process requires a large amount of RAM or because two or more processes compete for a larger amount of memory than is available. This then leads to constant transfer of each process's data to slower storage.

Another important part of memory management is managing virtual addresses. If multiple processes are in memory at once, they must be prevented from interfering with each other's memory (unless there is an explicit request to utilise shared memory). This is achieved by having separate address spaces. Each process sees the whole virtual address space, typically from address 0 up to the maximum size of virtual memory, as uniquely assigned to it. The operating system maintains page tables that map virtual addresses to physical addresses. These memory allocations are tracked so that when a process terminates, all memory used by that process can be made available for other processes.
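
As a rough picture of that isolation (a toy model rather than a real MMU; the page size, frame numbers, and names are invented), each process' own table can map the same virtual page to a different physical frame:

import java.util.HashMap;
import java.util.Map;

public class AddressSpaceSketch {
    static final long PAGE_SIZE = 4096;

    // Translate a virtual address using one process' page table.
    static long translate(Map<Long, Long> pageTable, long virtualAddr) {
        long vpn    = virtualAddr / PAGE_SIZE;     // virtual page number
        long offset = virtualAddr % PAGE_SIZE;
        Long frame  = pageTable.get(vpn);
        if (frame == null) throw new IllegalStateException("page fault at " + virtualAddr);
        return frame * PAGE_SIZE + offset;         // physical address
    }

    public static void main(String[] args) {
        Map<Long, Long> processA = new HashMap<>();
        Map<Long, Long> processB = new HashMap<>();
        processA.put(0L, 7L);    // both processes map virtual page 0 ...
        processB.put(0L, 42L);   // ... but to different physical frames

        System.out.println(translate(processA, 0x10));   // 7 * 4096 + 16
        System.out.println(translate(processB, 0x10));   // 42 * 4096 + 16
    }
}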

The operating system can also write inactive memory pages to secondary storage. This process is called "paging" or "swapping" – the terminology varies between operating systems.

It is also typical for operating systems to employ otherwise unused physical memory as a page cache; requests for data from a slower device can be retained in memory to improve performance. The operating system can also pre-load the in-memory cache with data that may be requested by the user in the near future; SuperFetch is an example of this.

Disk and file systems

All operating systems include support for a variety of file systems.

Modern file systems comprise a hierarchy of directories. While the idea is conceptually similar across all general-purpose file systems, some differences in implementation exist. Two noticeable examples of this are the character used to separate directories, and case sensitivity.

Unix demarcates its path components with a slash (/), a convention followed by operating systems that emulated it or at least its concept of hierarchical directories, such as Linux, Amiga OS and Mac OS X. MS-DOS also emulated this feature, but had already also adopted the CP/M convention of using slashes for additional options to commands, so instead used the backslash (\) as its component separator. Microsoft Windows continues with this convention; Japanese editions of Windows use ¥, and Korean editions use ₩.[1] Versions of Mac OS prior to OS X use a colon (:) for a path separator. RISC OS uses a period (.).

Unix and Unix-like operating systems allow for any character in file names other than the slash and NUL characters (including line feed (LF) and other control characters). Unix file names are case sensitive, which allows multiple files to be created with names that differ only in case. By contrast, Microsoft Windows file names are not case sensitive by default. Windows also has a larger set of punctuation characters that are not allowed in file names.

File systems may provide journaling, which provides safe recovery in the event of a system crash. A journaled file system writes information twice: first to the journal, which is a log of file system operations, then to its proper place in the ordinary file system. In the event of a crash, the system can recover to a consistent state by replaying a portion of the journal. In contrast, non-journaled file systems typically need to be examined in their entirety by a utility such as fsck or chkdsk. Soft updates is an alternative to journaling that avoids the redundant writes by carefully ordering the update operations. Log-structured file systems and ZFS also differ from traditional journaled file systems in that they avoid inconsistencies by always writing new copies of the data, eschewing in-place updates.
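
A very small sketch of that write-twice idea (a toy in-memory journal, not any real file system's on-disk format; all names are invented for illustration): every update is appended to the journal before being applied in place, and replaying the journal restores a consistent state.

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ToyJournal {
    private final List<String[]> journal = new ArrayList<>();    // log of (path, contents) operations
    private final Map<String, String> store = new HashMap<>();   // stands in for the ordinary file system

    public void write(String path, String contents) {
        journal.add(new String[] { path, contents });   // 1) record the operation in the journal
        store.put(path, contents);                      // 2) apply it to its proper place
    }

    // After a crash, replay the journal over whatever survived in the store.
    public void recover() {
        for (String[] op : journal) {
            store.put(op[0], op[1]);
        }
    }

    public String read(String path) { return store.get(path); }

    public static void main(String[] args) {
        ToyJournal fs = new ToyJournal();
        fs.write("/etc/motd", "hello");
        fs.recover();                                    // replay is idempotent here
        System.out.println(fs.read("/etc/motd"));        // hello
    }
}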

Many Linux distributions support some or all of ext2, ext3, ReiserFS, Reiser4, GFS, GFS2, OCFS, OCFS2, and NILFS. Linux also has full support for XFS and JFS, along with the FAT file systems, and NTFS.

Microsoft Windows includes support for FAT12, FAT16, FAT32, and NTFS. The NTFS file system is the most efficient and reliable of the four Windows file systems, and as of Windows Vista, is the only file system which the operating system can be installed on. Windows Embedded CE 6.0 introduced ExFAT, a file system suitable for flash drives.

Mac OS X supports HFS+ as its primary file system, and it supports several other file systems as well, including FAT16, FAT32, NTFS and ZFS.

Common to all these (and other) operating systems is support for file systems typically found on removable media. FAT12 is the file system most commonly found on floppy discs. ISO 9660 and Universal Disk Format are two common formats that target Compact Discs and DVDs, respectively. Mount Rainier is a newer extension to UDF supported by Linux 2.6 kernels and Windows Vista that facilitates rewriting to DVDs in the same fashion as what has been possible with floppy disks.

Networking

Most current operating systems are capable of using the TCP/IP networking protocols. This means that computers running dissimilar operating systems can participate in a common network for sharing resources such as computing, files, printers, and scanners using either wired or wireless connections.

Many operating systems also support one or more vendor-specific legacy networking protocols as well, for example, SNA on IBM systems, DECnet on systems from Digital Equipment Corporation, and Microsoft-specific protocols on Windows. Specific protocols for specific tasks may also be supported such as NFS for file access.