Operating System Concepts

Abraham Silberschatz, Peter B. Galvin, Greg Gagne


Operating System Concepts, now in its ninth edition, continues to provide a solid theoretical foundation for understanding operating systems. The ninth edition has been thoroughly updated to include contemporary examples of how operating systems function. The text includes content to bridge the gap between concepts and actual implementations. End-of-chapter problems, exercises, review questions, and programming exercises help to further reinforce important concepts. A new Virtual Machine provides interactive exercises to help engage students with the material.


Mentioned in questions and answers.

This is a paragraph from Operating System Concepts, 9th edition, by Silberschatz et al.:

The percentage of times that the page number of interest is found in the TLB is called the hit ratio. An 80-percent hit ratio, for example, means that we find the desired page number in the TLB 80 percent of the time. If it takes 100 nanoseconds to access memory, then a mapped-memory access takes 100 nanoseconds when the page number is in the TLB. If we fail to find the page number in the TLB, then we must first access memory for the page table and frame number (100 nanoseconds) and then access the desired byte in memory (100 nanoseconds), for a total of 200 nanoseconds. (We are assuming that a page-table lookup takes only one memory access, but it can take more, as we shall see.) To find the effective memory-access time, we weight the case by its probability:

effective access time = 0.80 × 100 + 0.20 × 200 = 120 nanoseconds

but the 8th edition of the same book presents the calculation differently. [image from the original question not shown]

I'm confused by the effective access time. Can someone explain it to me?
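
For reference, the 9th edition's arithmetic can be made explicit. Below is a minimal C sketch of the quoted formula; the figures are the ones quoted above (100 ns per memory access, with the TLB lookup itself treated as free, which is the 9th edition's assumption):

#include <stdio.h>

int main(void)
{
    /* Figures from the quoted 9th-edition paragraph: one memory
       access costs 100 ns and the TLB lookup itself is ignored. */
    double hit_ratio  = 0.80;
    double mem_access = 100.0;              /* nanoseconds */

    double hit_time  = mem_access;          /* mapped access via TLB       */
    double miss_time = 2.0 * mem_access;    /* page-table access + access  */

    /* Weight each case by its probability. */
    double eat = hit_ratio * hit_time + (1.0 - hit_ratio) * miss_time;
    printf("effective access time = %.0f ns\n", eat);  /* prints 120 ns */
    return 0;
}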

I was reading the Virtual Memory chapter of Operating System Concepts by Galvin and came across a statement. It says:

"We can think of LRU strategy as the optimal page-replacement algorithm looking backward in time, rather than forward."

Then, a little further on, it says:

"If we let Sr be the reverse of a reference string S, then the page-fault rate for the OPT algorithm (Optimal Page Replacement) on S is the same as the page-fault rate for the OPT algorithm on Sr. Similarly, the page-fault rate for the LRU algorithm (Least Recently Used) on S is the same as the page-fault rate for the LRU algorithm on Sr."

So if LRU is OPT looking backward in time, then how come the page-fault counts on a string and its reverse are the same? According to my understanding of the first statement: if the page-fault count on a string S is x under LRU and y under OPT, then the counts on its reverse Sr under LRU and OPT should be y and x respectively.

Why must they be the same every time, for every string and its reverse?

LRU: Least Recently Used Page Replacement

OPT: Optimal Page Replacement
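
One way to sanity-check the quoted symmetry claim is to simply count LRU faults on a string and on its reverse. Here is a minimal simulation sketch; the reference string and frame count are made up for illustration and are not from the book:

#include <stdio.h>

#define FRAMES 3

/* Count page faults for LRU with FRAMES frames on a reference string. */
static int lru_faults(const int *ref, int n)
{
    int frame[FRAMES], last_used[FRAMES], faults = 0;
    for (int i = 0; i < FRAMES; i++)
        frame[i] = -1;                       /* all frames start empty */

    for (int t = 0; t < n; t++) {
        int hit = -1;
        for (int i = 0; i < FRAMES; i++)
            if (frame[i] == ref[t])
                hit = i;
        if (hit >= 0) {                      /* page already resident */
            last_used[hit] = t;
            continue;
        }
        faults++;
        /* pick an empty frame, or evict the least recently used one */
        int victim = 0;
        for (int i = 1; i < FRAMES; i++)
            if (frame[i] == -1 ||
                (frame[victim] != -1 && last_used[i] < last_used[victim]))
                victim = i;
        frame[victim]     = ref[t];
        last_used[victim] = t;
    }
    return faults;
}

int main(void)
{
    int s[] = {1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5};  /* made-up string S */
    int n = (int)(sizeof s / sizeof s[0]);
    int r[sizeof s / sizeof s[0]];
    for (int i = 0; i < n; i++)
        r[i] = s[n - 1 - i];                          /* Sr = reverse of S */

    printf("LRU faults on S : %d\n", lru_faults(s, n));  /* 10 */
    printf("LRU faults on Sr: %d\n", lru_faults(r, n));  /* also 10 */
    return 0;
}

For this particular string both runs report 10 faults, consistent with the book's claim; the theorem says this holds for every string and frame count, not just this example.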

I want to cover the equivalent of a typical CS undergraduate curriculum, so I'm making a list of books covering the usual topics. Based on my research, I've split the list into compulsory and optional topics. I would like some help confirming whether the topics are split correctly and whether the books are at the right level. Also, please let me know if I left out any important topics, or if any are beyond undergraduate level.

Thank you for your time!

Edit regarding the on-hold status: I do not believe this question is off-topic, as I am not asking for book recommendations; I am asking whether the topics I have listed are indicative of a typical CS course, and whether any important topics are missing. The book links are only there in case the books I have chosen are not right for the topic, and can be removed if necessary.


COMPULSORY

Operating Systems: Operating System Concepts

Networks: Computer Networking: A Top-Down Approach

Discrete Mathematics: Concrete Mathematics

Data Structures and Algorithms: Introduction to Algorithms

Computer Architecture: Computer Systems: A Programmer's Perspective

Automata Theory: Introduction to the Theory of Computation

Compilers: Engineering a Compiler was recommended to me over the dragon book.

Database Theory: An Introduction to Database Systems

Programming Language Concepts and Design: Programming Language Pragmatics

OPTIONAL

Cryptography: Cryptography Engineering: Design Principles and Practical Applications

Functional Programming: Learn You a Haskell for Great Good!

Artificial Intelligence: Artificial Intelligence: A Modern Approach

Computer Graphics: Real-Time Rendering

Your list is very good on the subjects directly related to computer science. However, it is light on math. In my own B.Sc. in Computer Science I also had a ton of calculus, linear algebra, algebra (groups, rings, etc.), statistics, analytic geometry, and numerical analysis. Some applications of computer science rely heavily on those:

  • Machine learning depends on lots of linear algebra, calculus, and statistics;
  • Computer graphics depends a lot on analytic geometry and linear algebra;
  • Scientific computation relies on calculus and numerical analysis.

I never used much of the ton of algebra I had, but I hear it is important for cryptography. :-)

For a programmer developing more regular applications, your list is very good; but for those interested in these more specialized areas (which are still pretty important), these subjects are vital.

An operating system (OS) is base software whose role is to act as an abstraction layer between software requests for resources and the available hardware. The basic components of an operating system can be listed as:

  • Boot loader

Although some may say it is not part of the OS, it is the starting point: after the hardware finishes its boot routines, it transfers control to this small procedure, which brings up the entire system.

  • User interface

It can be graphical or text-based, and it is the central point of communication between the user and the OS.

  • Kernel

The core of the OS, which manages all the hardware resources according to the requests it receives. A kernel can be either a microkernel or a monolithic kernel. Both types include the following functionality:

    • Process management (scheduling, multitasking, pseudo-parallelism, and so on)
    • Memory (and virtual memory) management
    • Inter-process communications (IPC)
    • Interrupt management

    Monolithic kernels include these additional features:

    • File system and disk access organization
    • Device management (with the aid of device drivers, plug-and-play routines, dynamic modules, and so on)

These features are not included directly in a microkernel, but are instead implemented as tasks. One example of a fairly widely used microkernel is QNX, and many hypervisors are microkernel designs as well. A major argument for microkernels is that their small size makes them easier to analyze and more secure (Tanenbaum).

Most well-known operating systems are monolithic; in fact, the majority of commercial and open-source OSes are. They generally allow faster hardware response.

Book: Operating System Concepts by Abraham Silberschatz


I am studying the solution to the First Readers-Writers Problem from the book Operating System Concepts (9th edition), and it describes:

The first readers-writers problem requires that no reader be kept waiting unless a writer has already obtained permission to use the shared object. In other words, no reader should wait for other readers to finish simply because a writer is waiting.

What I have understood from this is that if a reader is running and a writer arrives, the writer is blocked until the reader has completed. But if another reader (or several readers) arrives while the first reader is still running, those readers are given priority over the writer.

First of all, please correct me if I am wrong here. But if I am right, then it seems to me that the reader code does not guarantee this.

The code for the readers-writers solution is given below:

//data structures
semaphore rw_mutex = 1;   // exclusion between the writer and the group of readers
semaphore mutex = 1;      // protects read_count
int read_count = 0;       // number of readers currently reading

//Code for writer
do {
   wait(rw_mutex);
   . . .
   /* writing is performed */
   . . .
   signal(rw_mutex);
} while (true);

//Code for reader
do {
   wait(mutex);            //Line 01
   read_count++;

   if (read_count == 1)
      wait(rw_mutex);      //Line 02

   signal(mutex);
   . . .
   /* reading is performed */
   . . .

   wait(mutex);            //Line 03
   read_count--;

   if (read_count == 0)
      signal(rw_mutex);    //Line 04

   signal(mutex);
} while (true);

Now suppose the following sequence of events occurs:

  • The first reader arrives and acquires rw_mutex at the line marked Line 02, so any writer will now block.
  • A writer then arrives and waits on rw_mutex.
  • The first reader reaches the line marked Line 03 and locks the mutex semaphore.
  • At the same time a second reader arrives and starts waiting at the line marked Line 01.
  • When the first reader executes Line 04, it releases rw_mutex, and the writer that was waiting in its while loop is unblocked and starts executing.
  • The second reader is unblocked only when the first reader executes the line after Line 04, which signals the mutex semaphore.

Looking at the overall flow, the writer runs before the second reader. So does the code really behave as the book describes?

Please correct me if I am wrong.
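
For anyone who wants to experiment with this interleaving, here is a hedged, runnable POSIX translation of the pseudocode above. It uses single-pass threads instead of the book's infinite loops, and the thread start order and usleep() timings are made up for illustration; whether the second reader overtakes the writer depends on exactly when it reaches Line 01, which is the ambiguity the question is about.

#include <stdio.h>
#include <unistd.h>
#include <pthread.h>
#include <semaphore.h>

sem_t rw_mutex;               /* writer vs. reader-group exclusion */
sem_t mutex;                  /* protects read_count */
int   read_count = 0;

void *writer(void *arg)
{
    (void)arg;
    sem_wait(&rw_mutex);      /* blocks while any readers hold the object */
    printf("writer : writing\n");
    sem_post(&rw_mutex);
    return NULL;
}

void *reader(void *arg)
{
    long id = (long)arg;

    sem_wait(&mutex);                 /* Line 01 */
    if (++read_count == 1)
        sem_wait(&rw_mutex);          /* Line 02: first reader locks out writers */
    sem_post(&mutex);

    printf("reader %ld: reading\n", id);
    usleep(2000);                     /* hold the shared object for a while */

    sem_wait(&mutex);                 /* Line 03 */
    if (--read_count == 0)
        sem_post(&rw_mutex);          /* Line 04: last reader admits a writer */
    sem_post(&mutex);
    return NULL;
}

int main(void)
{
    pthread_t r1, r2, w;

    sem_init(&rw_mutex, 0, 1);
    sem_init(&mutex, 0, 1);

    pthread_create(&r1, NULL, reader, (void *)1L);  /* first reader        */
    usleep(1000);
    pthread_create(&w,  NULL, writer, NULL);        /* writer, now blocked */
    usleep(1000);
    pthread_create(&r2, NULL, reader, (void *)2L);  /* second reader       */

    pthread_join(r1, NULL);
    pthread_join(w,  NULL);
    pthread_join(r2, NULL);

    sem_destroy(&rw_mutex);
    sem_destroy(&mutex);
    return 0;
}

Compile with cc -pthread. Note that with these timings the second reader usually arrives while the first is still reading, so it piggybacks on the already-held rw_mutex and the writer waits for both; shift the timings so the second reader arrives between Line 03 and Line 04 and you can observe the race described in the question.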