USIT103 Operating System


UNIT I

1
INTRODUCTION

Unit Structure
1.0 Objectives
1.1 Introduction
1.2 What is an Operating System
    1.2.1 Definition
    1.2.2 The Operating System as an Extended Machine
    1.2.3 The Operating System as a Resource Manager
1.3 History of Operating Systems
    1.3.1 First Generation OS
    1.3.2 Second Generation OS
    1.3.3 Third Generation OS
    1.3.4 Fourth Generation OS
    1.3.5 Fifth Generation OS
1.4 Computer Hardware
    1.4.1 Processor
    1.4.2 Memory
    1.4.3 Disk
    1.4.4 Booting of the System
1.5 Let Us Sum Up
1.6 List of References
1.7 Bibliography
1.8 Unit End Questions

1.0 OBJECTIVES

The objectives of the chapter are as follows:
• To get familiar with the core components of operating systems
• To understand the different generations of operating systems
• To understand the different functionalities of the system

1.1 INTRODUCTION

• Operating systems provide a clear and simple view of the computer to its users.


• The operating system performs the function of resource handling and distributing the resources to the different parts of the system.
• It is the intermediary between users and the computer system, and it provides a level of abstraction that keeps complicated details hidden from the user.

1.2 WHAT IS AN OPERATING SYSTEM

1.2.1 Definition:
• An Operating System is system software which acts as an intermediary between the user and the hardware.
• The Operating System keeps the complicated details of the hardware hidden from the user and provides the user with an easy and simple interface.
• It performs functions which involve allocating resources efficiently between user programs, the file system, and input/output devices.
Figure 1. Abstract view of the Operating System
Reference: Modern Operating Systems, Fourth Edition, Andrew S. Tanenbaum, Herbert Bos

Explanation of Figure 1:
• The hardware components lie at the bottom of the diagram. The hardware is the most crucial part of the computer system; to protect it from direct access, it is kept at the lowest level of the hierarchy. Hardware components include circuits, input/output devices, the monitor, etc.
• The operating system runs in the kernel mode of the system, where the OS gets access to all hardware and can execute all machine instructions. The rest of the system runs in user mode.

1.2.2 The Operating System as an Extended Machine:
• The structure of a computer system at the machine-language level is complicated to program, especially for input/output. Programmers do not want to deal with the hardware directly, so a level of abstraction has to be maintained.
• The operating system provides a layer of abstraction for using disks: files.
• Abstraction allows programs to create, write, and read files without having to deal with the messy details of how the hardware actually works.
• Abstraction is the key to managing all this complexity.
• Good abstractions turn a nearly impossible task into two manageable ones:
• The first is defining and implementing the abstractions.
• The second is using these abstractions to solve the problem at hand.
• In this top-down view, the operating system primarily provides abstractions to application programs.
• E.g., it is much easier to deal with photos, emails, songs, and Web pages than with the details of how these files are stored on SATA (or other) disks.

1.2.3 The Operating System as a Resource Manager:
• Modern computers consist of processors, memories, timers, disks, mice, network interfaces, printers, and a wide variety of other devices.
• In the bottom-up view, the operating system provides an orderly and controlled allocation of the processors, memories, and I/O devices among the various programs.
• The operating system allows multiple programs to be in memory and run at the same time.
• Resource management includes multiplexing (sharing) resources in two different ways: in time and in space.
• In time multiplexing, different programs take turns using the CPU. First one of them gets to use the resource, then another, and so on.
• E.g., sharing the printer: when multiple print jobs are queued up for printing on a single printer, a decision has to be made about which one is to be printed next.
• In space multiplexing, instead of the customers taking turns, each one gets part of the resource.
• E.g., main memory is divided up among several running programs, so each one can be resident at the same time.


1.3 HISTORY OF OPERATING SYSTEMS

The English mathematician Charles Babbage (1792–1871) designed the first true digital computer. It was purely mechanical, and the technology of his day could not produce the high-precision parts it required.

1.3.1 First Generation OS:
• The first generation was also known as the Vacuum Tube generation.
• A single group of people was responsible for creating, building, programming, operating, and maintaining each machine.
• Programming was done by wiring up electrical circuits on plugboards with thousands of cables.
• A programmer would sign up for a block of time on the signup sheet on the wall, then come down to the machine room, insert his or her plugboard into the computer, and spend the next few hours hoping that none of the 20,000 or so vacuum tubes would burn out during the run.

1.3.2 Second Generation OS:
• Second generation computers were also known as the Transistors and Batch Systems generation.
• Computers in this era were reliable and were manufactured for sale to customers such as government agencies or universities.
• Separate groups were formed for the designing, building, and coding aspects of the computer.
• The computers were known as mainframes and were kept in separate rooms.
• Separate machines were built for calculation and for input/output.
• Programs were known as jobs. Jobs were entered in groups called batches.
• Second generation computers were used for scientific and engineering calculations in physics and engineering.

Year: 1955–65
Programming language: Fortran, assembler
Operating system: IBM's operating system FMS
Hardware: Transistors and batch systems, punch cards, magnetic tape

1.3.3 Third Generation OS:
• Third generation computers were known as the ICs and Multiprogramming generation.
• Maintaining two separate product lines of computers was not easy, so IBM introduced a single computer, the System/360, built using integrated circuits.


• The main goal of this generation was that all software, including the operating system, OS/360, should work on all models.
• An important feature introduced in this generation was multiprogramming: when one job was waiting for I/O to complete, another job could be using the CPU. This way maximum utilization of the CPU could be achieved.
• Spooling gave the system the ability to read jobs from cards onto the disk.
• Time sharing allocates the CPU in turn to a number of users.
• Third generation computers were used for large scientific calculations and massive commercial data-processing runs.

1.3.4 Fourth Generation OS:
• Fourth generation computers were also known as Personal Computers.
• Extremely small computers could be created using microchips, which made it possible for a single individual to have his or her own personal computer.
• Companies like Intel and IBM started creating operating systems for their respective CPUs.
• User-friendly GUIs were built for general-purpose usage.
• Microsoft came up with different versions of Windows.
• Network operating systems and distributed systems became popular in this era.
• In a network operating system, users log in to remote machines and copy files from one machine to another.
• A distributed operating system is composed of multiple processors but appears to its users as a single uniprocessor unit.

Year: 1980–Present
Programming language: High-level programming languages
Operating system: DOS, Windows, UNIX, FreeBSD
Hardware: LSI (Large Scale Integration) circuits, chips, transistors
Computers: IBM 4341, DEC 10, STAR 100

1.3.5 Fifth Generation OS:
• The fifth generation is also known as the Mobile Computer generation, made using Ultra Large Scale Integrated chips.
• New operating systems like Symbian, BlackBerry OS, iOS, and Android became popular in the market.
• Devices became more portable and smaller in size.
• Artificial intelligence is used on a large scale to construct a device


which uses natural language processing for the analysis of input.
• Computers in this era are capable of self-learning.

Year: 1990–Present
Programming language: High-level programming languages
Operating system: iOS, Android, Symbian, RIM
Hardware: Ultra large scale integrated chips
Computers: Handheld devices, wearable devices, PDAs, smartphones

1.4 COMPUTER HARDWARE

1.4.1 Processor:
• The CPU is the most vital part of the computer. Instructions are fetched from memory and executed by the CPU using the fetch-decode-execute cycle.
• CPUs contain registers to hold key variables and temporary data.
• A special register called the program counter contains the memory address of the next instruction to be fetched. The Program Status Word contains the condition code bits.
• The Intel Pentium 4 introduced multithreading or hyperthreading to the x86 processor, allowing the CPU to hold the state of two different threads and switch back and forth on a nanosecond time scale.
• A GPU is a processor with thousands of tiny cores, which is very good for many small computations done in parallel, such as rendering polygons in graphics applications.

1.4.2 Memory:
• The basic expectations from memory are speed, storage capacity, and performance, but no single kind of memory can fulfil all of them.
• The memory system is therefore built as a hierarchy of layers.
• The registers inside the CPU form the top layer of the hierarchy and give the quickest access to data.
• Cache memory is next in the hierarchy. The most heavily used data is kept in the cache for high-speed access; each access is either a cache hit or a cache miss.
• Two levels of cache are present in the system, L1 and L2.
• The L1 cache is always inside the CPU and usually feeds decoded instructions into the CPU's execution engine.
• The L2 cache holds megabytes of recently used memory words. The difference between the L1 and L2 caches lies in the timing.
• Main memory, also known as RAM, comes next in the hierarchy. Requests that miss the cache go to main memory for the data.


Figure 2. Memory hierarchy
Reference: Modern Operating Systems, Fourth Edition, Andrew S. Tanenbaum, Herbert Bos
Figure 3. Disk structure
Reference: Modern Operating Systems, Fourth Edition, Andrew S. Tanenbaum, Herbert Bos

1.4.3 Disk:
• A disk is a mechanical storage device which is cheaper and larger than RAM.
• The only problem is that the time to randomly access data on it is much slower.
• A disk consists of metal platters that rotate at 5400, 7200, 10,800 RPM or more.
• A mechanical arm pivots over the platters from the corner.
• Each of the heads can read an annular region called a track.
• Together, all the tracks for a given arm position form a cylinder.
• Each track is divided into some number of sectors, typically 512 bytes per sector.

1.4.4 Booting the computer:
• The process of loading the kernel is known as booting the system.

• The parentboard (motherboard) contains a program called the BIOS (Basic Input Output System).
• As soon as the system is powered on, the BIOS starts its responsibility of checking the RAM, the basic devices, and the PCI buses. It scans for devices and checks their responses.
• Once the initial check is done, the BIOS determines the boot device, typically the hard disk.
• The first sector of the boot device is read into memory and executed.
• The secondary boot loader present in that sector is read into memory.
• This loader reads in the operating system and starts it.

1.5 LET US SUM UP

• An Operating System is system software which acts as an intermediary between the user and the hardware.
• The Operating System acts as an extended machine by providing a level of abstraction.
• The Operating System is responsible for resource allocation.
• Five generations of computer operating systems have evolved.
• The different components of the hardware interact with the operating system, which in turn interacts with the other applications.

1.6 LIST OF REFERENCES

Modern Operating Systems, Fourth Edition, Andrew S. Tanenbaum, Herbert Bos
https://www.geeksforgeeks.org/generations-of-computer/

1.7 BIBLIOGRAPHY

Operating System Concepts, Silberschatz and Galvin

1.8 UNIT END QUESTIONS

1. Explain the third generation operating systems.
2. Define Operating System. Explain the role of the OS as an extended machine.
3. Write a short note on the fifth generation operating systems.
4. With a suitable diagram, explain the structure of the disk drive.
5. Explain the process of booting a computer.
6. Define Operating System. Explain how it can be used as a resource manager.

*****


2
OPERATING SYSTEM CONCEPTS

Unit Structure
2.0 Objectives
2.1 Introduction
2.2 Different Operating Systems
    2.2.1 Mainframe Operating Systems
    2.2.2 Server Operating Systems
    2.2.3 Multiprocessor Operating Systems
    2.2.4 Personal Operating Systems
    2.2.5 Handheld Operating Systems
    2.2.6 Embedded Operating Systems
    2.2.7 Sensor-Node Operating Systems
    2.2.8 Real-Time Operating Systems
    2.2.9 Smart Card Operating Systems
2.3 Operating System Concepts
2.4 System Calls
    2.4.1 System Calls for Process Management
    2.4.2 System Calls for File Management
    2.4.3 System Calls for Directory Management
    2.4.4 Windows Win32 API
2.5 Operating System Structure
    2.5.1 Monolithic Systems
    2.5.2 Layered Systems
    2.5.3 Microkernels
    2.5.4 Client-Server Systems
    2.5.5 Exokernels
2.6 Let Us Sum Up
2.7 List of References
2.8 Bibliography
2.9 Unit End Exercise

2.0 OBJECTIVES

The objectives of the chapter are as follows:


• To understand the operating system services provided to users and processes.
• To understand the various operating system structures.
• To describe the various types of operating systems.

2.1 INTRODUCTION

An operating system provides the environment within which programs are executed. It is important to understand the goals of the system, which will help us to select the algorithms and strategies for designing the system.

2.2 DIFFERENT OPERATING SYSTEMS

2.2.1 Mainframe Operating Systems:
• Mainframe operating systems are used in web servers of e-commerce websites or servers dedicated to business-to-business transactions.
• Mainframe operating systems are oriented in such a way that they can handle many jobs simultaneously.
• Mainframe operating systems can deal with a large volume of input/output transactions.
• The main services of mainframe operating systems are:
• to handle batch processing of jobs;
• to handle transaction processing of multiple requests;
• timesharing of servers, allowing multiple remote users to have access to the server.

2.2.2 Server Operating Systems:
• Server operating systems are the ones that run on machines which are dedicated servers.
• Solaris, Linux, and Windows are some examples of server operating systems.
• Server operating systems allow sharing of multiple resources such as hardware, files, or print services.
• Web pages are stored on a dedicated server to handle requests and responses.

2.2.3 Multiprocessor Operating Systems:
• Multiprocessor operating systems run on systems also known as parallel computers or multicomputers, depending upon how the multiple processors are connected and shared.


• These computers have high-speed communication mechanisms with strong connectivity.
• Personal computers are now also built with multiprocessor technology.
• Multiprocessor operating systems give high processing speed by combining multiple processors into a single system.

2.2.4 Personal Operating Systems:
• Personal operating systems are installed on machines used by a large number of ordinary users.
• They support multiprogramming, running multiple programs like Word, Excel, games, and Internet access simultaneously on one machine.
• Examples: Linux, Windows, Mac.

2.2.5 Handheld Operating Systems:
• Handheld operating systems are found in all handheld devices like smartphones and tablets, also known as Personal Digital Assistants.
• The most popular handheld operating systems in today's market are Android and iOS.
• These operating systems need powerful processors and are also paired with different types of sensors.

2.2.6 Embedded Operating Systems:
• Embedded operating systems are designed for devices which are not considered computers. These operating systems are preinstalled on the devices by the device manufacturer.
• All preinstalled software is in ROM, and no changes can be made to it by the users.
• The best examples of devices with embedded operating systems are washing machines, ovens, etc.

2.2.7 Sensor-Node Operating Systems:
• Sensor-node operating systems run on networks of tiny nodes equipped with sensors that communicate with each other and with a base station wirelessly.
• Each sensor node is a small computer with a CPU, RAM, ROM, and one or more sensors, operating under severe power and memory constraints.

2.2.8 Real-Time Operating Systems:
• Real-time operating systems have strict time constraints, due to which they are used in applications that are safety-critical.
• Real-time operating systems are classified into hard real-time and soft real-time systems.
• Hard real-time systems have very stringent time constraints: certain actions must occur at exactly the specified time. Components are tightly coupled in hard real-time systems.
• A soft real-time operating system is one where occasionally missing a deadline does not cause damage.


2.2.9 Smart Card Operating Systems:
• Smart card operating systems run on smart cards, which contain a processor chip embedded in the card.
• They operate under severe processing-power and memory constraints.
• These operating systems often handle a single function, such as making electronic payments, and are licensed software.

2.3 OPERATING SYSTEM CONCEPTS

Operating system concepts deal with processes, address spaces, files, and input/output devices.

Process: A process is a program in execution. Each process has an address space. All data related to a process is stored in a table called the process table. All the details needed for running the program are contained in the process. A process can reside in any one of five states during its lifetime. There are background and foreground processes running in the system carrying out different functions. These processes communicate with each other using interprocess communication.

Address space: The computer needs a mechanism to distinguish between processes sitting inside main memory. This is done by allocating each process its own address space. Computer addresses are 32 or 64 bits, giving an address space of 2^32 or 2^64 bytes. Virtual address spaces play an important role in dealing with the problem of insufficient memory space.

Files: Files are the data which the user wants to retrieve back from the computer. The operating system is supposed to maintain the data on the hard disk and retrieve it whenever needed. The operating system arranges files in the form of a hierarchy; the data goes inside directories.

Shell: The shell is the command interpreter for UNIX. The shell is the intermediary between the user at the terminal and the operating system. Every shell has a terminal for entering data and getting output. Instructions are given to the computer in the form of commands.

2.4 SYSTEM CALLS

A system call is the way a user program requests services from the kernel. System calls provide an interface to the services made available by an operating system.

Step-by-step explanation of the system call mechanism:
• When a process running a user program in user mode wants to read from a file, it has to execute a trap instruction to transfer control to the operating system.


• The read system call has three parameters: the first specifying the file, the second pointing to the buffer, and the third giving the number of bytes to read:
• count = read(fd, buffer, nbytes);
• The parameters are first pushed onto the stack (steps 1–3).
• The library procedure is then called (step 4).
• The library procedure, possibly written in assembly language, typically puts the system-call number in a place where the operating system expects it, such as a register (step 5).
• Then it executes a TRAP instruction to switch from user mode to kernel mode and start execution at a fixed address within the kernel (step 6).
• The kernel code that starts following the TRAP examines the system-call number and then dispatches to the correct system-call handler, usually via a table of pointers to system-call handlers indexed on the system-call number (step 7).
• At that point the system-call handler runs (step 8).
• Once it has completed its work, control may be returned to the user-space library procedure at the instruction following the TRAP instruction (step 9).
• This procedure then returns to the user program in the usual way procedure calls return (step 10).
Figure 2.1. System call for read
Reference: Modern Operating Systems, Fourth Edition, Andrew S. Tanenbaum, Herbert Bos

2.4.1 System calls for Process management:
• The system call to create a new (duplicate) process is fork.


• The duplicate process is an exact copy of the original: its open file descriptors and register values are the same as the parent's.
• The original process is known as the parent process and the duplicate is known as the child process.
• The fork call returns a value, which is zero in the child and equal to the child's PID (Process IDentifier) in the parent.
• The exit system call requests the service of terminating a process.
• Loading a new program, i.e. replacing the original memory image, requires execution of exec.
• The returned PID helps to distinguish between the child and parent processes.
• Examples of process management system calls in Linux:
• fork: creates a duplicate process from the parent process
• wait: lets a process wait for another process to complete its work
• exec: loads the selected program into memory
• exit: terminates the process

2.4.2 System calls for File management:
• A file is opened using the system call open.
• The mode in which the file is to be opened is specified using a parameter. The parameters also include the name of the file to open, or of a new one to be created.
• Files are closed using the close system call.
• Associated with each file is a pointer that indicates the current position in the file.
• When reading (or writing) sequentially, it normally points to the next byte to be read (written). The lseek call changes the value of the position pointer, so that subsequent calls to read or write can begin anywhere in the file.
• lseek has three parameters: the first is the file descriptor for the file, the second is a file position, and the third tells whether the file position is relative to the beginning of the file, the current position, or the end of the file.
• Examples of system calls for file management:
• open: opens a file for reading or writing
• close: closes an opened file
• read: reads data from a file into a buffer
• write: writes data from a buffer into a file


2.4.3 System calls for Directory management:
• mkdir is a system call that creates an empty directory, whereas rmdir removes an empty directory.
• link allows the same file to appear under two or more names, often in different directories. This allows several members of the same programming team to share a common file, with each of them having the file appear in his or her own directory, possibly under different names.
• By executing the mount system call, a USB file system can be attached to the root file system.
• The mount call makes it possible to integrate removable media into a single integrated file hierarchy, without having to worry about which device a file is on.

2.4.4 Windows Win32 API:
• Windows programs are event driven: an event occurs, and that calls the procedure to handle it. Windows functioning is mostly driven by GUI-based interactions like mouse movements. There are system calls which exist exclusively in Windows to deal with the GUI, and many of the system calls present in UNIX are missing here. The following are some Win32 calls:
• CreateProcess: creates a new process in Win32
• WaitForSingleObject: waits for a process to exit
• ExitProcess: terminates the execution of a process
• CreateFile: opens an existing file or creates a new one

2.5 OPERATING SYSTEM STRUCTURE

2.5.1 Monolithic Systems:
• In the monolithic approach the entire operating system runs as a single program in kernel mode.
• The operating system is written as a collection of procedures, linked together into a single large executable program.
• Each procedure in the system is free to call any other procedure.
• Being able to call any procedure makes the system very efficient.
• There is no information hiding: every procedure is visible to every other procedure.
• E.g., MS-DOS and Linux.
• This organization suggests a basic structure for the operating system:
• A main function that invokes the requested service procedure
• Service procedures that carry out the system calls
• Utility functions that help the service procedures perform certain tasks


Disadvantages:
• A difficult and complicated structure
• A crash in any of these procedures will take down the entire operating system
Figure 2.2. Monolithic structure
Reference: Modern Operating Systems, Fourth Edition, Andrew S. Tanenbaum, Herbert Bos

2.5.2 Layered Systems:
Figure 2.3. Layered structure
Reference: Modern Operating Systems, Fourth Edition, Andrew S. Tanenbaum, Herbert Bos

• The operating system is organized as a hierarchy of layers, each one constructed upon the one below it. The first system constructed in this way was the THE system. The same concept of a layered approach was also implemented by MULTICS with concentric rings. The procedures


in outer rings are supposed to make a system call to access the procedures in the inner rings.
• The diagram reflects the structure of the THE operating system, with the following details:
• Layer 0 dealt with allocation of the processor, switching between processes when interrupts occurred or timers expired.
• Layer 1 did the memory management. It allocated space for processes in main memory.
• Layer 2 handled communication between each process and the operator console.
• Layer 3 took care of managing the I/O devices and buffering the information streams.
• Layer 4 was where the user programs were found.
• Layer 5 was where the system operator process was located.
• In MULTICS, an outer-ring call was a TRAP instruction whose parameters were carefully checked for validity before the call was allowed to proceed.

2.5.3 Microkernels:
• The microkernel structure focuses on making the kernel smaller by removing nonessential components from the kernel. These nonessential components are placed in user space.
• The basic idea behind the microkernel design is to achieve high reliability by splitting the operating system up into small, well-defined modules.
• Only one of these modules, the microkernel, runs in kernel mode.
• The main function of the microkernel is to provide a communication facility between the client program and the various services that also run in user space.

•All new services are added to user space, so the kernel does not need to be modified.
•The microkernel provides high security and reliability: since most services run in user space, if a service fails the rest of the operating system remains untouched.
Disadvantage:
•Performance decreases due to the overhead of communication between user space and kernel space.

2.5.4 Client Server System:
•The system is split into servers, each of which provides some service, and clients, which use these services. This model is known as the client-server model.
•Since clients communicate with servers by sending messages, the clients need not know whether the messages are handled locally on their own machines or are sent across a network to servers on a remote machine.
•As far as the client is concerned, requests are sent and replies come back.
•Thus the client-server model is an abstraction that can be used for a single machine or for a network of machines.

2.5.5 Exokernel:
•The exokernel runs in the bottom layer, in kernel mode.
•Its job is to allocate resources to virtual machines and then check attempts to use them, to make sure no machine is trying to use somebody else's resources.
•The advantage of the exokernel scheme is that it saves a layer of mapping: a virtual machine monitor must maintain tables to remap disk addresses, whereas the exokernel need only keep track of which virtual machine has been assigned which resource.

2.6 LET US SUM UP
1. Different types of operating systems are used in different types of machines depending upon the needs of the user. Some of them are mainframe operating systems, server operating systems, embedded operating systems, and handheld operating systems.
2. System calls explain what the operating system does. Different types of system calls are used in operating system activities like file management, process creation, and directory management.
3. The structure of the operating system has evolved with time. The most common structures include monolithic, layered, and microkernel.

2.7 LIST OF REFERENCE
Modern Operating Systems, Fourth edition, Andrew S. Tanenbaum, Herbert Bos

2.8 BIBLIOGRAPHY
Operating System Concepts, Eighth edition, Silberschatz, Galvin, Gagne

2.9 UNIT END EXERCISE
1. Explain the microkernel approach of operating system design.
2. Explain the client-server model.
3. List various operating systems. Explain any two.
4. With a suitable diagram, explain the structure of a disk drive.
5. What do you mean by a system call? Write system calls for directory management.
6. List and explain any five system calls used in file management.

*****


3
PROCESSES AND THREADS

Unit Structure
3.0 Objectives
3.1 Introduction
3.2 Process
3.2.1 Process Creation
3.2.2 Process Termination
3.2.3 Process State
3.3 Threads
3.3.1 Thread Usage
3.3.2 Classical Thread Model
3.3.3 Implementing threads in User Space
3.3.4 Implementing threads in Kernel Space
3.3.5 Hybrid Implementation
3.4 Interprocess Communication
3.4.1 Race Condition
3.4.2 Critical Region
3.4.3 Mutual Exclusion and Busy Waiting
3.4.4 Sleep and Wakeup
3.4.5 Semaphores
3.4.6 Mutex
3.5 Scheduling
3.5.1 First Come First Serve Scheduling
3.5.2 Shortest Job First Scheduling
3.5.3 Priority Scheduling
3.5.4 Round Robin Scheduling
3.5.5 Multiple Queue
3.6 Classical IPC Problems
3.6.1 Dining Philosophers
3.6.2 Reader Writer
3.7 Let us Sum Up
3.8 List of Reference
3.9 Bibliography
3.10 Unit End Questions


3.0 OBJECTIVES
The objectives of the chapter are as follows:
•To understand processes and threads and their importance in the operating system.
•To understand various concepts related to a process, like scheduling, termination, and creation.
•To understand interprocess communication between processes.

3.1 INTRODUCTION
•The most important concept of any operating system is the process, which is an abstraction of a running program.
•Processes support the ability to perform concurrent operations even with a single processor. Modern computing exists only because of processes.
•The operating system can make the computer more productive by switching the CPU between processes.

3.2 PROCESS
•Definition: A process is a program in execution.
•Running processes are organized as sequential processes. Every process needs the CPU to complete its execution; the CPU switches back and forth between these running processes.
•In any multiprogramming system, the CPU switches from process to process quickly, running each for tens or hundreds of milliseconds.
•A process is an activity of some kind. It has a program, input, output, and a state.
•A single processor may be shared among several processes, with some scheduling algorithm being used to determine when to stop work on one process and service a different one. In contrast, a program is something that may be stored on disk, not doing anything.

Process memory is divided into four sections:
•The text section comprises the compiled program code, read in from non-volatile storage when the program is launched.
•The data section stores global and static variables, allocated and initialized prior to executing main.
•The heap is used for dynamic memory allocation, and is managed via calls to new, delete, malloc, free, etc.
•The stack is used for local variables.


3.2.1 Process Creation:
Four principal events cause processes to be created:
1. System initialization:
•When an operating system is booted, numerous processes are created.
•Some of these are foreground processes: processes that interact with (human) users and perform work for them.
•Others run in the background; these are called daemons and are not associated with particular users, but instead have some specific function.
2. Execution of a process-creation system call by a running process:
•A running process will issue system calls to create one or more new processes to help it do its job.
3. A user request to create a new process:
•A new process is created by having an existing process execute a process-creation system call.
•In UNIX, the system call to create a new process is fork().
•In Windows, CreateProcess(), with 10 parameters, handles both process creation and loading of the correct program into the new process.
4. Initiation of a batch job:
•Users can submit batch jobs to the system.
•The operating system creates a new process and runs the next job from the input queue in it.

3.2.2 Process Termination:
A process can be terminated by a call to kill in UNIX or TerminateProcess in Windows. A process will be terminated due to one of the following reasons:
Normal exit:
•Most processes terminate when they have completed their work and execute a system call to exit.
•This call is exit() in UNIX and ExitProcess in Windows.
Fatal error:
•An involuntary termination occurs due to a program bug, like executing an illegal instruction, referencing non-existent memory, or dividing by zero.
Error exit:
•A process may also terminate when it discovers an error.
•For example, if a user types the command


•cc xyz.c
•to compile the program xyz.c, and no such file exists, the compiler simply announces this fact and exits.
Killed by another process:
A process executes a system call telling the operating system to kill some other process. In UNIX this call is kill; the corresponding Win32 function is TerminateProcess.

3.2.3 Process States:
Figure 3.1
Reference: "Operating System Concepts" by Abraham Silberschatz, Greg Gagne, and Peter Baer Galvin

The process model makes it easier to understand what is going on inside the system. Some of the processes run programs that carry out commands typed in by a user; other processes are part of the system processes. When a disk interrupt occurs, the system makes a decision to stop running the current process and run the disk process, which was blocked waiting for that interrupt.

Any process in the system is present in one of the following states:
New – The process is in the stage of being created.
Ready – The process has all the resources available that it needs to run, but the CPU is not currently working on this process's instructions.
Running – The CPU is working on this process's instructions.
Waiting – The process cannot run at the moment, because it is waiting for some resource to become available or for some event to occur. For example, the process may be waiting for keyboard input, a disk access request, inter-process messages, a timer to go off, or a child process to finish.
Terminated – The process has completed.
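On a POSIX system, the creation and termination calls described above (fork() to create a process, a normal exit to terminate it, and the parent collecting the child's status) can be sketched as follows. This is our illustration, not the textbook's; the helper name spawn_child is invented for the example:

```python
import os

def spawn_child(code=7):
    """Fork a child process; the child terminates immediately with a
    normal exit, and the parent waits for it and returns its exit status."""
    pid = os.fork()                   # process-creation system call (UNIX)
    if pid == 0:                      # child: an almost exact clone of the parent
        os._exit(code)                # normal exit
    _, status = os.waitpid(pid, 0)    # parent blocks until the child terminates
    return os.WEXITSTATUS(status)
```

While the parent sits inside waitpid() it is in the waiting state; the child passes through the new, ready, running, and terminated states described above.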

3.3 THREADS

3.3.1 Thread Usage:
▪A thread is a basic unit of CPU utilization, consisting of a program counter, a stack, and a set of registers.
▪A process has a single thread of control: there is one program counter, and one sequence of instructions that can be carried out at any given time.
▪By decomposing an application into multiple sequential threads that run in quasi-parallel, the programming model becomes simpler.
▪Threads have the ability to share an address space and all of its data among themselves. This ability is essential for certain applications.
▪Threads are lighter weight than processes; they are faster to create and destroy than processes.

3.3.2 Classical Thread Model:
▪A process contains a number of resources such as address space, open files, accounting information, etc.
▪In addition to these resources, a process has a thread of control, e.g., program counter, register contents, stack.
▪The idea of threads is to permit multiple threads of control to execute within one process.
▪This is often called multithreading, and threads are also known as lightweight processes.
▪Since threads in the same process share state and stack, switching between them is much less expensive than switching between separate processes.
▪Individual threads within the same process are not completely independent but are cooperating, and all are from the same process.
▪The shared resources make it easier for threads to use each other's resources.
▪A new thread in the same process is created by a library routine like thread_create; similarly, thread_exit terminates a thread.

3.3.3 Implementing threads in User Space:
▪The entire thread package is kept in user space and the kernel has no knowledge of it.
▪The kernel manages ordinary, single-threaded processes.
▪The threads run on top of a run-time system, which is a collection of procedures that manage threads, e.g., pthread_create, pthread_exit, pthread_join, and pthread_yield.
▪Each process needs to have its own private thread table to keep track of the threads in that process.


▪The thread table keeps track of each thread's properties.
▪Thread tables are managed by the run-time system.
Advantages:
▪Can be implemented on an OS that does not support threads, since the threads are implemented by a library.
▪Requires no modification to the operating system.
▪Gives better performance, as there is no context switching involved from the kernel.
▪Each process is allowed to have its own customized scheduling algorithm.
Disadvantages:
▪Implementing blocking system calls would cause all threads to stop.
▪If a thread starts running, no other thread can run unless the first thread voluntarily leaves the CPU.
Figure 3.2

3.3.4 Implementing threads in Kernel Space:
•The kernel manages the threads by maintaining a thread table that keeps track of all threads in the system.
•When a thread wants to create a new thread or destroy an existing thread, it makes a kernel call, which then does the creation or destruction by updating the kernel thread table.
•The kernel's thread table holds each thread's registers, state, and other information; the kernel also maintains the traditional process table to keep track of processes.
•Note that thread_create and its companions are now system calls, and hence much slower than their user-space counterparts.
Advantages:

•A thread that blocks causes no particular problem. The kernel can run another thread from this process or can run another process.
•Similarly, a page fault in one thread does not automatically block the other threads in the process.
Disadvantages:
•Relatively greater cost of creating and destroying threads in the kernel.
•When a signal comes in, deciding which thread should handle it is a problem.
Figure 3.3

3.3.5 Hybrid Implementation:
•A hybrid implementation combines the advantages of user-level threads with kernel-level threads. One way is to use kernel-level threads and then multiplex user-level threads onto some or all of them.
•This model provides maximum flexibility.
•The kernel is aware of only the kernel-level threads and schedules those.
•The user-level threads are created, destroyed, and scheduled just like user-level threads in a process that runs on an operating system without multithreading capability.
Figure 3.4
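Thread creation and joining, as described in the classical thread model above, can be sketched with a portable thread API. This is an illustrative sketch of ours; Python's threading module stands in for the pthread-style calls named in the text:

```python
import threading

results = []   # shared data: all threads in a process share the address space

def worker(n):
    # each thread has its own stack and program counter,
    # but reads and writes the shared list freely
    results.append(n * n)

# create and start four threads (analogous to thread_create)
threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
# wait for each thread to finish (analogous to pthread_join)
for t in threads:
    t.join()
```

All four threads append to the same list because they share the process's address space; only the order of completion is nondeterministic.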

3.4 INTERPROCESS COMMUNICATION
•Interprocess communication is a mechanism that allows the exchange of data between processes.
•It enables resource and data sharing between the processes without interference.
•It provides information about process status to other processes.
•Three problems are faced:
•How can one process pass information to another?
•The second has to do with making sure two or more processes do not get in each other's way.
•The third concerns proper sequencing when dependencies are present.

3.4.1 Race Condition:
1. In an operating system, processes that are working together may share some common storage that each one can read and write.
2. The shared storage may be in main memory.
3. Several processes access and manipulate the shared data simultaneously.
4. The final value of the shared data depends upon which process finishes last.
The figure below shows an example.
Figure 3.5
1. In the above example, a file name is entered in a special spooler directory for printing.
2. The printer daemon prints the files and then removes their names from the directory.
3. Imagine that our spooler directory has a very large number of slots, numbered 0, 1, 2, …, each one capable of holding a file name.

4. There are two shared variables:
5. in – a variable pointing to the next free slot.
6. out – a variable pointing to the next file to be printed.
7. Process A reads in and stores the value, 7, in its local variable next_free_slot. Just then a clock interrupt occurs and the CPU switches to process B.
8. Process B also reads in and gets a 7 in its local variable next_free_slot.
9. Process B now continues to run. It stores the name of its file in slot 7 and then goes off and does other things.
10. Eventually, process A runs again, starting from the place it left off.
11. It looks at next_free_slot, finds a 7 there, and writes its file name in slot 7, erasing the name that process B just put there.

3.4.2 Critical Region:
Definition: The part of the program where the shared memory is accessed is called the critical region or critical section.
A race condition can be avoided by ensuring that no two processes are ever in their critical regions at the same time.
The following four conditions are needed to have a good solution:
1. No two processes may be simultaneously inside their critical regions.
2. No assumptions may be made about speeds or the number of CPUs.
3. No process running outside its critical region may block any process.
4. No process should have to wait forever to enter its critical region.

3.4.3 Mutual Exclusion and Busy Waiting:
Mutual exclusion ensures that no other process will enter its critical region when one process is in its critical region. Following are the ways of achieving mutual exclusion:

3.4.3.1 Disabling interrupts:
•Each process disables all interrupts just after entering its critical section and re-enables all interrupts just before leaving the critical section.
•With interrupts turned off, the CPU cannot be switched to another process. Hence, no other process will enter its critical section and mutual exclusion will be achieved.
•Disabling interrupts is sometimes a useful technique within the kernel of an operating system, but it is not appropriate as a general mutual exclusion mechanism for user processes.
The reason is that it is unwise to give user processes the power to turn off interrupts.
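The effect of enforcing mutual exclusion on shared data can be demonstrated with a short sketch (ours, not the textbook's): several threads increment a shared counter, and a lock turns the increment into a critical region that only one thread may occupy at a time:

```python
import threading

counter = 0
lock = threading.Lock()

def worker(n):
    global counter
    for _ in range(n):
        with lock:           # enter critical region (mutual exclusion)
            counter += 1     # the shared-data update is now safe
                             # lock is released on leaving the with-block

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

With the lock, the final count is exactly 4 × 10,000; without it, the read-modify-write could interleave between threads and lost updates (a race condition) could leave the counter smaller.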


3.4.3.2 Lock Variables:
A single, shared lock variable is initially 0. When a process wants to enter its critical section, it first tests the lock. If the lock is 0, the process sets it to 1 and then enters the critical section. If the lock is already 1, the process just waits until the lock variable becomes 0. Thus, a 0 means that no process is in its critical section, and a 1 means to wait, since some process is in its critical section.

This technique has a drawback: suppose that one process reads the lock and sees that it is 0. Before it can set the lock to 1, another process is scheduled, runs, and sets the lock to 1. When the first process runs again, it will also set the lock to 1, and two processes will be in their critical regions at the same time.

3.4.3.3 Strict Alternation:
An integer variable 'turn' keeps track of whose turn it is to enter the critical section. Initially, process A inspects turn, finds it to be 0, and enters its critical section. Process B also finds it to be 0 and sits in a loop continually testing 'turn' to see when it becomes 1. Continuously testing a variable waiting for some value to appear is called busy waiting. Busy waiting wastes CPU time and should be avoided.

3.4.4 Sleep and Wakeup:
Sleep: A system call that causes the caller to block or remain suspended until another process wakes it up.
Wakeup: The process to be awakened is passed as a parameter to the wakeup system call.

3.4.4.1 Producer Consumer Problem (Bounded Buffer):
•The producer-consumer problem, also known as the bounded-buffer problem, assumes that there is a fixed-size buffer available.
•The goal is to suspend the producers when the buffer is full, to suspend the consumers when the buffer is empty, and to make sure that only one process at a time manipulates the buffer so there are no race conditions.
•Two processes share a common, fixed-size (bounded) buffer.
•The producer puts information into the buffer and the consumer takes information out.
•A problem arises in the following scenarios:
•The producer wants to put new data in the buffer, but the buffer is already full.


Solution: The producer goes to sleep, to be awakened when the consumer has removed data.
•The consumer wants to remove data from the buffer, but the buffer is already empty.
Solution: The consumer goes to sleep until the producer puts some data in the buffer and wakes the consumer up.
Conclusion:
This approach also leads to the same race conditions we have seen in the earlier approaches, because access to 'count' is unconstrained. The essence of the problem is that a wakeup call sent to a process that is not (yet) sleeping is lost.

3.4.5 Semaphore:
E. W. Dijkstra (1965) suggested the semaphore, an integer variable that counts the number of wakeups saved for future use. A semaphore could have the value 0, indicating that no wakeups were saved, or some positive value if one or more wakeups were pending.

Two operations are performed on semaphores, called down (sleep) and up (wakeup), which processes use to synchronize their activities. These operations are also known as wait(), denoted by P, and signal(), denoted by V:

    wait(S) {
        while (S <= 0)
            ;           /* busy wait */
        S--;
    }

    signal(S) {
        S++;
    }

3.4.6 Mutex:
•A mutex is a simplified version of the semaphore, used for managing mutual exclusion of shared resources.
•Mutexes are easy and efficient to implement and are useful in thread packages that are implemented entirely in user space.
•A mutex is a shared variable that can be in one of two states: unlocked or locked.
•A counting semaphore, by contrast, is initialized to the number of resources available.
•Each process that wishes to use a resource performs a wait() operation on the semaphore. When a process releases a resource, it performs a signal() operation.
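The down/up (wait/signal) operations above map directly onto counting semaphores in modern thread libraries. The following bounded-buffer sketch is our illustration, using Python's threading.Semaphore: empty counts the free slots, full counts the filled slots, and a mutex keeps manipulation of the buffer itself mutually exclusive:

```python
import threading
from collections import deque

N = 5                               # buffer capacity
buffer = deque()
empty = threading.Semaphore(N)      # counts free slots
full = threading.Semaphore(0)       # counts filled slots
mutex = threading.Lock()            # one thread at a time touches the buffer
consumed = []

def producer(items):
    for item in items:
        empty.acquire()             # down(empty): sleep if buffer is full
        with mutex:
            buffer.append(item)
        full.release()              # up(full): wake a sleeping consumer

def consumer(count):
    for _ in range(count):
        full.acquire()              # down(full): sleep if buffer is empty
        with mutex:
            consumed.append(buffer.popleft())
        empty.release()             # up(empty): wake a sleeping producer

p = threading.Thread(target=producer, args=(list(range(20)),))
c = threading.Thread(target=consumer, args=(20,))
p.start(); c.start()
p.join(); c.join()
```

Because the semaphores count saved wakeups, the lost-wakeup problem described in the sleep-and-wakeup discussion cannot occur here.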


•When the count for the semaphore goes to 0, all resources are being used.
•Any process that then wishes to use a resource will block until the count becomes greater than 0.

3.5 SCHEDULING
•The part of the operating system that makes the choice of which process to run next is called the scheduler, and the algorithm it uses is called the scheduling algorithm.
•Processes are of two types: compute-bound or input/output-bound. Compute-bound processes have long CPU bursts and infrequent I/O waits; I/O-bound processes have short CPU bursts and frequent I/O waits.
•The length of the CPU burst is an important factor: it takes the same time to issue the hardware request to read a disk block no matter how much or how little time it takes to process the data after they arrive.
•Scheduling is of two types, preemptive and non-preemptive. Scheduling algorithms are classified as batch, interactive, and real time.
•CPU scheduling takes place when one of the following conditions is true:
1. Switching of a process from the running state to the waiting state.
2. Switching of a process from the running state to the ready state.
3. Switching of a process from the waiting state to the ready state.
4. When a process terminates.
•Scheduling only under conditions 1 and 4 is called non-preemptive scheduling; scheduling that also acts under conditions 2 and 3 is preemptive scheduling.

3.5.1 First Come First Serve (FCFS):
•It is a non-preemptive algorithm where the ready queue is based on the FIFO principle.
•Processes are assigned to the CPU in the order they requested it.
•The strength of the FCFS algorithm is that it is easy to understand and equally easy to program.
•It has the major disadvantage of a high average waiting time.
•It also suffers from the convoy effect, wherein many small processes have to wait for a longer process to release the CPU.

Process  Burst Time  Arrival  Start  Wait  Finish  TA
1        24          0        0      0     24      24
2        3           0        24     24    27      27
3        3           0        27     27    30      30
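The Wait, Finish, and turnaround (TA) columns of the table can be reproduced with a short sketch (ours), for the simple case where all processes arrive at time 0:

```python
def fcfs(bursts):
    """FCFS for processes that all arrive at time 0, served in the
    order given. Returns (average wait, average turnaround)."""
    time, waits, tats = 0, [], []
    for burst in bursts:
        waits.append(time)     # waiting time equals the start time here
        time += burst
        tats.append(time)      # turnaround = finish - arrival (= finish)
    n = len(bursts)
    return sum(waits) / n, sum(tats) / n

# the three processes from the table above
avg_wait, avg_tat = fcfs([24, 3, 3])   # gives 17.0 and 27.0
```

Swapping the long burst to the end of the list would show how much FCFS averages depend on arrival order (the convoy effect).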


Gantt chart:

| P1 | P2 | P3 |
0    24   27   30

average waiting time: (0 + 24 + 27)/3 = 17
average turnaround time: (24 + 27 + 30)/3 = 27

3.5.2 Shortest Job First (SJF):
•Each process is associated with the length of its next CPU burst.
•According to the algorithm, the scheduler selects the process with the shortest time.
•SJF is of two types:
•Non-preemptive: a process once scheduled will continue running until the end of its CPU burst time.
•Preemptive, also known as shortest remaining time next (SRTN): a process is preempted if a new process arrives with a CPU burst of less length than the remaining time of the currently executing process.
•SJF is an optimal algorithm which gives the minimum average waiting time for any set of processes, but it suffers from the drawback of assuming that the run times are known in advance.

SJF (Non-Preemptive):

Process  Burst Time  Arrival  Start  Wait  Finish  TA
   1          6         0       3      3      9     9
   2          8         0      16     16     24    24
   3          7         0       9      9     16    16
   4          3         0       0      0      3     3

Gantt chart:

| P4 | P1 | P3 | P2 |
0    3    9    16   24

average waiting time: (3 + 16 + 9 + 0)/4 = 7
average turnaround time: (9 + 24 + 16 + 3)/4 = 13

SJF (Preemptive / SRTN):

Process  Burst Time  Arrival  Start  Wait  Finish  TA
   1          8         0       0      9     17    17
   2          4         1       1      0      5     4
   3          9         2      17     15     26    24
   4          5         3       5      2     10     7

average waiting time: (9 + 0 + 15 + 2)/4 = 6.5
average turnaround time: (17 + 4 + 24 + 7)/4 = 13


3.5.3 Priority Scheduling:
•Priority scheduling associates a priority number with each process in its PCB block.
•The runnable process with the highest priority is assigned to the CPU.
•A clash of two processes with the same priority is handled using FCFS.
•The need to take external factors into account leads to priority scheduling.
•To prevent high-priority processes from running indefinitely, the scheduler may decrease the priority of the currently running process at each clock tick.
•Priorities can be assigned to processes statically or dynamically.
•The algorithm faces starvation: low-priority processes may never execute, as they may have to wait indefinitely for the CPU; therefore, as a solution, ageing is attached to each process.

Process  Burst Time  Priority  Arrival  Start  Wait  Finish  TA
   1         10         3         0       6      6     16    16
   2          1         1         0       0      0      1     1
   3          2         4         0      16     16     18    18
   4          1         5         0      18     18     19    19
   5          5         2         0       1      1      6     6

average waiting time: (6 + 0 + 16 + 18 + 1)/5 = 8.2
average turnaround time: (16 + 1 + 18 + 19 + 6)/5 = 12

3.5.4 Round Robin Scheduling (RR):
•The Round Robin scheduling algorithm has been designed specifically for time-sharing systems.
•A time quantum or time slice is a small unit of time defined after which preemption of the running process takes place.
•The ready queue is based on FIFO order, with each process getting access in a circular manner.
•The RR scheduling algorithm is thus preemptive. If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units.
•Each process must wait no longer than (n - 1) × q time units until its next time quantum.
•The selection of the time slice (q) plays an important role.


If q is very large, Round Robin behaves like FCFS; if q is very small, it will result in too many context switches, leading to overhead.

Process  Burst Time  Arrival  Start  Wait  Finish  TA
   1         24         0       0      6     30    30
   2          3         0       4      4      7     7
   3          3         0       7      7     10    10

Time quantum = 4

Gantt chart:

| P1 | P2 | P3 | P1 |
0    4    7    10   30

Average waiting time: (6 + 4 + 7)/3 = 5.67
Average turnaround time: (30 + 7 + 10)/3 = 15.67

3.5.5 Multiple Queues:
•A division is made between foreground (interactive) processes and background (batch) processes.
•These two types of processes have different response-time requirements and so may have different scheduling needs.
•Foreground processes may have priority over background processes.
•A multilevel queue scheduling algorithm partitions the ready queue into several separate queues.
•The processes are permanently assigned to one queue, generally based on some property of the process, such as memory size, process priority, or process type.
•Each queue has its own scheduling algorithm.
•The foreground queue might be scheduled by an RR algorithm, while the background queue is scheduled by an FCFS algorithm.
•There must also be scheduling among the queues, which is commonly implemented as fixed-priority preemptive scheduling.

3.6 CLASSICAL IPC PROBLEMS

3.6.1 Dining Philosophers Problem:
•Five silent philosophers sit at a round table with bowls of spaghetti. Forks are placed between each pair of adjacent philosophers.
•Each philosopher must alternately think and eat.


•However, a philosopher can only eat spaghetti when he has both the left and right forks. Each fork can be held by only one philosopher, and so a philosopher can use a fork only if it is not being used by another philosopher.
•After he finishes eating, he needs to put down both forks so they become available to others.
•A philosopher can take the fork on his right or the one on his left as they become available, but cannot start eating before getting both of them.
•The problem is how to design a discipline of behaviour (a concurrent algorithm) such that no philosopher will starve.
•Mutual exclusion is the basic idea of the problem; the dining philosophers create a generic and abstract scenario useful for explaining issues of this type.
•The failures these philosophers may experience are analogous to the difficulties that arise in real computer programming when multiple programs need exclusive access to shared resources.

Problem:
The dining philosophers problem suffers from deadlock when everyone wants to eat simultaneously: if all five philosophers take their left forks simultaneously, none will be able to take their right forks, and there will be a deadlock.

The second problem, starvation, arises when the philosophers all start the algorithm simultaneously, picking up their left forks, seeing that their right forks are not available, putting down their left forks, waiting, picking up their left forks again simultaneously, and so on, forever.
Figure 3.6 Dining Philosophers Problem


3.6.2 Readers and Writers Problem:
The dining philosophers problem is useful for modelling processes that are competing for exclusive access to a limited number of resources, such as I/O devices. The readers and writers problem, in contrast, models access to a database.
▪There is a data area that is shared among a number of processes.
▪Any number of readers may simultaneously read from the data area, but only one writer at a time may write to the data area.
▪If a writer is writing to the data area, no reader may read it.
▪If there is at least one reader reading the data area, no writer may write to it.
▪Readers only read and writers only write.
▪Consider, for example, an airline reservation system, with many competing processes wishing to read and write it.
▪It is acceptable to have multiple processes reading the database at the same time, but if one process is updating the database, no other process should access the database, not even readers.
▪To avoid writer starvation, the program could be written slightly differently: when a reader arrives and a writer is waiting, the reader is suspended behind the writer instead of being admitted immediately.

3.7 LET US SUM UP
•Processes can communicate with one another using interprocess communication primitives.
•A process can be running, runnable, or blocked and can change state when it or another process executes one of the interprocess communication primitives.
•Interprocess communication primitives can be used to solve such problems as the producer-consumer, dining philosophers, and reader-writer problems.

3.8 LIST OF REFERENCES
•Modern Operating Systems, Fourth edition, Andrew S. Tanenbaum, Herbert Bos
•Operating System Concepts, Eighth edition, Silberschatz, Galvin, Gagne
•http://academic.udayton.edu/SaverioPerugini/courses/cps346/lecture_notes/scheduling.html


3.9 BIBLIOGRAPHY
Operating Systems: Internals and Design Principles, William Stallings

3.10 UNIT END QUESTIONS
1. Write a short note on process termination.
2. Write a short note on the process model.
3. What is a race condition? How does mutual exclusion handle race conditions?
4. With a suitable example, explain the shortest job first scheduling algorithm.
5. Explain round robin scheduling with a proper example.

*****


UNIT II

4
MEMORY MANAGEMENT

Unit Structure
4.0 Objectives
4.1 Introduction
4.2 Address Space
4.3 Virtual Memory
4.4 Let us Sum Up
4.5 List of Reference
4.6 Bibliography
4.7 Unit End Questions

4.0 OBJECTIVES
•Description of various ways of organizing memory hardware.
•Techniques of allocating memory to processes.
•How paging works in contemporary computer systems.
•To describe the benefits of a virtual memory system.
•To explain the concepts of demand paging, page-replacement algorithms, and allocation of page frames.
•To discuss the principles of the working-set model.
•To examine the relationship between shared memory and memory-mapped files.
•To explore how kernel memory is managed.

4.1 INTRODUCTION
We showed how the CPU can be shared by a set of processes. As a result of CPU scheduling, we can improve both the utilization of the CPU and the speed of the computer's response to its users. To realize this increase in performance, however, we must keep several processes in memory; that is, we must share memory. In this chapter, we discuss various ways to manage memory. The memory management algorithms vary from a primitive bare-machine approach to paging and segmentation strategies. Each approach has its own advantages and disadvantages. Selection of a memory-management method for a specific system depends on many factors, especially on the hardware design of the system. As we shall see, many algorithms require hardware support, leading many


systems to have closely integrated hardware and operating-system memory management.

4.2 ADDRESS SPACE
An address space defines a range of discrete addresses, each of which may correspond to a network host, peripheral device, disk sector, a memory cell or other logical or physical entity.

For software programs to save and retrieve stored data, each unit of data must have an address where it can be individually located, or else the program will be unable to find and manipulate the data. The number of address spaces available will depend on the underlying address structure, and these will usually be limited by the computer architecture being used.

4.2.1 Logical Versus Physical Address Space:
An address generated by the CPU is commonly referred to as a logical address, whereas an address seen by the memory unit (that is, the one loaded into the memory-address register of the memory) is commonly referred to as a physical address.

The compile-time and load-time address-binding methods generate identical logical and physical addresses. However, the execution-time address-binding scheme results in differing logical and physical addresses.

Fig: Converting logical addresses to physical addresses

4.2.2 Address Mapping & Translation:
Another common feature of address spaces are mappings and translations, often forming numerous layers. This usually means that some higher-level address must be translated to lower-level ones in some way. For example, a file system on a logical disk operates on linear sector numbers, which have to be translated to absolute LBA sector addresses, in simple cases via addition of the partition's first sector address. Then, for a disk drive connected via Parallel ATA, each of them must be converted to a logical cylinder-head-sector address due to the interface's historical shortcomings. It is converted back to LBA by the disk controller and then, finally, to physical cylinder, head and sector numbers.


Fig: Illustration of translation from logical block addressing to physical geometry

4.2.3 Virtual Address Space to Physical Address Space:
The Domain Name System maps its names to (and from) network-specific addresses (usually IP addresses), which in turn may be mapped to link layer network addresses via the Address Resolution Protocol. Also, network address translation may occur on the edge of different IP spaces, such as a local area network and the Internet.

An iconic example of virtual-to-physical address translation is virtual memory, where different pages of the virtual address space map either to the page file or to the main memory physical address space. It is possible that several numerically different virtual addresses all refer to one physical address and hence to the same physical byte of RAM. It is also possible that a single virtual address maps to zero, one, or more than one physical address.
Fig: Illustration of translation from virtual address space to physical address space.


4.2.4 Types of Memory Address:
The operating system takes care of mapping the logical addresses to physical addresses at the time of memory allocation to the program. There are three types of addresses used in a program before and after memory is allocated:

S.N.  Memory Addresses & Description
1     Symbolic addresses: The addresses used in a source code. The variable names, constants, and instruction labels are the basic elements of the symbolic address space.
2     Relative addresses: At the time of compilation, a compiler converts symbolic addresses into relative addresses.
3     Physical addresses: The loader generates these addresses at the time when a program is loaded into main memory.

4.2.4.1 Symbolic Addressing:
An addressing scheme whereby reference to an address is made by some convenient symbol that (preferably) has some relationship to the meaning of the data expected to be located at that address. It serves as an aid to the programmer. The symbolic address is replaced by some form of computable/computed address during the operation of an assembler or compiler.

4.2.4.2 Relative Addressing:
This is the technique of addressing instructions and data areas by designating their location in relation to the location counter or to some symbolic location. This type of addressing is always in bytes, never in bits, words, or instructions. Thus, the expression *+4 specifies an address that is 4 bytes greater than the current value of the location counter. In the sequence of instructions in the following example, the location of the CR machine instruction can be expressed in two ways, ALPHA+2 or BETA-4, because all the machine instructions in the example are 2-byte instructions.
Fig: Relative Addressing


4.2.4.3 Physical Address:
This identifies a physical location of required data in memory. The user never directly deals with the physical address but can access it by its corresponding logical address. The user program generates the logical address and thinks that the program is running in this logical address, but the program needs physical memory for its execution; therefore, the logical address must be mapped to the physical address by the MMU before it is used. The term Physical Address Space is used for all physical addresses corresponding to the logical addresses in a Logical address space.
4.2.5 Difference between logical address and physical address:

Parameter     | Logical Address                                        | Physical Address
Basic         | Generated by the CPU                                   | Located in a memory unit
Address Space | Set of all logical addresses generated by the CPU      | All physical addresses mapped to the corresponding logical addresses
Visibility    | The user can view the logical address of the programme | The user can never view the physical address of the programme
Generation    | Generated by the CPU                                   | Computed by the MMU
Access        | The user can use the logical address to access the physical address | The user can indirectly access the physical address, but not directly

4.3 VIRTUAL MEMORY
The memory-management algorithms outlined earlier are necessary because of one basic requirement: the instructions being executed must be in physical memory. The first approach to meeting this requirement is to place the entire logical address space in physical memory.


Dynamic loading can help to ease this restriction, but it generally requires special precautions and extra work by the programmer.

A computer can address more memory than the amount physically installed on the system. This extra memory is actually called virtual memory, and it is a section of a hard disk that is set up to emulate the computer's RAM.

The main visible advantage of this scheme is that programs can be larger than physical memory. Virtual memory serves two purposes. First, it allows us to extend the use of physical memory by using a disk. Second, it allows us to have memory protection, because each virtual address is translated to a physical address.

Following are the situations when the entire program is not required to be loaded fully in main memory:
•User-written error handling routines are used only when an error occurs in the data or computation.
•Certain options and features of a program may be used rarely.
•Many tables are assigned a fixed amount of address space even though only a small amount of the table is actually used.
•The ability to execute a program that is only partially in memory would confer many benefits.
•Fewer I/O operations would be needed to load or swap each user program into memory. A program would no longer be constrained by the amount of physical memory that is available.
•Each user program could take less physical memory; more programs could be run at the same time, with a corresponding increase in CPU utilization and throughput.

In modern microprocessors intended for general-purpose use, a memory management unit, or MMU, is built into the hardware. The MMU's job is to translate virtual addresses into physical addresses. A basic example is given below.


Virtual memory is commonly implemented by demand paging. It can also be implemented in a segmentation system. Demand segmentation can also be used to provide virtual memory.

Virtual memory involves the separation of logical memory as perceived by users from physical memory. This separation allows an extremely large virtual memory to be provided for programmers when only a smaller physical memory is available. Virtual memory makes the task of programming much easier, because the programmer no longer needs to worry about the amount of physical memory available; she can concentrate instead on the problem to be programmed.

4.3.1 Demand Paging:
A demand paging system is quite similar to a paging system with swapping, where processes reside in secondary memory and pages are loaded only on demand, not in advance. When a context switch occurs, the operating system does not copy any of the old program's pages out to the disk or any of the new program's pages into the main memory. Instead, it just begins executing the new program after loading the first page, and fetches that program's pages as they are referenced.

While executing a program, if the program references a page which is not available in the main memory because it was swapped out a little while ago, the processor treats this invalid memory reference as a page fault and transfers control from the program to the operating system to demand the page back into the memory.


Advantages:
Following are the advantages of demand paging:
•Large virtual memory.
•More efficient use of memory.
•There is no limit on the degree of multiprogramming.

Disadvantages:
The number of tables and the amount of processor overhead for handling page interrupts are greater than in the case of the simple paged management techniques.

4.3.4 Page Replacement Algorithms:
Page replacement algorithms are the techniques by which an operating system decides which memory pages to swap out (write to disk) when a page of memory needs to be allocated. Paging happens whenever a page fault occurs and a free page cannot be used for the allocation, either because no pages are available or because the number of free pages is lower than required.

When the page that was selected for replacement and was paged out is referenced again, it has to be read in from disk, and this requires waiting for I/O completion. This process determines the quality of the page replacement algorithm: the less time spent waiting for page-ins, the better the algorithm.

A page replacement algorithm looks at the limited information about accesses to the pages provided by hardware, and tries to select which pages should be replaced to minimize the total number of page misses, while balancing this against the costs of primary storage and of the processor time of the algorithm itself. There are many different page replacement algorithms. We evaluate an algorithm by running it on a particular string of memory references and computing the number of page faults.

4.3.5 Optimal Page Algorithm:
•An optimal page-replacement algorithm has the lowest page-fault rate of all algorithms. An optimal page-replacement algorithm exists and has been called OPT or MIN.
•Replace the page that will not be used for the longest period of time, using the time when a page is next to be used.


4.3.6 Least Recently Used (LRU) Algorithm:
•The page which has not been used for the longest time in main memory is the one which will be selected for replacement.
•It is easy to implement: keep a list, and replace pages by looking back into time.

4.3.7 Page Buffering Algorithm:
•To get a process started quickly, keep a pool of free frames.
•On a page fault, select a page to be replaced.
•Write the new page into a frame of the free pool, mark the page table, and restart the process.
•Now write the dirty page out to disk and place the frame holding the replaced page in the free pool.


4.3.8 Least Frequently Used (LFU) Algorithm:
•The page with the smallest reference count is the one which will be selected for replacement.
•This algorithm suffers in the situation in which a page is used heavily during the initial phase of a process but then is never used again.

4.3.9 Most Frequently Used (MFU) Algorithm:
•This algorithm is based on the argument that the page with the smallest count was probably just brought in and has yet to be used.

4.7 SUMMARY
•Memory management is the process of controlling and coordinating computer memory, allocating portions called blocks to various running programs to optimize the overall performance of the system.
•It allows you to check how much memory needs to be allocated to processes, deciding which process should get memory at what time.
•In Single Contiguous Allocation, all of the computer's memory, excluding a small portion reserved for the OS, is available to one application.
•The Partitioned Allocation method splits primary memory into various memory partitions, which are mostly contiguous areas of memory.

4.8 UNIT END QUESTIONS
1. What is memory management?
2. Why use memory management?
3. Explain memory management techniques.
4. What is swapping?
5. What is memory allocation?
6. Explain page replacement algorithms.
7. Solve page replacement algorithm numericals.

*****


5
PAGING AND SEGMENTATION
Unit Structure
5.0 Objectives
5.1 Memory management goals
5.2 Segmentation
5.3 Paging
5.4 Page replacement algorithms
5.5 Design issues for paging system
5.6 Summary
5.7 Unit End Questions

5.0 OBJECTIVES OF A MEMORY MANAGEMENT (MM) SYSTEM
Relocation:
•Relocatability - the ability to move a process around in memory without affecting its execution.
•The OS manages memory, not the programmer, and processes may be moved around in memory.
•MM must convert the program's logical addresses into physical addresses.
•The process's first address is stored as virtual address 0.
•Static Relocation - the program must be relocated before or during loading of the process into memory. Programs must always be loaded into the same address space in memory, or the relocator must be run again.
•Dynamic Relocation - the process can be freely moved around in memory. Virtual-to-physical address space mapping is done at run-time.

5.1 MEMORY MANAGEMENT GOALS
Protection:
•Write Protection - to prevent data & instructions from being overwritten.
•Read Protection - to ensure privacy of data & instructions.
•The OS needs to be protected from user processes, and user processes need to be protected from each other.
•Memory protection (to prevent memory overlaps) is usually supported by the hardware (limit registers), because most languages allow memory addresses to be computed at run-time.


Sharing:
•Sometimes distinct processes may need to execute the same code (e.g., many users executing the same editor), or even share the same data (when one process prepares data for another process).
•When different processes signal or wait on the same semaphore, they need to access the same memory address.
•The OS has to allow sharing, while at the same time ensuring protection.

Logical Organisation of Memory:
•Uni-dimensional address space.
•If memory were segmented then it would be possible to code programs and subroutines separately, each with its own degree of protection.
•The MM would manage inter-segment references at run-time, and could allow a segment to be accessed by many different processes.

Physical Organisation of Memory:
•Physical memory (PM) is expensive, so it tends to be limited - but the amount of PM helps to determine the degree of multiprogramming (the number of runnable processes that can be simultaneously maintained).
•A two-level storage scheme (one level RAM, the other slower secondary disk) can be used to virtually increase the overall amount of PM.
•Processes can be kept in secondary storage and only brought into PM when needed. The MM and OS have to manage the operation of moving processes between the two levels.

Paging and segmentation:
Paging and segmentation are processes by which data is stored to, then retrieved from, a computer's storage disk.
Paging is a computer memory management function that presents storage locations to the computer's CPU as additional memory, called virtual memory. Each piece of data needs a storage address.
Segmentation is a virtual process that creates variable-sized address spaces in computer storage for related data, called segments. This process speeds retrieval.
Managing computer memory is a basic operating system function - both paging and segmentation are basic functions of the OS. No system can efficiently rely on limited RAM alone. So the computer's memory

management unit (MMU) uses the storage disk, HDD or SSD, as virtual memory to supplement RAM.

What is Paging?:
As mentioned above, the memory management function called paging specifies storage locations to the CPU as additional memory, called virtual memory. The CPU cannot directly access the storage disk, so the MMU emulates memory by mapping pages to frames that are in RAM. Before we launch into a more detailed explanation of pages and frames, let's define some technical terms.
•Page: A fixed-length contiguous block of virtual memory residing on disk.
•Frame: A fixed-length contiguous block located in RAM, whose size is identical to that of a page.
•Physical memory: The computer's random access memory (RAM), typically contained in DIMM cards attached to the computer's motherboard.
•Virtual memory: A portion of an HDD or SSD that is reserved to emulate RAM. The MMU serves up virtual memory from disk to the CPU to reduce the workload on physical memory.
•Virtual address: The CPU generates a virtual address for each active process. The MMU maps the virtual address to a physical location in RAM and passes the address to the bus. A virtual address space is the range of virtual addresses under CPU control.
•Physical address: The physical address is a location in RAM. The physical address space is the set of all physical addresses corresponding to the CPU's virtual addresses; it is the range of physical addresses under MMU control.
Fig 5.0: Paging concept

By assigning an address to a piece of data using a "page table" between the CPU and the computer's physical memory, a computer's MMU enables the system to retrieve that data whenever needed.

The Paging Process:
A page table stores the definition of each page. When an active process requests data, the MMU retrieves the corresponding pages into frames located in physical memory for faster processing. This process is called paging.
The MMU uses page tables to translate virtual addresses to physical ones. Each table entry indicates where a page is located: in RAM or on disk as virtual memory. Tables may have a single or multi-level page table, such as different tables for applications and segments.
However, constant table lookups can slow down the MMU. A memory cache called the Translation Lookaside Buffer (TLB) stores recent translations of virtual to physical addresses for rapid retrieval. Many systems have multiple TLBs, which may reside at different locations, including between the CPU and RAM, or between multiple page table levels.
Different frame sizes are available for data sets with larger or smaller pages and matching-sized frames. 4 KB to 2 MB are common sizes, and GB-sized frames are available in high-performance servers.

Paging with Example:
In operating systems, paging is a storage mechanism used to retrieve processes from the secondary storage into the main memory in the form of pages.
The main idea behind paging is to divide each process in the form of pages. The main memory will also be divided in the form of frames.
One page of the process is to be stored in one of the frames of the memory. The pages can be stored at different locations of the memory, but the priority is always to find contiguous frames or holes.
Pages of the process are brought into the main memory only when they are required; otherwise they reside in the secondary storage.
Different operating systems define different frame sizes. The sizes of each frame must be equal.
Considering the fact that the pages are mapped to the frames in paging, the page size needs to be the same as the frame size.
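The virtual-to-physical translation described above can be sketched in a few lines. This is an illustrative example, not a real MMU: the page table contents are invented, and a 1 KB page size is assumed to match the example that follows.

```python
PAGE_SIZE = 1024  # 1 KB pages, matching the frame size

# Hypothetical page table: page number -> frame number
page_table = {0: 5, 1: 2, 2: 7, 3: 0}

def translate(virtual_address):
    """Split a virtual address into (page number, offset) and map the
    page to its frame; the offset is unchanged within the frame."""
    page = virtual_address // PAGE_SIZE
    offset = virtual_address % PAGE_SIZE
    if page not in page_table:
        raise LookupError("page fault: page %d not in memory" % page)
    return page_table[page] * PAGE_SIZE + offset

# Virtual address 2300 lies in page 2 (offset 252), which is in frame 7:
print(translate(2300))  # 7 * 1024 + 252 = 7420
```

A reference to a page missing from the table is exactly a page fault, which is where the replacement algorithms of the previous sections take over.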


Fig 5.1A: Paging process

Example:
Let us consider the main memory size 16 KB and frame size 1 KB; therefore the main memory will be divided into a collection of 16 frames of 1 KB each.
There are 4 processes in the system, P1, P2, P3 and P4, of 4 KB each. Each process is divided into pages of 1 KB each so that one page can be stored in one frame.
Initially, all the frames are empty, therefore pages of the processes will get stored in a contiguous way.
Frames, pages and the mapping between the two are shown in the image below.
Fig 5.1B: Paging process

Let us consider that P2 and P4 are moved to a waiting state after some time. Now, 8 frames become empty and therefore other pages can be loaded in that empty place. The process P5 of size 8 KB (8 pages) is waiting inside the ready queue.
Given the fact that we have 8 non-contiguous frames available in the memory, and paging provides the flexibility of storing the process at different places, we can load the pages of process P5 in the place of P2 and P4.
Fig 5.1C: Paging process

Design Issues of Paging:
•The Working Set Model. In the purest form of paging, processes are started up with none of their pages in memory.
•Local versus Global Allocation Policies. In the preceding sections we have discussed several algorithms for choosing a page to replace when a fault occurs.
•Page Size.
•Virtual Memory Interface.

5.2 SEGMENTATION
In operating systems, segmentation is a memory management technique in which the memory is divided into variable size parts. Each part is known as a segment, which can be allocated to a process.

What is Segmentation?
The process known as segmentation is a virtual process that creates address spaces of various sizes in a computer system, called segments.

Each segment is a different virtual address space that directly corresponds to process objects.
The details about each segment are stored in a table called a segment table. The segment table is stored in one (or many) of the segments.
The segment table contains mainly two pieces of information about a segment:
1. Base: the base address of the segment.
2. Limit: the length of the segment.

5.2.1 Why is Segmentation required?
Till now, we were using paging as our main memory management technique. Paging is closer to the operating system than to the user. It divides all the processes into the form of pages regardless of the fact that a process can have some related parts of functions which need to be loaded in the same page.
The operating system doesn't care about the user's view of the process. It may divide the same function into different pages, and those pages may or may not be loaded at the same time into the memory. This decreases the efficiency of the system.
It is better to have segmentation, which divides the process into segments. Each segment contains the same type of functions: for example, the main function can be included in one segment and the library functions can be included in another segment.

5.2.2 Translation of logical address into physical address by the segment table
The CPU generates a logical address which contains two parts:
1. Segment Number
2. Offset
The segment number is mapped to the segment table. The limit of the respective segment is compared with the offset. If the offset is less than the limit then the address is valid, otherwise it throws an error as the address is invalid.
In the case of a valid address, the base address of the segment is added to the offset to get the physical address of the actual word in the main memory.
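The limit check and base addition described above can be shown directly. The segment table below is hypothetical; the point is only the two-step rule: validate the offset against the limit, then add the base.

```python
# Hypothetical segment table: segment number -> (base, limit)
segment_table = {0: (1400, 1000), 1: (6300, 400), 2: (4300, 1100)}

def translate(segment, offset):
    """Validate the offset against the segment's limit, then add the
    base address to obtain the physical address."""
    base, limit = segment_table[segment]
    if offset >= limit:
        raise ValueError("invalid address: offset %d exceeds limit %d"
                         % (offset, limit))
    return base + offset

print(translate(2, 53))   # 4300 + 53 = 4353
print(translate(0, 999))  # 1400 + 999 = 2399
# translate(1, 400) would raise ValueError: the offset is not below limit 400
```

Because each segment carries its own limit, an out-of-range offset is caught by the hardware before any memory access happens, which is how segmentation provides per-segment protection.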


Fig: Segmentation

Advantages of Segmentation:
1. No internal fragmentation.
2. Average segment size is larger than the actual page size.
3. Less overhead.
4. It is easier to relocate segments than the entire address space.
5. The segment table is of smaller size as compared to the page table in paging.

Disadvantages:
1. It can have external fragmentation.
2. It is difficult to allocate contiguous memory to variable sized partitions.
3. Costly memory management algorithms.

Difference between Paging and Segmentation:
S.NO | PAGING | SEGMENTATION
1. | In paging, program is divided into fixed or mounted size pages. | In segmentation, program is divided into variable size sections.
2. | For paging, the operating system is accountable. | For segmentation, the compiler is accountable.
3. | Page size is determined by hardware. | Here, the section size is given by the user.
4. | It is faster in comparison to segmentation. | Segmentation is slow.
5. | Paging could result in internal fragmentation. | Segmentation could result in external fragmentation.
6. | In paging, the logical address is split into page number and page offset. | Here, the logical address is split into section number and section offset.

7. | Paging comprises a page table which encloses the base address of every page. | Segmentation also comprises the segment table, which encloses segment number and segment offset.

Page Replacement Algorithms in Operating Systems:
In an operating system that uses paging for memory management, a page replacement algorithm is needed to decide which page needs to be replaced when a new page comes in.

Page Fault:
A page fault happens when a running program accesses a memory page that is mapped into the virtual address space, but not loaded in physical memory. Since actual physical memory is much smaller than virtual memory, page faults happen. In case of a page fault, the operating system might have to replace one of the existing pages with the newly needed page. Different page replacement algorithms suggest different ways to decide which page to replace. The target for all algorithms is to reduce the number of page faults.

Page Replacement Algorithms:
First In First Out (FIFO)
This is the simplest page replacement algorithm. In this algorithm, the operating system keeps track of all pages in the memory in a queue; the oldest page is at the front of the queue. When a page needs to be replaced, the page at the front of the queue is selected for removal.

Example-1: Consider the page reference string 1, 3, 0, 3, 5, 6, 3 with 3 page frames. Find the number of page faults.

Initially all slots are empty, so when 1, 3, 0 come they are allocated to the empty slots —> 3 page faults.
When 3 comes, it is already in memory so —> 0 page faults.
Then 5 comes; it is not available in memory so it replaces the oldest page slot, i.e. 1 —> 1 page fault.
6 comes; it is also not available in memory so it replaces the oldest page slot, i.e. 3 —> 1 page fault.
Finally when 3 comes it is not available, so it replaces 0 —> 1 page fault.

Optimal Page Replacement:
In this algorithm, the page is replaced which would not be used for the longest duration of time in the future.

Example-2: Consider the page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2 with 4 page frames. Find the number of page faults.
Initially all slots are empty, so when 7, 0, 1, 2 come they are allocated to the empty slots —> 4 page faults.
0 is already there so —> 0 page fault.
When 3 comes it will take the place of 7, because 7 is not used for the longest duration of time in the future —> 1 page fault.
0 is already there so —> 0 page fault.
4 will take the place of 1 —> 1 page fault.
Now for the further page reference string —> 0 page faults, because the pages are already available in the memory.
Optimal page replacement is perfect, but not possible in practice as the operating system cannot know future requests. The use of optimal page replacement is to set up a benchmark so that other replacement algorithms can be analyzed against it.
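The fault counts in these walkthroughs can be checked mechanically. Below is a small sketch (function names are our own) that counts page faults for FIFO and optimal replacement, run on the reference strings of Example-1 and Example-2:

```python
from collections import deque

def fifo_faults(refs, nframes):
    """Count page faults under FIFO replacement."""
    frames, queue, faults = set(), deque(), 0
    for page in refs:
        if page in frames:
            continue
        faults += 1
        if len(frames) == nframes:          # replace the oldest page
            frames.discard(queue.popleft())
        frames.add(page)
        queue.append(page)
    return faults

def optimal_faults(refs, nframes):
    """Count page faults under optimal (farthest-future-use) replacement."""
    frames, faults = set(), 0
    for i, page in enumerate(refs):
        if page in frames:
            continue
        faults += 1
        if len(frames) == nframes:
            future = refs[i + 1:]
            # Evict the resident page whose next use is farthest away;
            # pages never referenced again count as infinitely far.
            victim = max(frames, key=lambda p: future.index(p)
                         if p in future else float("inf"))
            frames.discard(victim)
        frames.add(page)
    return faults

print(fifo_faults([1, 3, 0, 3, 5, 6, 3], 3))                       # 6
print(optimal_faults([7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2], 4))  # 6
```

Both calls print 6, matching the step-by-step fault counts in the two examples above.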

Least Recently Used (LRU):
In this algorithm, the page will be replaced which is least recently used.

Example-3: Consider the page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2 with 4 page frames. Find the number of page faults.
Initially all slots are empty, so when 7, 0, 1, 2 come they are allocated to the empty slots —> 4 page faults.
0 is already there so —> 0 page fault.
When 3 comes it will take the place of 7 because 7 is least recently used —> 1 page fault.
0 is already there so —> 0 page fault.
4 will take the place of 1 —> 1 page fault.
Now for the further page reference string —> 0 page faults, because the pages are already available in the memory.

5.5 DESIGN ISSUES OF PAGING
•The Working Set Model. In the purest form of paging, processes are started up with none of their pages in memory.
•Local versus Global Allocation Policies. In the preceding sections we have discussed several algorithms for choosing a page to replace when a fault occurs.
•Page Size.
•Virtual Memory Interface.

5.6 SUMMARY
1. Paging is a storage mechanism that allows the OS to retrieve processes from the secondary storage into the main memory in the form of pages.
2. Fragmentation refers to the condition of a disk in which files are divided into pieces scattered around the disk.
3. Segmentation works almost similarly to paging. The only

difference between the two is that segments are of variable length, whereas, in the paging method, pages are always of fixed size.
4. Dynamic loading is a routine of a program which is not loaded until the program calls it.
5. Linking is a method that helps the OS to collect and merge various modules of code and data into a single executable file.

5.7 UNIT END QUESTIONS
1. What is Paging?
2. What is Segmentation? Paging vs. Segmentation.
3. Advantages of Paging.
4. Advantages of Segmentation and disadvantages of Paging.
5. Disadvantages of Segmentation.
6. Page replacement algorithm numericals.
*****


6
FILE SYSTEM
Unit Structure
6.0 Objectives
6.1 Introduction
6.2 File structure
6.3 File type
6.4 File access mechanism
6.5 Space Allocations
6.6 Let us Sum Up
6.7 List of Reference
6.8 Bibliography
6.9 Unit End Questions

6.0 OBJECTIVES
•Files
•Directories
•File system implementation
•File-system management and optimization
•MS-DOS file system
•UNIX V7 file system
•CD-ROM file system

6.1 FILE
A file is a named collection of related information that is recorded on secondary storage such as magnetic disks, magnetic tapes and optical disks. In general, a file is a sequence of bits, bytes, lines or records whose meaning is defined by the file's creator and user.

6.2 FILE STRUCTURE
A file structure should be according to a required format that the operating system can understand.
•A file has a certain defined structure according to its type.
•A text file is a sequence of characters organized into lines.
•A source file is a sequence of procedures and functions.


•An object file is a sequence of bytes organized into blocks that are understandable by the machine.
•When an operating system defines different file structures, it also contains the code to support these file structures. Unix and MS-DOS support a minimum number of file structures.

6.3 FILE TYPE
File type refers to the ability of the operating system to distinguish different types of files, such as text files, source files and binary files. Many operating systems support many types of files. Operating systems like MS-DOS and UNIX have the following types of files:

1) Ordinary files:
•These are the files that contain user information.
•These may have text, databases or executable programs.
•The user can apply various operations on such files like add, modify, delete, or even remove the entire file.

2) Directory files:
•These files contain a list of file names and other information related to these files.

3) Special files:
•These files are also known as device files.
•These files represent physical devices like disks, terminals, printers, networks, tape drives etc.
These files are of two types:
•Character special files - data is handled character by character, as in the case of terminals or printers.
•Block special files - data is handled in blocks, as in the case of disks and tapes.

6.4 FILE ACCESS MECHANISMS
File access mechanism refers to the manner in which the records of a file may be accessed. There are several ways to access files:
•Sequential access
•Direct/Random access
•Indexed sequential access

1) Sequential access:
Sequential access is that in which the records are accessed in some sequence, i.e., the information in the file is processed in order, one record after the other. This access method is the most primitive one. Example: compilers usually access files in this fashion.


2) Direct/Random access:
•Random access file organization provides accessing the records directly.
•Each record has its own address on the file, with the help of which it can be directly accessed for reading or writing.
•The records need not be in any sequence within the file, and they need not be in adjacent locations on the storage medium.

3) Indexed sequential access:
•This mechanism is built up on the basis of sequential access.
•An index is created for each file which contains pointers to various blocks.
•The index is searched sequentially and its pointer is used to access the file directly.

6.5 SPACE ALLOCATION
Files are allocated disk spaces by the operating system. Operating systems deploy the following three main ways to allocate disk space to files:
•Contiguous Allocation
•Linked Allocation
•Indexed Allocation

1) Contiguous Allocation:
•Each file occupies a contiguous address space on disk.
•Assigned disk addresses are in linear order.
•Easy to implement.
•External fragmentation is a major issue with this type of allocation technique.

2) Linked Allocation:
•Each file carries a list of links to disk blocks.
•The directory contains a link/pointer to the first block of a file.
•No external fragmentation.
•Effectively used in a sequential access file.
•Inefficient in case of a direct access file.

3) Indexed Allocation:
•Provides solutions to problems of contiguous and linked allocation.
•An index block is created, holding all the pointers for a file.
•Each file has its own index block, which stores the addresses of the disk space occupied by the file.
•The directory contains the addresses of the index blocks of files.
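Indexed allocation, the last of the three schemes above, can be sketched as follows. All structures here are invented for illustration (a real file system would also manage a free-block bitmap, on-disk layout, etc.): the disk is a dictionary of blocks, each file owns one index block of pointers, and the directory maps names to index blocks.

```python
# Illustrative sketch of indexed allocation (structures are invented).
BLOCK_SIZE = 4
disk = {}          # block number -> contents
directory = {}     # file name -> index block number
next_free = [0]    # naive bump allocator standing in for free-space管理? no:
# naive bump allocator standing in for free-space management

def alloc_block():
    block = next_free[0]
    next_free[0] += 1
    return block

def create_file(name, data):
    """Split data into blocks, store them anywhere on disk, and record
    their addresses in the file's index block."""
    index_block = alloc_block()
    pointers = []
    for i in range(0, len(data), BLOCK_SIZE):
        block = alloc_block()
        disk[block] = data[i:i + BLOCK_SIZE]
        pointers.append(block)
    disk[index_block] = pointers
    directory[name] = index_block

def read_file(name):
    """Follow the index block's pointers; blocks can be read in any
    order, so direct access is efficient."""
    return "".join(disk[b] for b in disk[directory[name]])

create_file("notes.txt", "hello world!")
print(read_file("notes.txt"))  # hello world!
```

Because every data block is reachable through the index block, blocks need not be contiguous (no external fragmentation) and any block can be reached in one pointer lookup, unlike linked allocation.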


Structure of directory in OS:
A directory is a container that is used to contain folders and files. It organizes files and folders in a hierarchical manner.
There are several logical structures of a directory; these are given below.

1. Single-level directory:
The single-level directory is the simplest directory structure. In it, all files are contained in the same directory, which makes it easy to support and understand.
A single-level directory has a significant limitation, however, when the number of files increases or when the system has more than one user. Since all the files are in the same directory, they must have unique names. If two users call their data set "test", then the unique-name rule is violated.
Advantages:
•Since it is a single directory, its implementation is very easy.
•If the files are smaller in size, searching will become faster.
•Operations like file creation, searching, deletion and updating are very easy in such a directory structure.

Disadvantages:
•There may be a chance of name collision because two files cannot have the same name.

•Searching will become time-taking if the directory is large.
•This cannot group the same type of files together.

2. Two-level directory:
As we have seen, a single-level directory often leads to confusion of file names among different users. The solution to this problem is to create a separate directory for each user.
In the two-level directory structure, each user has their own user files directory (UFD). The UFDs have similar structures, but each lists only the files of a single user. The system's master file directory (MFD) is searched whenever a new user ID is logged in. The MFD is indexed by username or account number, and each entry points to the UFD for that user.

Advantages:
•We can give a full path like /User-name/directory-name/.
•Different users can have the same directory as well as file name.
•Searching for files becomes easier due to path names and user-grouping.

Disadvantages:
•A user is not allowed to share files with other users.
•Still it is not very scalable; two files of the same type cannot be grouped together for the same user.

3. Tree-structured directory:
Once we have seen a two-level directory as a tree of height 2, the natural generalization is to extend the directory structure to a tree of

arbitrary height. This generalization allows the user to create their own subdirectories and to organize their files accordingly.
A tree structure is the most common directory structure. The tree has a root directory, and every file in the system has a unique path.

Advantages:
•Very general, since full path names can be given.
•Very scalable; the probability of name collision is less.
•Searching becomes very easy; we can use both absolute paths as well as relative paths.

Disadvantages:
•Every file does not fit into the hierarchical model; files may need to be saved into multiple directories.
•We cannot share files.
•It is inefficient, because accessing a file may go through multiple directories.

4. Acyclic graph directory:
An acyclic graph is a graph with no cycles; it allows sharing of subdirectories and files. The same file or subdirectories may be in two different directories. It is a natural generalization of the tree-structured directory.
It is used in situations like when two programmers are working on a joint project and they need to access files. The associated files are stored

in a subdirectory, separating them from other projects and files of other programmers. Since they are working on a joint project, they want the subdirectories to be in their own directories, and the common subdirectories should be shared. So here we use acyclic directories.
It is a point to note that a shared file is not the same as a copied file. If any programmer makes some changes in the subdirectory, the changes will be reflected in both subdirectories.

Advantages:
•We can share files.
•Searching is easy due to the different paths.

Disadvantages:
•We share the files via linking; in case of deletion this may create a problem.
•If the link is a soft link, then after deleting the file we are left with a dangling pointer.
•In the case of a hard link, to delete a file we have to delete all the references associated with it.

General graph directory structure:
In the general graph directory structure, cycles are allowed within a directory structure where multiple directories can be derived from more than one parent directory. The main problem with this kind of directory structure is to calculate the total size or space that has been taken by the files and directories.

Advantages:
•It allows cycles.
•It is more flexible than other directory structures.

Disadvantages:
•It is more costly than others.
•It needs garbage collection.

File System Implementation:
A file is a collection of related information. The file system resides on secondary storage and provides efficient and convenient access to the disk by allowing data to be stored, located, and retrieved.
The file system is organized in many layers:
1. Application Programs
2. Logical file system
3. File organization module
4. Basic file system
5. I/O Control
6. Devices

•I/O Control level:
Device drivers act as an interface between devices and the OS; they help to transfer data between disk and main memory. It takes a block

number as input, and as output it gives low-level hardware-specific instructions.

•Basic file system:
It issues general commands to device drivers to read and write physical blocks on disk. It manages the memory buffers and caches: a block in the buffer can hold the contents of a disk block, and the cache stores frequently used file system metadata.

•File organization module:
It has information about files, the location of files and their logical and physical blocks. Physical blocks do not match with logical blocks numbered from 0 to N. It also has a free space manager which tracks unallocated blocks.

•Logical file system:
It manages metadata information about a file, i.e. it includes all details about a file except the actual contents of the file. It maintains this via file control blocks. The file control block (FCB) has information about a file - owner, size, permissions, location of file contents.

Advantages:
1. Duplication of code is minimized.
2. Each file system can have its own logical file system.

Disadvantages:
•If we access many files at the same time then it results in low performance.

We can implement the file system by using two types of data structures:

1. On-disk Structures:
Generally these contain information about the total number of disk blocks, free disk blocks, their locations, etc. Given below are different on-disk structures:

1) Boot Control Block:
It is usually the first block of the volume and it contains the information needed to boot an operating system. In UNIX it is called the boot block and in NTFS it is called the partition boot sector.


2) Volume Control Block:
It has information about a particular partition, e.g. free block count, block size and block pointers. In UNIX it is called the superblock and in NTFS it is stored in the master file table.

3) Directory Structure:
It stores file names and associated inode numbers. In UNIX, it includes file names and associated inode numbers; in NTFS, it is stored in the master file table.

4) Per-File FCB:
It contains details about a file, and it has a unique identifier number to allow association with a directory entry. In NTFS it is stored in the master file table.

A File Control Block (FCB) contains:
•File permissions
•File dates (create, access, write)
•File owner, group, ACL
•File size
•File data blocks or pointers to file data blocks

2. In-Memory Structures:
They are maintained in main memory and are helpful for file system management and caching. Several in-memory structures are given below:

1) Mount Table:
It contains information about each mounted volume.

2) Directory-structure cache:
This cache holds the directory information of recently accessed directories.

3) System-wide open-file table:
It contains a copy of the FCB of each open file.

4) Per-process open-file table:
It contains information about the files opened by that particular process, and it maps to the appropriate entry in the system-wide open-file table.
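The relationship between the FCB, the system-wide open-file table and the per-process open-file table can be sketched as below. All names and fields are invented for the example; a real OS keeps these structures in kernel memory, not in Python dictionaries.

```python
# Illustrative sketch of an FCB and the two open-file tables described above.
from dataclasses import dataclass, field

@dataclass
class FCB:
    owner: str
    size: int
    permissions: str              # e.g. "rw-r--r--"
    data_blocks: list = field(default_factory=list)

# System-wide open-file table: one FCB copy per open file.
system_open_files = {}            # file name -> FCB

# Per-process open-file table: file descriptor -> entry in system table.
process_open_files = {}

def open_file(fd, name, fcb):
    """Load the FCB into the system-wide table (if not already open)
    and point the process's descriptor at it."""
    if name not in system_open_files:
        system_open_files[name] = fcb   # simulated read of FCB from disk
    process_open_files[fd] = name

open_file(3, "report.txt", FCB("alice", 2048, "rw-r--r--", [17, 18]))
print(system_open_files[process_open_files[3]].size)  # 2048
```

Two processes opening the same file share one system-wide entry, which is why metadata such as the current size is seen consistently by both.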


3. Directory implementation:
1) Linear list:
It maintains a linear list of file names with pointers to the data blocks. It is simple but time-consuming to use. To create a new file, we must first search the directory to be sure that no existing file has the same name, and then add the new file at the end of the directory. To delete a file, we search the directory for the named file and release the space allocated to it. To reuse a directory entry, we can either mark the entry as unused or attach it to a list of free directory entries.
2) Hash table:
The hash table takes a value computed from the file name and returns a pointer to the file. It decreases the directory search time, and insertion and deletion of files are easy. The major difficulty is that hash tables are generally of a fixed size and the hash function depends on that size.

File System Management and Optimization:
1) Disk-space management:
Since all files are normally stored on disk, one of the main concerns of a file system is the management of disk space.
2) Block size:
The main question that arises when storing files in fixed-size blocks is the size of the block. If the block is too large, space gets wasted; if the block is too small, time gets wasted. So, to choose a correct block size, some information about the file-size distribution is required.


3) Keeping track of free blocks:
After a block size has been chosen, the next issue is how to keep track of the free blocks. Two methods are widely used:
Linked list: a linked list of disk blocks is maintained, with each block in the list holding as many free disk-block numbers as will fit.
Bitmap: a disk with n blocks requires a bitmap with n bits. Free blocks are represented by 1s and allocated blocks by 0s.


4) Disk quotas:
Multiuser operating systems often provide a mechanism for enforcing disk quotas. A system administrator assigns each user a maximum allotment of files and blocks, and the operating system makes sure that users do not exceed their quotas. Quotas are kept track of on a per-user basis in a quota table.
5) File-system backups:
If a computer's file system is irrevocably lost, whether due to hardware or software failure, restoring all the information is difficult, time-consuming, and in many cases impossible. So it is advisable always to have file-system backups.
•Backing up files is time-consuming and occupies a large amount of space, so doing it efficiently and conveniently is important. Below are a few points to be considered before creating backups:
•Is it required to back up the entire file system or only a part of it?
•Backing up files that have not changed since the previous backup wastes space; this leads to the idea of incremental dumps, in which only those files that have changed since the time of the previous backup are saved. Recovery, however, gets more complicated in such cases.
munotes.in

Page 73

73•Since thereis an immenseamount of data, it is generally desired tocompress the data before taking a backup for the same.•It is difficult to perform a backup on an active file-system since thebackup may be inconsistent.•Making backups introduces many security issuesThere aretwo ways for dumping a disk to the backup disk:•Physical dump:In thisway the dumpstarts at block 0 of the disk,writes all the disk blocks ontotheoutput disk in order and stops aftercopying the last one.•Advantages:Simplicity and great speed.•Disadvantages:inability to skip selected directories, makeincremental dumps, and restore individual files upon request•Logical dump:In this way the dump starts at one or more specifieddirectories and recursively dump all files and directories found thathave been changed since some given base date. This is the mostcommonly used way
The above figure depicts a popular algorithm used in many UNIXsystems wherein squares depict directories and circles depict files. Thisalgorithmdumps all the files and directories that have been modified andalso theones on the path to a modified file or directory. The dumpalgorithm maintains a bitmap indexed by i-node number with several bitsper i-node. Bits will be set and cleared in this map as the algorithmproceeds. Although logical dumping is straightforward, there are fewissues associated with it.•Since the free block list is not a file, it is not dumped and hence it mustbe reconstructed from scratch after all the dumps have been restored
munotes.in

Page 74

74•If a file islinked to two or more directories, it is important that the fileis restored only one time and that all the directories that are supposedto point to it do so•UNIX files may contain holes•Special files, named pipes and all other files that are not realshouldnever be dumped.6) File-system Consistency:To deal with inconsistent file systems, most computers have autility program that checks file-system consistency. For example, UNIXhas fsck and Windows has sfc. This utility can be run whenever the systemis booted. The utility programs perform two kinds of consistency checks.•Blocks:To check block consistency the program builds two tables,each one containing a counter for each block, initially set to 0. If thefile system is consistent, each blockwill have a 1 either in the firsttable or in the second table as you can see in the figure below.
In case if both the tables have 0 in it that may be because the blockis missing and hence will be reported as a missing block. The two othersituations are if a block is seen more than oncein a freelistand the samedata block is present in two or morefiles.•In addition to checking to see that each block is properly accountedfor, the file-system checker also checks the directory system. It toouses a table of counters but per file-size rather than per block. Thesecounts start at 1 when a file iscreated and are incremented each time a(hard) link is made to the file. In a consistent file system, both countswill agree7)File-system Performance:Since the access to disk is much slower than access to memory,many file systems have been designed withvarious optimizations toimprove performance as described below.
munotes.in

Page 75

758) Caching:The most common technique used to reduce disk access time is theblock cache or buffer cache. Cache can be defined as a collection of itemsof the same type stored in a hiddenor inaccessible place. The mostcommon algorithm for cache works in such a way that if a disk access isinitiated, the cache is checked first to see if the disk block is present. If yesthen the read request can be satisfied without a disk access else thediskblock is copied to cache first and then the read request is processed.
The above figure depicts how to quickly determine if a block ispresent in a cache or not. For doing so a hash table can be implementedand look up the result in a hash table.9)Block Read Ahead:Another technique to improve file-system performance is to try toget blocks into the cache before they are needed to increase the hit rate.This works only when files are read sequentially. When a file system isasked for block ‘k’ in the file it does that and then also checks beforehandif ‘k+1’ is available if not it schedules a read for the block k+1 thinkingthat it might be of use later.10)Reducing disk arm motion:Another way to increase file-system performance is byreducingthe disk-arm motion by putting blocks that are likely to be accessed insequence close to each other,preferably in the same cylinder.
munotes.in

Page 76

76In the above figure all the i-nodes are near the start of the disk, so theaverage distance between an inode and its blocks will be half the numberof cylinders, requiring long seeks. But to increase the performance theplacement of i-nodes can be modifiedas below next setting
I-nodes arelocated nearthe start of thedisk
Disk Is DividedInto cylindergroups, eachwith its own i-nodesmunotes.in

Page 77

11) Defragmenting disks:
Due to the continuous creation and removal of files, disks get badly fragmented, with files and holes all over the place. As a consequence, when a new file is created, the blocks used for it may be spread all over the disk, giving poor performance. Performance can be restored by moving files around to make them contiguous and by putting all (or at least most) of the free space in one or more large contiguous regions on the disk.

MS-DOS File System:
The MS-DOS file system is very straightforward. It is a 16-bit system based on a File Allocation Table, or FAT16 (FAT for short). The purpose of the file allocation table is to keep track of where to find files on the disk.
In MS-DOS, every DOS-based partition has a letter (A:, B:, or C:). Typically, the drive letters A: and B: are reserved for floppy drives, and you will most frequently find that the C: drive is the bootable partition. Each drive has a root directory ('\'), so the root directory on a given drive looks like this: C:\
Changing drives is as simple as typing the name of the drive letter:
A:> C:
C:>
MS-DOS then stores files on the system in any arrangement you choose. You can create directories and store files within those directories. A typical file path might look like this:
C:\ms-dos\dir\filename.txt
Limitations:
FAT16 file systems are compatible with all Microsoft operating systems, but they have severe limitations. First, all file names on the system are limited to eight characters plus a three-letter extension. Second, the MS-DOS file system is limited to approximately 2.1 gigabytes, owing to the fact that the MS-DOS operating system does not recognize 'Int 13'-based commands and therefore cannot issue commands to access the remainder of larger disks.
Keep in mind that MS-DOS is a legacy system kept around for doing command-line based work in Windows.

Unix File System:
The Unix file system is a logical method of organizing and storing large amounts of information in a way that makes it easy to manage.
A file is the smallest unit in which information is stored. The Unix file system has several important features. All data in Unix is organized into files, and all files


are organized into directories. These directories are organized into a tree-like structure called the file system. Files in the Unix system are thus organized in a multi-level hierarchy known as a directory tree. At the very top of the file system is a directory called “root”, which is represented by a “/”. All other files are “descendants” of root.
Directories or files and their description:
•/ : The slash character alone denotes the root of the filesystem tree.
•/bin : Stands for “binaries”; contains certain fundamental utilities, such as ls or cp, which are generally needed by all users.
•/boot : Contains all the files required for a successful booting process.
•/dev : Stands for “devices”; contains file representations of peripheral devices and pseudo-devices.
•/etc : Contains system-wide configuration files and system databases. Originally it also contained “dangerous maintenance utilities” such as init, but these have typically been moved to /sbin or elsewhere.
•/home : Contains the home directories of the users.
•/lib : Contains system libraries and some critical files such as kernel modules or device drivers.
•/media : Default mount point for removable devices, such as USB sticks, media players, etc.
•/mnt : Stands for “mount”; contains filesystem mount points. These are used, for example, if the system uses multiple hard disks or hard disk partitions. It is also often used for remote (network) filesystems, CD-ROM/DVD drives, and so on.


•/proc : The procfs virtual filesystem, showing information about processes as files.
•/root : The home directory of the superuser “root”, that is, the system administrator. This account's home directory is usually on the initial filesystem, and hence not in /home (which may be a mount point for another filesystem), in case specific maintenance needs to be performed during which other filesystems are not available. Such a case could occur, for example, if a hard disk drive suffers physical failures and cannot be properly mounted.
•/tmp : A place for temporary files. Many systems clear this directory upon startup; it might have tmpfs mounted atop it, in which case its contents do not survive a reboot, or it might be explicitly cleared by a startup script at boot time.
•/usr : Originally the directory holding user home directories, its use has changed. It now holds executables, libraries, and shared resources that are not system critical, like the X Window System, KDE, Perl, etc. However, on some Unix systems some user accounts may still have a home directory that is a direct subdirectory of /usr, such as the default in Minix. (On modern systems these accounts are often related to server or system use and are not directly used by a person.)
•/usr/bin : Stores all binary programs distributed with the operating system that do not reside in /bin, /sbin, or (rarely) /etc.
•/usr/include : Stores the development headers used throughout the system. Header files are mostly used by the #include directive in the C/C++ programming languages.
•/usr/lib : Stores the required libraries and data files for programs stored within /usr or elsewhere.
•/var : Short for “variable”; a place for files that may change often, especially in size, for example e-mail sent to users on the system, or process-ID lock files.
•/var/log : Contains system log files.
•/var/mail : The place where all incoming mail is stored. Users (other than root) can access only their own mail.
Often, this directory is a symbolic link to /var/spool/mail.
•/var/spool : Spool directory; contains print jobs, mail spools, and other queued tasks.
•/var/tmp : A place for temporary files which should be preserved between system reboots.


Types of Unix files:
The UNIX file system contains several different types of files:
1. Ordinary files
2. Directories
3. Special files
4. Pipes
5. Sockets
6. Symbolic links

1. Ordinary files:
An ordinary file is a file on the system that contains data, text, or program instructions.
•Used to store your information, such as some text you have written or an image you have drawn. This is the type of file that you usually work with.
•Always located within/under a directory file.
•Does not contain other files.
•In the long-format output of ls -l, this type of file is indicated by the “-” symbol.

2. Directories:
Directories store both special and ordinary files. For users familiar with Windows or Mac OS, UNIX directories are equivalent to folders. A directory file contains an entry for every file and subdirectory that it houses. If you have 10 files in a directory, there will be 10 entries in the directory. Each entry has two components:
(1) the file name, and
(2) a unique identification number for the file or directory (called the inode number).
•Branching points in the hierarchical tree.
•Used to organize groups of files.
•May contain ordinary files, special files, or other directories.
•Never contain “real” information which you would work with (such as text); they are used only for organizing files.
•All files are descendants of the root directory (named /) located at the top of the tree.
In the long-format output of ls -l, this type of file is indicated by the “d” symbol.

3. Special files:
Used to represent a real physical device such as a printer, tape drive, or terminal, and used for input/output (I/O) operations. Device or


special files are used for device input/output (I/O) on UNIX and Linux systems. They appear in the file system just like an ordinary file or a directory.
On UNIX systems there are two flavors of special files for each device: character special files and block special files.
•When a character special file is used for device I/O, data is transferred one character at a time. This type of access is called raw device access.
•When a block special file is used for device I/O, data is transferred in large fixed-size blocks. This type of access is called block device access.
For terminal devices, it is one character at a time; for disk devices, raw access means reading or writing whole chunks of data (blocks), which are native to the disk.
•In the long-format output of ls -l, character special files are marked by the “c” symbol.
•In the long-format output of ls -l, block special files are marked by the “b” symbol.

4. Pipes:
UNIX allows you to link commands together using a pipe. The pipe acts as a temporary file which exists only to hold data from one command until it is read by another. A Unix pipe provides a one-way flow of data: the output of the first command is used as the input of the second command. To make a pipe, put a vertical bar (|) on the command line between two commands. For example:
who | wc -l

What is CDFS (Compact Disc File System)?:
Introduction:
CDFS stands for Compact Disc File System. Before the era of CDFS there was no medium on which people could store files for the long term. Storing data and information was a major problem, because at that time the world needed a system that could store multiple files in a compressed format. The revolution in technology changed this, and new, advanced things started coming to the market. CDFS came into the picture on 21 August 1999, and at that time it was considered the most advanced technology in the industry.
There were many features offered by CDFS that came into the limelight immediately:
1. It is a file system for read-only and write-once CD-ROMs.
2. It exports all tracks and boot images on a CD as normal files.


3. CDFS provides a wide range of services, which include the creation, replacement, renaming, and deletion of files on write-once media.
4. It uses a VCACHE driver to control the CD-ROM disc cache, allowing for smoother playback.
5. It includes several disc properties like volume attributes, file attributes, and file placement.
History:
CDFS was developed by Simson Garfinkel and J. Spencer Love at the MIT Media Lab between 1985 and 1986. It grew out of a write-once CD-ROM simulator and was designed to store data and information on read-only and write-once media. A great setback for CDFS was that it was never sold; the file system's source code was published on the Internet.
Disk images can be saved using the CDFS standard, which may be used to burn ISO 9660 discs. ISO 9660, also referred to as CDFS by some hardware and software providers, is a file system published by ISO (the International Organization for Standardization) for optical disc media.
Applications:
A file system is a systematic, organized way in which files are arranged on a hard disk. The file system is initiated when a user opens a hard disk to access files. Here are some applications of the Compact Disc File System:
1. CDFS creates a way in which the system first sets up the root directory and then automatically creates all the subsequent folders for it.
2. The system also provides a wide range of services for all users. You can create new files or folders which are added to the main root file, or “file tree”, of the system.
3. There was also the problem of transferring data or files from CDs to a laptop or computer. CDFS offers a good solution to this problem: it is useful for burning discs that can be exchanged between different devices.
4. CDFS is not specific to a single operating system; a disc burned on a Macintosh using CDFS can be read on a Windows or Linux based computer.
5. It can operate across numerous operating systems.
This means that if a user starts shifting files from a Macintosh using the Compact Disc File System, he can also work with those files under the Windows operating system.
6. Disc images are also saved using the proper system standard; such files have the typical .ISO name extension.
Types:
There are different versions of the Compact Disc File System:
1. Clustered file system (can be global or grid)
2. Flash file system


3. Object file system
4. Semantic file system
5. Steganographic file system
6. Versioning file system
7. Synthetic file system

6.6 SUMMARY
•A file is a collection of correlated information which is recorded on secondary or non-volatile storage like magnetic disks, optical disks, and tapes.
•The file system provides I/O support for a variety of storage device types.
•Files are stored on disk or other storage and do not disappear when a user logs off.
•A file structure needs to be in a predefined format such that the operating system understands it.
•File type refers to the ability of the operating system to differentiate different types of files, such as text files, binary files, and source files.
•Create: find space on disk and make an entry in the directory.
•The indexed sequential access method is based on simple sequential access.
•In the sequential access method, records are accessed in a certain predefined sequence.
•The random access method is also called direct random access.
•Three types of space allocation methods are: linked allocation, indexed allocation, and contiguous allocation.
•Information about files is maintained by directories.

6.7 UNIT END QUESTIONS
1. Explain the file system.
2. State the objectives of a file management system.
3. Discuss the properties of a file system.
4. Explain file structure.
5. Discuss file attributes.
6. Explain file type.
7. List the functions of a file.
8. State and explain commonly used terms in file systems.
9. Explain different file access methods.
10. Explain space allocation.
11. Discuss file directories.
12. Explain file types - name, extension.
*****


UNIT III
7
PRINCIPLES OF I/O HARDWARE AND SOFTWARE
Unit Structure
7.0 Objectives
7.1 Introduction
7.2 Principles of I/O software
7.3 I/O software layers
7.4 Summary
7.5 Unit End Questions

7.0 OBJECTIVES
•To understand the principles of I/O hardware
•To learn the principles of I/O software
•To learn the different I/O software layers

In addition to providing abstractions such as processes, address spaces, and files, an operating system also controls all the computer's I/O (input/output) devices. It must issue commands to the devices, catch interrupts, and handle errors. It should also provide an interface between the devices and the rest of the system that is simple and easy to use. To the extent possible, the interface should be the same for all devices (device independence). The I/O code represents a significant fraction of the total operating system. How the operating system manages I/O is the subject of this chapter.
This chapter is organized as follows. We will look first at some of the principles of I/O hardware and then at I/O software in general. I/O software can be structured in layers, with each layer having a well-defined task. We will look at these layers to see what they do and how they fit together. Next, we will look at several I/O devices in detail: disks, clocks, keyboards, and displays. For each device we will look at its hardware and software. Finally, we will consider power management.

7.1 INTRODUCTION
Different people look at I/O hardware in different ways. In this book we are concerned with programming I/O devices, not designing,


building, or maintaining them, so our interest is in how the hardware is programmed, not how it works inside.

7.1.1 I/O Devices:
I/O devices can be roughly divided into two categories: block devices and character devices. A block device is one that stores information in fixed-size blocks, each one with its own address. Common block sizes range from 512 to 65,536 bytes. All transfers are in units of one or more entire (consecutive) blocks. The essential property of a block device is that it is possible to read or write each block independently of all the other ones. Hard disks, Blu-ray discs, and USB sticks are common block devices.
The other type of I/O device is the character device. A character device delivers or accepts a stream of characters, without regard to any block structure. It is not addressable and does not have any seek operation. Printers, network interfaces, mice (for pointing), rats (for psychology lab experiments), and most other devices that are not disk-like can be seen as character devices.
I/O devices cover a huge range in speeds, which puts considerable pressure on the software to perform well over many orders of magnitude in data rates. Figure 7.1 shows the data rates of some common devices.
Figure 7.1 Data rates of some common devices

7.1.2 Device Controllers:
I/O units often consist of a mechanical component and an electronic component. It is possible to separate the two portions to provide a more modular and general design. The electronic component is called


the device controller or adapter. On personal computers, it often takes the form of a chip on the parentboard or a printed circuit card.

7.1.3 Memory-Mapped I/O:
Each controller has a few registers that are used for communicating with the CPU. By writing into these registers, the operating system can command the device to deliver data, accept data, switch itself on or off, or otherwise perform some action. By reading from these registers, the operating system can learn what the device's state is, whether it is prepared to accept a new command, and so on.
Each control register is assigned a unique memory address to which no memory is assigned. This system is called memory-mapped I/O. In most systems, the assigned addresses are at or near the top of the address space.

7.1.4 Direct Memory Access:
No matter whether a CPU does or does not have memory-mapped I/O, it needs to address the device controllers to exchange data with them. The CPU can request data from an I/O controller one byte at a time, but doing so wastes the CPU's time, so a different scheme, called DMA (Direct Memory Access), is often used. To simplify the explanation, we assume that the CPU accesses all devices and memory via a single system bus that connects the CPU, the memory, and the I/O devices. We already know that the real organization in modern systems is more complicated, but all the principles are the same. The operating system can use DMA only if the hardware has a DMA controller, which most systems do.
Sometimes this controller is integrated into disk controllers and other controllers, but such a design requires a separate DMA controller for each device. More commonly, a single DMA controller is available (e.g., on the parentboard) for regulating transfers to multiple devices, often concurrently.
Fig 7.2 DMA controller
munotes.in

Page 87

Some DMA controllers can operate in either of two modes. In word-at-a-time mode, the DMA controller requests the transfer of one word and gets it. If the CPU also wants the bus, it has to wait. The mechanism is called cycle stealing because the device controller sneaks in and steals an occasional bus cycle from the CPU, delaying it slightly.

In block mode, the DMA controller tells the device to acquire the bus, issue a series of transfers, then release the bus. This form of operation is called burst mode. It is more efficient than cycle stealing because acquiring the bus takes time and multiple words can be transferred for the price of one bus acquisition. The downside to burst mode is that it can block the CPU and other devices for a substantial period if a long burst is being transferred.

7.1.5 Interrupts Revisited:

In a typical personal computer system, the interrupt structure is as shown in Fig. 7.3. At the hardware level, interrupts work as follows. When an I/O device has finished the work given to it, it causes an interrupt (assuming that interrupts have been enabled by the operating system). It does this by asserting a signal on a bus line that it has been assigned. This signal is detected by the interrupt controller chip on the parentboard, which then decides what to do.
Fig 7.3 Interrupts

If no other interrupts are pending, the interrupt controller handles the interrupt immediately. However, if another interrupt is in progress, or another device has made a simultaneous request on a higher-priority interrupt request line on the bus, the device is just ignored for the moment. In this case it continues to assert an interrupt signal on the bus until it is serviced by the CPU. To handle the interrupt, the controller puts a number on the address lines specifying which device wants attention and asserts a signal to interrupt the CPU. The interrupt signal causes the CPU to stop
what it is doing and start doing something else. The number on the address lines is used as an index into a table called the interrupt vector to fetch a new program counter. This program counter points to the start of the corresponding interrupt-service procedure.

Precise Interrupts:

An interrupt that leaves the machine in a well-defined state is called a precise interrupt. Such an interrupt has four properties:

1. The PC (Program Counter) is saved in a known place.
2. All instructions before the one pointed to by the PC have completed.
3. No instruction beyond the one pointed to by the PC has finished.
4. The execution state of the instruction pointed to by the PC is known.

7.2 PRINCIPLES OF I/O SOFTWARE

First we will look at the goals of I/O software and then at the different ways I/O can be done from the point of view of the operating system.

7.2.1 Goals of the I/O Software:

A key concept in the design of I/O software is known as device independence. What it means is that we should be able to write programs that can access any I/O device without having to specify the device in advance. For example, a program that reads a file as input should be able to read a file on a hard disk, a DVD, or a USB stick without having to be modified for each different device. It is up to the operating system to take care of the problems caused by the fact that these devices really are different and require very different command sequences to read or write.

Another important issue for I/O software is error handling. In general, errors should be handled as close to the hardware as possible. If the controller discovers a read error, it should try to correct the error itself if it can.

Still another important issue is that of synchronous (blocking) vs. asynchronous (interrupt-driven) transfers. Most physical I/O is asynchronous: the CPU starts the transfer and goes off to do something else until the interrupt arrives.
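Both device independence and blocking are visible in a POSIX-style sketch like the one below: the same read call works whether the path names a disk file, a pipe, or a device node, and the call simply blocks until the data are in the buffer, hiding the interrupt-driven machinery underneath. (The helper function name is our own, not part of any standard API.)

```c
#include <fcntl.h>      /* open, O_RDONLY */
#include <unistd.h>     /* read, close, ssize_t */

/* Read up to n bytes from any file or device named by path.
   The kernel routes the same read() call to the right driver;
   the caller never specifies what kind of device it is. */
ssize_t read_some(const char *path, char *buf, size_t n)
{
    int fd = open(path, O_RDONLY);
    if (fd < 0)
        return -1;
    ssize_t got = read(fd, buf, n);   /* blocks until data are available */
    close(fd);
    return got;
}
```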
User programs are much easier to write if the I/O operations are blocking: after a read system call the program is automatically suspended until the data are available in the buffer. It is up


to the operating system to make operations that are actually interrupt-driven look blocking to the user programs.

The final concept that we will mention here is sharable vs. dedicated devices. Some I/O devices, such as disks, can be used by many users at the same time. No problems are caused by multiple users having open files on the same disk at the same time. Other devices, such as printers, have to be dedicated to a single user until that user is finished. Then another user can have the printer.

7.2.2 Programmed I/O:

The simplest form of I/O is to have the CPU do all the work. This method is called programmed I/O.

It is simplest to illustrate how programmed I/O works by means of an example. Consider a user process that wants to print the eight-character string ‘‘ABCDEFGH’’ on the printer via a serial interface. Displays on small embedded systems sometimes work this way. The software first assembles the string in a buffer in user space, as shown in Fig. 7.4
Fig 7.4 Programmed I/O

The user process then acquires the printer for writing by making a system call to open it. If the printer is currently in use by another process, this call will fail and return an error code or will block until the printer is available, depending on the operating system and the parameters of the call. Once it has the printer, the user process makes a system call telling the operating system to print the string on the printer.

The operating system then (usually) copies the buffer with the string to an array, say, p, in kernel space, where it is more easily accessed (because the kernel may have to change the memory map to get at user space). It then checks to see if the printer is currently available. If not, it
waits until it is. As soon as the printer is available, the operating system copies the first character to the printer’s data register, in this example using memory-mapped I/O. This action activates the printer. The character may not appear yet because some printers buffer a line or a page before printing anything. In Fig. 7.4 (b), however, we see that the first character has been printed and that the system has marked the ‘‘B’’ as the next character to be printed.

As soon as it has copied the first character to the printer, the operating system checks to see if the printer is ready to accept another one. Generally, the printer has a second register, which gives its status. The act of writing to the data register causes the status to become not ready. When the printer controller has processed the current character, it indicates its availability by setting some bit in its status register or putting some value in it. At this point the operating system waits for the printer to become ready again. When that happens, it prints the next character, as shown in Fig. 7.4 (c). This loop continues until the entire string has been printed. Then control returns to the user process.

7.2.3 Interrupt-Driven I/O:

Now let us consider the case of printing on a printer that does not buffer characters but prints each one as it arrives. If the printer can print, say, 100 characters/sec, each character takes 10 msec to print. This means that after every character is written to the printer’s data register, the CPU will sit in an idle loop for 10 msec waiting to be allowed to output the next character. This is more than enough time to do a context switch and run some other process for the 10 msec that would otherwise be wasted. The way to allow the CPU to do something else while waiting for the printer to become ready is to use interrupts.
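The programmed-I/O copy loop of Section 7.2.2 can be sketched as follows. The register names are simulated variables standing in for the printer's memory-mapped data and status registers; in this simulation the status never goes not-ready, so the busy-wait never actually spins.

```c
/* Simulated printer registers. On real hardware the status register
   would be cleared by the controller while a character is printing,
   and the data-register write would go to a memory-mapped address. */
#define PRINTER_READY 1

static volatile int printer_status_reg = PRINTER_READY;
static char         printed[64];   /* stands in for the printed page */
static int          printed_len;

/* The kernel's programmed-I/O loop: poll the status register, then
   output one character at a time through the data register. */
void print_string(const char *p, int count)
{
    for (int i = 0; i < count; i++) {
        while (printer_status_reg != PRINTER_READY)
            ;                          /* busy-wait: CPU does no useful work */
        printed[printed_len++] = p[i]; /* stands for *printer_data_reg = p[i] */
    }
}
```

The essential feature is that after outputting a character, the CPU continuously polls the device; this is why programmed I/O ties up the CPU until all the I/O is done.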
When the system call to print the string is made, the buffer is copied to kernel space, as we showed earlier, and the first character is copied to the printer as soon as it is willing to accept a character. At that point the CPU calls the scheduler and some other process is run. The process that asked for the string to be printed is blocked until the entire string has printed.

7.2.4 I/O Using DMA:

An obvious disadvantage of interrupt-driven I/O is that an interrupt occurs on every character. Interrupts take time, so this scheme wastes a certain amount of CPU time. A solution is to use DMA. Here the idea is to let the DMA controller feed the characters to the printer one at a time, without the CPU being bothered. In essence, DMA is programmed I/O, only with the DMA controller doing all the work, instead of the main CPU. This strategy requires special hardware (the DMA controller) but frees up the CPU during the I/O to do other work.
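With DMA, the driver's only job is to program the controller: tell it where the data are, how many bytes to move, and start it. The register layout below is hypothetical, and the "hardware" copy happens inline for the sake of the sketch; a real controller would perform it in the background and interrupt the CPU when the count reaches zero.

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical DMA controller registers, simulated as a struct. A real
   controller exposes these at fixed memory-mapped addresses. */
struct dma_ctrl {
    const uint8_t *addr;    /* memory address register: where the data live */
    uint32_t       count;   /* byte count register: how many bytes to move */
};

static struct dma_ctrl dma;
static uint8_t device_buf[64];   /* stands in for the device's own buffer */

/* Program the controller and start the transfer. The CPU is free as
   soon as the registers are written; the copy is the controller's job. */
void dma_print(const uint8_t *src, uint32_t n)
{
    dma.addr  = src;
    dma.count = n;
    memcpy(device_buf, dma.addr, dma.count); /* done by the controller, not the CPU */
}
```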


The big win with DMA is reducing the number of interrupts from one per character to one per buffer printed. If there are many characters and interrupts are slow, this can be a major improvement. On the other hand, the DMA controller is usually much slower than the main CPU. If the DMA controller is not capable of driving the device at full speed, or the CPU usually has nothing to do anyway while waiting for the DMA interrupt, then interrupt-driven I/O or even programmed I/O may be better.

7.3 I/O SOFTWARE LAYERS

I/O software is typically organized in four layers, as shown in Fig. 7.5. Each layer has a well-defined function to perform and a well-defined interface to the adjacent layers.
Figure 7.5 I/O software layers

7.3.1 Interrupt Handlers:

When the interrupt happens, the interrupt procedure does whatever it has to in order to handle the interrupt. Then it can unblock the driver that was waiting for it. In some cases it will just do an up on a semaphore. In others it will do a signal on a condition variable in a monitor. In still others, it will send a message to the blocked driver. In all cases the net effect of the interrupt will be that a driver that was previously blocked will now be able to run. This model works best if drivers are structured as kernel processes, with their own states, stacks, and program counters.

Of course, reality is not quite so simple. Processing an interrupt is not just a matter of taking the interrupt, doing an up on some semaphore, and then executing an IRET instruction to return from the interrupt to the previous process. There is a great deal more work involved for the operating system. We will now give an outline of this work as a series of steps that must be performed in software after the hardware interrupt has completed. It should be noted that the details are highly system dependent, so some of the steps listed below may not be needed on a particular machine, and
steps not listed may be required. Also, the steps that do occur may be in a different order on some machines.

1. Save any registers (including the PSW) that have not already been saved by the interrupt hardware.
2. Set up a context for the interrupt-service procedure. Doing this may involve setting up the TLB, MMU, and a page table.
3. Set up a stack for the interrupt-service procedure.
4. Acknowledge the interrupt controller. If there is no centralized interrupt controller, reenable interrupts.
5. Copy the registers from where they were saved (possibly some stack) to the process table.
6. Run the interrupt-service procedure. It will extract information from the interrupting device controller’s registers.
7. Choose which process to run next. If the interrupt has caused some high-priority process that was blocked to become ready, it may be chosen to run now.
8. Set up the MMU context for the process to run next. Some TLB setup may also be needed.
9. Load the new process’ registers, including its PSW.
10. Start running the new process.

7.3.2 Device Drivers:

Earlier in this chapter we looked at what device controllers do. We saw that each controller has some device registers used to give it commands or some device registers used to read out its status or both. The number of device registers and the nature of the commands vary radically from device to device. For example, a mouse driver has to accept information from the mouse telling it how far it has moved and which buttons are currently depressed. In contrast, a disk driver may have to know all about sectors, tracks, cylinders, heads, arm motion, motor drives, head settling times, and all the other mechanics of making the disk work properly. Obviously, these drivers will be very different.

Consequently, each I/O device attached to a computer needs some device-specific code for controlling it. This code, called the device driver, is generally written by the device’s manufacturer and delivered along with the device.
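Despite these radical differences, operating systems typically present every driver to the rest of the kernel through one common table of operations, so that only the driver itself knows about the hardware. The sketch below illustrates the idea; the structure and function names are our own, not any real kernel's API.

```c
#include <stddef.h>

/* A uniform driver interface: the kernel calls every driver through
   the same table of function pointers. (Names are illustrative.) */
struct dev_ops {
    int (*open) (void);
    int (*read) (char *buf, size_t n);
    int (*close)(void);
};

/* A trivial "null" driver: opens always succeed, reads return 0 bytes. */
static int null_open(void)                { return 0; }
static int null_read(char *buf, size_t n) { (void)buf; (void)n; return 0; }
static int null_close(void)               { return 0; }

static const struct dev_ops null_dev = { null_open, null_read, null_close };

/* Device-independent layer: the identical code path works for any
   driver that fills in the ops table. */
int dev_read(const struct dev_ops *dev, char *buf, size_t n)
{
    if (dev->open() != 0)
        return -1;
    int got = dev->read(buf, n);
    dev->close();
    return got;
}
```

This is essentially how Unix-like kernels accommodate drivers as different as a mouse and a disk behind the same open/read/close calls.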
Since each operating system needs its own drivers, device manufacturers commonly supply drivers for several popular operating systems.

Each device driver normally handles one device type, or at most one class of closely related devices. For example, a SCSI disk driver can usually handle multiple SCSI disks of different sizes and different speeds, and perhaps a SCSI Blu-ray disk as well. On the other hand, a mouse and joystick are so different that different drivers are usually required.


However, there is no technical restriction on having one device driver control multiple unrelated devices. It is just not a good idea in most cases. Sometimes, though, wildly different devices are based on the same underlying technology. The best-known example is probably USB, a serial bus technology that is not called ‘‘universal’’ for nothing. USB devices include disks, memory sticks, cameras, mice, keyboards, mini-fans, wireless network cards, robots, credit card readers, rechargeable shavers, paper shredders, barcode scanners, disco balls, and portable thermometers. They all use USB and yet they all do very different things. The trick is that USB drivers are typically stacked, like a TCP/IP stack in networks.

In order to access the device’s hardware, actually meaning the controller’s registers, the device driver normally has to be part of the operating system kernel, at least with current architectures. Actually, it is possible to construct drivers that run in user space, with system calls for reading and writing the device registers. This design isolates the kernel from the drivers and the drivers from each other, eliminating a major source of system crashes: buggy drivers that interfere with the kernel in one way or another. For building highly reliable systems, this is definitely the way to go.

7.3.3 Device-Independent I/O Software:

Although some of the I/O software is device specific, other parts of it are device independent. The exact boundary between the drivers and the device-independent software is system (and device) dependent, because some functions that could be done in a device-independent way may actually be done in the drivers, for efficiency or other reasons. The functions shown in Fig.
7.6 are typically done in the device-independent software.

•Uniform interfacing for device drivers
•Buffering
•Error reporting
•Allocating and releasing dedicated devices
•Providing a device-independent block size

Fig 7.6 Device-independent I/O software functions

The basic function of the device-independent software is to perform the I/O functions that are common to all devices and to provide a uniform interface to the user-level software. We will now look at the above issues in more detail.

Uniform Interfacing for Device Drivers:

A major issue in an operating system is how to make all I/O devices and drivers look more or less the same. If disks, printers, keyboards, and


so on, are all interfaced in different ways, every time a new device comes along, the operating system must be modified for the new device. Having to hack on the operating system for each new device is not a good idea.

Buffering:

Buffering is also an issue, both for block and character devices, for a variety of reasons. To see one of them, consider a process that wants to read data from an ADSL (Asymmetric Digital Subscriber Line) modem, something many people use at home to connect to the Internet. One possible strategy for dealing with the incoming characters is to have the user process do a read system call and block waiting for one character. Each arriving character causes an interrupt. The interrupt-service procedure hands the character to the user process and unblocks it. After putting the character somewhere, the process reads another character and blocks again.

Error Reporting:

Errors are far more common in the context of I/O than in other contexts. When they occur, the operating system must handle them as best it can. Many errors are device specific and must be handled by the appropriate driver, but the framework for error handling is device independent.

One class of I/O errors is programming errors. These occur when a process asks for something impossible, such as writing to an input device (keyboard, scanner, mouse, etc.) or reading from an output device (printer, plotter, etc.). Other errors are providing an invalid buffer address or other parameter, specifying an invalid device (e.g., disk 3 when the system has only two disks), and so on. The action to take on these errors is straightforward: just report back an error code to the caller. Another class of errors is the class of actual I/O errors, for example, trying to write a disk block that has been damaged or trying to read from a camcorder that has been switched off. In these circumstances, it is up to the driver to determine what to do.
If the driver does not know what to do, it may pass the problem back up to device-independent software. What this software does depends on the environment and the nature of the error. If it is a simple read error and there is an interactive user available, it may display a dialog box asking the user what to do. The options may include retrying a certain number of times, ignoring the error, or killing the calling process. If there is no user available, probably the only real option is to have the system call fail with an error code.

Allocating and Releasing Dedicated Devices:

Some devices, such as printers, can be used only by a single process at any given moment. It is up to the operating system to examine requests for device usage and accept or reject them, depending on whether


the requested device is available or not. A simple way to handle these requests is to require processes to perform opens on the special files for devices directly. If the device is unavailable, the open fails. Closing such a dedicated device then releases it.

An alternative approach is to have special mechanisms for requesting and releasing dedicated devices. An attempt to acquire a device that is not available blocks the caller instead of failing. Blocked processes are put on a queue. Sooner or later, the requested device becomes available and the first process on the queue is allowed to acquire it and continue execution.

Device-Independent Block Size:

Different disks may have different sector sizes. It is up to the device-independent software to hide this fact and provide a uniform block size to higher layers, for example, by treating several sectors as a single logical block. In this way, the higher layers deal only with abstract devices that all use the same logical block size, independent of the physical sector size. Similarly, some character devices deliver their data one byte at a time (e.g., mice), while others deliver theirs in larger units (e.g., Ethernet interfaces). These differences may also be hidden.

7.3.4 User-Space I/O Software:

Although most of the I/O software is within the operating system, a small portion of it consists of libraries linked together with user programs, and even whole programs running outside the kernel. System calls, including the I/O system calls, are normally made by library procedures. When a C program contains the call

count = write(fd, buffer, nbytes);

the library procedure write might be linked with the program and contained in the binary program present in memory at run time. In other systems, libraries can be loaded during program execution.
Either way, the collection of all these library procedures is clearly part of the I/O system. While these procedures do little more than put their parameters in the appropriate place for the system call, other I/O procedures actually do real work. In particular, formatting of input and output is done by library procedures. One example from C is printf, which takes a format string and possibly some variables as input, builds an ASCII string, and then calls write to output the string. As an example of printf, consider the statement

printf("The square of %3d is %6d\n", i, i*i);
It formats a string consisting of the 14-character string ‘‘The square of ’’ followed by the value i as a 3-character string, then the 4-character string ‘‘ is ’’, then i*i as 6 characters, and finally a line feed.

An example of a similar procedure for input is scanf, which reads input and stores it into variables described in a format string using the same syntax as printf. The standard I/O library contains a number of procedures that involve I/O and all run as part of user programs.

Not all user-level I/O software consists of library procedures. Another important category is the spooling system. Spooling is a way of dealing with dedicated I/O devices in a multiprogramming system. Consider a typical spooled device: a printer. Although it would be technically easy to let any user process open the character special file for the printer, suppose a process opened it and then did nothing for hours. No other process could print anything.

Instead what is done is to create a special process, called a daemon, and a special directory, called a spooling directory. To print a file, a process first generates the entire file to be printed and puts it in the spooling directory. It is up to the daemon, which is the only process having permission to use the printer's special file, to print the files in the directory. By protecting the special file against direct use by users, the problem of having someone keeping it open unnecessarily long is eliminated. Spooling is used not only for printers. It is also used in other I/O situations.

7.4 SUMMARY

Different people look at I/O hardware in different ways. In this book we are concerned with programming I/O devices, not designing, building, or maintaining them, so our interest is in how the hardware is programmed, not how it works inside. Different disks may have different sector sizes.
It is up to the device-independent software to hide this fact and provide a uniform block size to higher layers, for example, by treating several sectors as a single logical block.

7.5 UNIT END QUESTIONS

1) What is the use of Boot Block?
2) What is Sector Sparing?
3) Explain Direct Memory Access.
4) Explain Programmed I/O in detail.
5) Explain Device-Independent I/O Software.

*****
8

I/O DEVICES

Unit Structure

8.0 Objectives
8.1 Introduction
8.2 Clocks
8.3 User Interface
8.4 Thin Clients
8.5 Power Management
8.6 Summary
8.7 Unit End Questions

8.0 OBJECTIVES

•To understand the disk concept
•To learn the clocks concept
•To learn different user interfaces
•To understand thin clients and power management

In addition to providing abstractions such as processes, address spaces, and files, an operating system also controls all the computer's I/O (Input/Output) devices. It must issue commands to the devices, catch interrupts, and handle errors. It should also provide an interface between the devices and the rest of the system that is simple and easy to use. To the extent possible, the interface should be the same for all devices (device independence). The I/O code represents a significant fraction of the total operating system. How the operating system manages I/O is the subject of this chapter.

This chapter is organized as follows. We will look first at some of the principles of I/O hardware and then at I/O software in general. I/O software can be structured in layers, with each having a well-defined task. We will look at these layers to see what they do and how they fit together. Next, we will look at several I/O devices in detail: disks, clocks, keyboards, and displays. For each device we will look at its hardware and software. Finally, we will consider power management.

8.1 INTRODUCTION

We will begin with disks, which are conceptually simple, yet very important. After that we will examine clocks, keyboards, and displays.
8.1.1 Disk Hardware:

Disks come in a variety of types. The most common ones are the magnetic hard disks. They are characterized by the fact that reads and writes are equally fast, which makes them suitable as secondary memory (paging, file systems, etc.). Arrays of these disks are sometimes used to provide highly reliable storage. For distribution of programs, data, and movies, optical disks (DVDs and Blu-ray) are also important. Finally, solid-state disks are increasingly popular as they are fast and do not contain moving parts. In the following sections we will discuss magnetic disks as an example of the hardware and then describe the software for disk devices in general.

Magnetic Disks:

Magnetic disks are organized into cylinders, each one containing as many tracks as there are heads stacked vertically. The tracks are divided into sectors, with the number of sectors around the circumference typically being 8 to 32 on floppy disks, and up to several hundred on hard disks. The number of heads varies from 1 to about 16.

Older disks have little electronics and just deliver a simple serial bit stream. On these disks, the controller does most of the work. On other disks, in particular, IDE (Integrated Drive Electronics) and SATA (Serial ATA) disks, the disk drive itself contains a microcontroller that does considerable work and allows the real controller to issue a set of higher-level commands. The controller often does track caching, bad-block remapping, and much more.

A device feature that has important implications for the disk driver is the possibility of a controller doing seeks on two or more drives at the same time. These are known as overlapped seeks. While the controller and software are waiting for a seek to complete on one drive, the controller can initiate a seek on another drive.
Many controllers can also read or write on one drive while seeking on one or more other drives, but a floppy disk controller cannot read or write on two drives at the same time. (Reading or writing requires the controller to move bits on a microsecond time scale, so one transfer uses up most of its computing power.) The situation is different for hard disks with integrated controllers, and in a system with more than one of these hard drives they can operate simultaneously, at least to the extent of transferring between the disk and the controller's buffer memory. Only one transfer between the controller and the main memory is possible at once, however. The ability to perform two or more operations at the same time can reduce the average access time considerably.

Figure 8.1 compares parameters of the standard storage medium for the original IBM PC with parameters of a disk made three decades later to show how much disks changed in that time. It is interesting to note
that not all parameters have improved as much. Average seek time is almost 9 times better than it was, transfer rate is 16,000 times better, while capacity is up by a factor of 800,000. This pattern has to do with relatively gradual improvements in the moving parts, but much higher bit densities on the recording surfaces.
Fig 8.1 compares parameters

One thing to be aware of in looking at the specifications of modern hard disks is that the geometry specified, and used by the driver software, is almost always different from the physical format. On old disks, the number of sectors per track was the same for all cylinders. Modern disks are divided into zones with more sectors on the outer zones than the inner ones.

RAID:

CPU performance has been increasing exponentially over the past decade, roughly doubling every 18 months. Not so with disk performance. In the 1970s, average seek times on minicomputer disks were 50 to 100 msec. Now seek times are still a few msec. In most technical industries (say, automobiles or aviation), a factor of 5 to 10 performance improvement in two decades would be major news (imagine 300-MPG cars), but in the computer industry it is an embarrassment.

Thus the gap between CPU performance and (hard) disk performance has become much larger over time. Can anything be done to help? Yes! As we have seen, parallel processing is increasingly being used to speed up CPU performance. It has occurred to various people over the years that parallel I/O might be a good idea, too. In their 1988 paper, Patterson et al. suggested six specific disk organizations that could be used to improve disk performance, reliability, or both (Patterson et al., 1988).
These ideas were quickly adopted by industry and have led to a new class of I/O device called a RAID. Patterson et al. defined RAID as Redundant Array of Inexpensive Disks, but industry redefined the I to be ‘‘Independent’’ rather than ‘‘Inexpensive’’ (maybe so they could charge more?). Since a villain was also needed (as in RISC vs. CISC, also due to Patterson), the bad guy here was the SLED (Single Large Expensive Disk).

The fundamental idea behind a RAID is to install a box full of disks next to the computer, typically a large server, replace the disk controller card with a RAID controller, copy the data over to the RAID, and then continue normal operation. In other words, a RAID should look like a SLED to the operating system but have better performance and better reliability. In the past, RAIDs consisted almost exclusively of a RAID SCSI controller plus a box of SCSI disks, because the performance was good and modern SCSI supports up to 15 disks on a single controller. Nowadays, many manufacturers also offer (less expensive) RAIDs based on SATA. In this way, no software changes are required to use the RAID, a big selling point for many system administrators.

In addition to appearing like a single disk to the software, all RAIDs have the property that the data are distributed over the drives, to allow parallel operation. Several different schemes for doing this were defined by Patterson et al. Nowadays, most manufacturers refer to the seven standard configurations as RAID level 0 through RAID level 6. In addition, there are a few other minor levels that we will not discuss. The term ‘‘level’’ is something of a misnomer since no hierarchy is involved; there are simply seven different organizations possible.

8.1.2 Disk Formatting:

A hard disk consists of a stack of aluminum, alloy, or glass platters typically 3.5 inch in diameter (or 2.5 inch on notebook computers). On each platter is deposited a thin magnetizable metal oxide.
After manufacturing, there is no information whatsoever on the disk. Before the disk can be used, each platter must receive a low-level format done by software. The format consists of a series of concentric tracks, each containing some number of sectors, with short gaps between the sectors. The format of a sector is shown in Fig. 8.2.

Fig 8.2

The preamble starts with a certain bit pattern that allows the hardware to recognize the start of the sector. It also contains the cylinder and sector numbers and some other information. The size of the data
portion is determined by the low-level formatting program. Most disks use 512-byte sectors. The ECC field contains redundant information that can be used to recover from read errors. The size and content of this field varies from manufacturer to manufacturer, depending on how much disk space the designer is willing to give up for higher reliability and how complex an ECC code the controller can handle. A 16-byte ECC field is not unusual. Furthermore, all hard disks have some number of spare sectors allocated to be used to replace sectors with a manufacturing defect.

The position of sector 0 on each track is offset from the previous track when the low-level format is laid down. This offset, called cylinder skew, is done to improve performance. The idea is to allow the disk to read multiple tracks in one continuous operation without losing data.

8.1.3 Disk Arm Scheduling Algorithms:

In this section we will look at some issues related to disk drivers in general. First, consider how long it takes to read or write a disk block. The time required is determined by three factors:

1. Seek time (the time to move the arm to the proper cylinder).
2. Rotational delay (how long for the proper sector to appear under the reading head).
3. Actual data transfer time.

If the disk driver accepts requests one at a time and carries them out in that order, that is, FCFS (First-Come, First-Served), little can be done to optimize seek time. However, another strategy is possible when the disk is heavily loaded. It is likely that while the arm is seeking on behalf of one request, other disk requests may be generated by other processes. Many disk drivers maintain a table, indexed by cylinder number, with all the pending requests for each cylinder chained together in a linked list headed by the table entries.

Given this kind of data structure, we can improve upon the first-come, first-served scheduling algorithm. To see how, consider an imaginary disk with 40 cylinders. A request comes in to read a block on cylinder 11.
While the seek to cylinder 11 is in progress, new requests come in for cylinders 1, 36, 16, 34, 9, and 12, in that order. They are entered into the table of pending requests, with a separate linked list for each cylinder. The requests are shown in Fig. 8.3. When the current request (for cylinder 11) is finished, the disk driver has a choice of which request to handle next. Using FCFS, it would go next to cylinder 1,
then to 36, and so on. This algorithm would require arm motions of 10, 35, 20, 18, 25, and 3, respectively, for a total of 111 cylinders.
Fig 8.3

8.1.4 Error Handling:

Disk manufacturers are constantly pushing the limits of the technology by increasing linear bit densities. A track midway out on a 5.25-inch disk has a circumference of about 300 mm. If the track holds 300 sectors of 512 bytes, the linear recording density may be about 5000 bits/mm taking into account the fact that some space is lost to preambles, ECCs, and intersector gaps. Recording 5000 bits/mm requires an extremely uniform substrate and a very fine oxide coating. Unfortunately, it is not possible to manufacture a disk to such specifications without defects. As soon as manufacturing technology has improved to the point where it is possible to operate flawlessly at such densities, disk designers will go to higher densities to increase the capacity. Doing so will probably reintroduce defects.

Manufacturing defects introduce bad sectors, that is, sectors that do not correctly read back the value just written to them. If the defect is very small, say, only a few bits, it is possible to use the bad sector and just let the ECC correct the errors every time. If the defect is bigger, the error cannot be masked.

There are two general approaches to bad blocks: deal with them in the controller or deal with them in the operating system. In the former approach, before the disk is shipped from the factory, it is tested and a list of bad sectors is written onto the disk. For each bad sector, one of the spares is substituted for it.

There are two ways to do this substitution. In Fig. 8.4(a) we see a single disk track with 30 data sectors and two spares. Sector 7 is defective. What the controller can do is remap one of the spares as sector 7 as shown in Fig. 8.4(b). The other way is to shift all the sectors up one, as shown in Fig. 8.4(c). In both cases the controller has to know which sector is which. It can keep track of this information through internal tables (one per track) or by rewriting the preambles to give the remapped sector numbers.
If the preambles are rewritten, the method of Fig. 8.4(c) is more
work (because 23 preambles must be rewritten) but ultimately gives better performance because an entire track can still be read in one rotation.
Fig 8.4

Errors can also develop during normal operation after the drive has been installed. The first line of defense upon getting an error that the ECC cannot handle is to just try the read again. Some read errors are transient, that is, are caused by specks of dust under the head and will go away on a second attempt. If the controller notices that it is getting repeated errors on a certain sector, it can switch to a spare before the sector has died completely. In this way, no data are lost and the operating system and user do not even notice the problem. Usually, the method of Fig. 8.4(b) has to be used since the other sectors might now contain data. Using the method of Fig. 8.4(c) would require not only rewriting the preambles, but copying all the data as well.

8.1.5 Stable Storage:

As we have seen, disks sometimes make errors. Good sectors can suddenly become bad sectors. Whole drives can die unexpectedly. RAIDs protect against a few sectors going bad or even a drive falling out. However, they do not protect against write errors laying down bad data in the first place. They also do not protect against crashes during writes corrupting the original data without replacing them by newer data.

For some applications, it is essential that data never be lost or corrupted, even in the face of disk and CPU errors. Ideally, a disk should simply work all the time with no errors. Unfortunately, that is not achievable. What is achievable is a disk subsystem that has the following property: when a write is issued to it, the disk either correctly writes the data or it does nothing, leaving the existing data intact.

Such a system is called stable storage and is implemented in software (Lampson and Sturgis, 1979). The goal is to keep the disk consistent at all costs. Below we will describe a slight variant of the original idea.
Before describing the algorithm, it is important to have a clear model of the possible errors. The model assumes that when a disk writes a block (one or more sectors), either the write is correct or it is incorrect and this error can be detected on a subsequent read by examining the values of the ECC fields. In principle, guaranteed error detection is never possible because with a, say, 16-byte ECC field guarding a 512-byte sector, there are 2^4096 data values and only 2^144 ECC values. Thus if a block is garbled during writing but the ECC is not, there are billions upon billions of incorrect combinations that yield the same ECC. If any of them occur, the error will not be detected. On the whole, the probability of random data having the proper 16-byte ECC is about 2^-144, which is small enough that we will call it zero, even though it is really not.

The model also assumes that a correctly written sector can spontaneously go bad and become unreadable. However, the assumption is that such events are so rare that having the same sector go bad on a second (independent) drive during a reasonable time interval (e.g., 1 day) is small enough to ignore.

The model also assumes the CPU can fail, in which case it just stops. Any disk write in progress at the moment of failure also stops, leading to incorrect data in one sector and an incorrect ECC that can later be detected. Under all these conditions, stable storage can be made 100% reliable in the sense of writes either working correctly or leaving the old data in place. Of course, it does not protect against physical disasters, such as an earthquake happening and the computer falling 100 meters into a fissure and landing in a pool of boiling magma. It is tough to recover from this condition in software.

Stable storage uses a pair of identical disks with the corresponding blocks working together to form one error-free block. In the absence of errors, the corresponding blocks on both drives are the same. Either one can be read to get the same result.
To achieve this goal, the following three operations are defined:

1. Stable writes: A stable write consists of first writing the block on drive 1, then reading it back to verify that it was written correctly. If it was not, the write and reread are done again up to n times until they work. After n consecutive failures, the block is remapped onto a spare and the operation repeated until it succeeds, no matter how many spares have to be tried. After the write to drive 1 has succeeded, the corresponding block on drive 2 is written and reread, repeatedly if need be, until it, too, finally succeeds. In the absence of CPU crashes, when a stable write completes, the block has correctly been written onto both drives and verified on both of them.

2. Stable reads: A stable read first reads the block from drive 1. If this yields an incorrect ECC, the read is tried again, up to n times. If all of these give bad ECCs, the corresponding block is read from drive 2. Given the fact that a successful stable write leaves two good copies of the block


behind, and our assumption that the probability of the same block spontaneously going bad on both drives in a reasonable time interval is negligible, a stable read always succeeds.

3. Crash recovery: After a crash, a recovery program scans both disks comparing corresponding blocks. If a pair of blocks are both good and the same, nothing is done. If one of them has an ECC error, the bad block is overwritten with the corresponding good block. If a pair of blocks are both good but different, the block from drive 1 is written onto drive 2.

In the absence of CPU crashes, this scheme always works because stable writes always write two valid copies of every block and spontaneous errors are assumed never to occur on both corresponding blocks at the same time. What about in the presence of CPU crashes during stable writes? It depends on precisely when the crash occurs. There are five possibilities, as depicted in Fig. 8.5.
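The three operations above can be sketched in a few lines of code. The following is a hypothetical illustration, not the real driver: "drives" are plain dictionaries mapping block numbers to (data, ecc) pairs, and `zlib.crc32` stands in for a real ECC field. The names (`stable_write`, `stable_read`, `crash_recover`, `N_RETRIES`) are invented for this sketch; spare-block remapping is omitted.

```python
# Hypothetical sketch of stable storage over two simulated drives.
import zlib

N_RETRIES = 3  # the "n" retries from the text

def ecc(data: bytes) -> int:
    return zlib.crc32(data)            # stand-in for a real ECC field

def write_block(drive: dict, block: int, data: bytes) -> None:
    drive[block] = (data, ecc(data))

def read_ok(drive: dict, block: int):
    data, stored = drive.get(block, (b"", -1))
    return data if stored == ecc(data) else None   # None = bad ECC

def stable_write(d1: dict, d2: dict, block: int, data: bytes) -> None:
    for drive in (d1, d2):             # drive 1 first, then drive 2
        for _ in range(N_RETRIES):
            write_block(drive, block, data)
            if read_ok(drive, block) is not None:  # verify by rereading
                break

def stable_read(d1: dict, d2: dict, block: int) -> bytes:
    for drive in (d1, d2):             # fall back to drive 2 on bad ECC
        data = read_ok(drive, block)
        if data is not None:
            return data
    raise IOError("both copies bad -- assumed never to happen")

def crash_recover(d1: dict, d2: dict) -> None:
    for block in set(d1) | set(d2):
        g1, g2 = read_ok(d1, block), read_ok(d2, block)
        if g1 is None and g2 is not None:
            write_block(d1, block, g2)  # repair drive 1 from drive 2
        elif g1 is not None and (g2 is None or g1 != g2):
            write_block(d2, block, g1)  # drive 1 wins when both good
```

For example, after `stable_write(d1, d2, 0, b"hello")`, corrupting block 0 on drive 2 and running `crash_recover` restores it from the good copy on drive 1.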
Fig 8.5

8.2 CLOCKS

Clocks (also called timers) are essential to the operation of any multiprogrammed system for a variety of reasons. They maintain the time of day and prevent one process from monopolizing the CPU, among other things. The clock software can take the form of a device driver, even though a clock is neither a block device, like a disk, nor a character device, like a mouse. Our examination of clocks will follow the same pattern as in the previous section: first a look at clock hardware and then a look at the clock software.


8.2.1 Clock Hardware:

Two types of clocks are commonly used in computers, and both are quite different from the clocks and watches used by people. The simpler clocks are tied to the 110- or 220-volt power line and cause an interrupt on every voltage cycle, at 50 or 60 Hz. These clocks used to dominate, but are rare nowadays.

When a piece of quartz crystal is properly cut and mounted under tension, it can be made to generate a periodic signal of very great accuracy, typically in the range of several hundred megahertz to a few gigahertz, depending on the crystal chosen. Using electronics, this base signal can be multiplied by a small integer to get frequencies up to several gigahertz or even more. At least one such circuit is usually found in any computer, providing a synchronizing signal to the computer's various circuits. This signal is fed into a counter to make it count down to zero. When the counter gets to zero, it causes a CPU interrupt.

Programmable clocks typically have several modes of operation. In one-shot mode, when the clock is started, it copies the value of the holding register into the counter and then decrements the counter at each pulse from the crystal. When the counter gets to zero, it causes an interrupt and stops until it is explicitly started again by the software. In square-wave mode, after getting to zero and causing the interrupt, the holding register is automatically copied into the counter, and the whole process is repeated again indefinitely. These periodic interrupts are called clock ticks.

8.2.2 Clock Software:

All the clock hardware does is generate interrupts at known intervals. Everything else involving time must be done by the software, the clock driver.
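The one-shot and square-wave behavior described above can be simulated in software. This is a hypothetical sketch (the class and field names are invented for illustration, and interrupt delivery is reduced to incrementing a counter):

```python
# Hypothetical simulation of a programmable clock: a holding register
# plus a counter decremented on each crystal pulse.
ONE_SHOT, SQUARE_WAVE = "one-shot", "square-wave"

class ProgrammableClock:
    def __init__(self, holding_register: int, mode: str):
        self.holding = holding_register
        self.mode = mode
        self.counter = holding_register  # loaded when the clock starts
        self.running = True
        self.interrupts = 0              # CPU interrupts raised so far

    def crystal_pulse(self) -> None:
        if not self.running:
            return
        self.counter -= 1                # decrement on each crystal pulse
        if self.counter == 0:
            self.interrupts += 1         # counter hit zero: interrupt CPU
            if self.mode == SQUARE_WAVE:
                self.counter = self.holding  # reload and repeat (ticks)
            else:
                self.running = False     # one-shot: stop until restarted

clk = ProgrammableClock(holding_register=3, mode=SQUARE_WAVE)
for _ in range(9):                       # 9 pulses with holding = 3
    clk.crystal_pulse()
print(clk.interrupts)                    # -> 3 clock ticks
```

With a holding register of 3, nine crystal pulses produce three periodic interrupts in square-wave mode, but only one in one-shot mode.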
The exact duties of the clock driver vary among operating systems, but usually include most of the following:
1. Maintaining the time of day.
2. Preventing processes from running longer than they are allowed to.
3. Accounting for CPU usage.
4. Handling the alarm system call made by user processes.
5. Providing watchdog timers for parts of the system itself.
6. Doing profiling, monitoring, and statistics gathering.

The first clock function, maintaining the time of day (also called the real time), is not difficult.

8.3 USER INTERFACES: KEYBOARD, MOUSE, MONITOR

Every general-purpose computer has a keyboard and monitor (and sometimes a mouse) to allow people to interact with it. Although the keyboard and monitor are technically separate devices, they work closely


together. On mainframes, there are frequently many remote users, each with a device containing a keyboard and an attached display as a unit. These devices have historically been called terminals. People frequently still use that term, even when discussing personal computer keyboards and monitors (mostly for lack of a better term).

8.3.1 Input Software:

User input comes primarily from the keyboard and mouse (or sometimes touch screens), so let us look at those. On a personal computer, the keyboard contains an embedded microprocessor which usually communicates through a specialized serial port with a controller chip on the parentboard (although increasingly keyboards are connected to a USB port). An interrupt is generated whenever a key is struck and a second one is generated whenever a key is released. At each of these keyboard interrupts, the keyboard driver extracts the information about what happens from the I/O port associated with the keyboard. Everything else happens in software and is pretty much independent of the hardware.

Most of the rest of this section can be best understood when thinking of typing commands to a shell window (command-line interface). This is how programmers commonly work. We will discuss graphical interfaces below. Some devices, in particular touch screens, are used for input and output. We have made an (arbitrary) choice to discuss them in the section on output devices. We will discuss graphical interfaces later in this chapter.

Keyboard Software:

The number in the I/O register is the key number, called the scan code, not the ASCII code. Normal keyboards have fewer than 128 keys, so only 7 bits are needed to represent the key number. The eighth bit is set to 0 on a key press and to 1 on a key release. It is up to the driver to keep track of the status of each key (up or down). So all the hardware does is give press and release interrupts. Software does the rest.

When the A key is struck, for example, the scan code (30) is put in an I/O register.
It is up to the driver to determine whether it is lowercase, uppercase, CTRL-A, ALT-A, CTRL-ALT-A, or some other combination. Since the driver can tell which keys have been struck but not yet released (e.g., SHIFT), it has enough information to do the job.

For example, the key sequence DEPRESS SHIFT, DEPRESS A, RELEASE A, RELEASE SHIFT indicates an uppercase A. However, the key sequence DEPRESS SHIFT, DEPRESS A, RELEASE SHIFT, RELEASE A also indicates an uppercase A. Although this keyboard interface puts the full burden on the software, it is extremely flexible. For


example, user programs may be interested in whether a digit just typed came from the top row of keys or the numeric keypad on the side. In principle, the driver can provide this information.

Two possible philosophies can be adopted for the driver. In the first one, the driver's job is just to accept input and pass it upward unmodified. A program reading from the keyboard gets a raw sequence of ASCII codes. (Giving user programs the scan codes is too primitive, as well as being highly keyboard dependent.) This philosophy is well suited to the needs of sophisticated screen editors such as emacs, which allow the user to bind an arbitrary action to any character or sequence of characters. It does, however, mean that if the user types dste instead of date and then corrects the error by typing three backspaces and ate, followed by a carriage return, the user program will be given all 11 ASCII codes typed, as follows:

d s t e ← ← ← a t e CR

Not all programs want this much detail. Often they just want the corrected input, not the exact sequence of how it was produced. This observation leads to the second philosophy: the driver handles all the intraline editing and just delivers corrected lines to the user programs. The first philosophy is character oriented; the second one is line oriented. Originally they were referred to as raw mode and cooked mode, respectively.

Mouse Software:

Most PCs have a mouse, or sometimes a trackball, which is just a mouse lying on its back. One common type of mouse has a rubber ball inside that protrudes through a hole in the bottom and rotates as the mouse is moved over a rough surface. As the ball rotates, it rubs against rubber rollers placed on orthogonal shafts. Motion in the east-west direction causes the shaft parallel to the y-axis to rotate; motion in the north-south direction causes the shaft parallel to the x-axis to rotate. Another popular type is the optical mouse, which is equipped with one or more light-emitting diodes and photodetectors on the bottom.
Early ones had to operate on a special mousepad with a rectangular grid etched onto it so the mouse could count lines crossed. Modern optical mice have an image-processing chip in them and make continuous low-resolution photos of the surface under them, looking for changes from image to image. Whenever a mouse has moved a certain minimum distance in either direction or a button is depressed or released, a message is sent to the computer. The minimum distance is about 0.1 mm (although it can be set in software). Some people call this unit a mickey. Mice (or occasionally, mouses) can have one, two, or three buttons, depending on the designers' estimate of the users' intellectual ability to keep track of more than one button. Some mice have wheels that can send additional data back to the computer. Wireless mice are the same as wired mice except that instead of sending their data back to the computer over a wire, they use low-power radios, for example, using the Bluetooth standard.
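The raw-versus-cooked distinction from the keyboard discussion above can be sketched in a few lines. This is a hypothetical illustration (only backspace and carriage return are handled; the function name `cooked_line` is invented) of how a cooked-mode driver turns the 11-code sequence d s t e ← ← ← a t e CR into the corrected line:

```python
# Hypothetical sketch of cooked mode: the driver absorbs intraline
# edits and hands the program a corrected line, not the raw key codes.
BACKSPACE, CR = "\b", "\r"

def cooked_line(raw_keys):
    line = []
    for key in raw_keys:
        if key == CR:            # carriage return ends the line
            break
        elif key == BACKSPACE:
            if line:
                line.pop()       # erase the last character, if any
        else:
            line.append(key)     # ordinary character
    return "".join(line)

raw = list("dste") + [BACKSPACE] * 3 + list("ate") + [CR]
print(cooked_line(raw))          # -> date
```

A raw-mode driver, by contrast, would deliver all eleven codes unmodified and leave the editing to the application.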


8.3.2 Output Software:

Now let us consider output software. First we will look at simple output to a text window, which is what programmers normally prefer to use. Then we will consider graphical user interfaces, which other users often prefer.

Text Windows:

Output is simpler than input when the output is sequential in a single font, size, and color. For the most part, the program sends characters to the current window and they are displayed there. Usually, a block of characters, for example, a line, is written in one system call.

Screen editors and many other sophisticated programs need to be able to update the screen in complex ways such as replacing one line in the middle of the screen. To accommodate this need, most output drivers support a series of commands to move the cursor, insert and delete characters or lines at the cursor, and so on. These commands are often called escape sequences. In the heyday of the dumb 25 × 80 ASCII terminal, there were hundreds of terminal types, each with its own escape sequences. As a consequence, it was difficult to write software that worked on more than one terminal type.

One solution, which was introduced in Berkeley UNIX, was a terminal database called termcap. This software package defined a number of basic actions, such as moving the cursor to (row, column). To move the cursor to a particular location, the software, say, an editor, used a generic escape sequence which was then converted to the actual escape sequence for the terminal being written to. In this way, the editor worked on any terminal that had an entry in the termcap database. Much UNIX software still works this way, even on personal computers. Eventually, the industry saw the need for standardizing the escape sequences, so an ANSI standard was developed.

8.4 THIN CLIENTS

Over the years, the main computing paradigm has oscillated between centralized and decentralized computing.
The first computers, such as the ENIAC, were, in fact, personal computers, albeit large ones, because only one person could use one at once. Then came timesharing systems, in which many remote users at simple terminals shared a big central computer. Next came the PC era, in which the users had their own personal computers again.

While the decentralized PC model has advantages, it also has some severe disadvantages that are only beginning to be taken seriously. Probably the biggest problem is that each PC has a large hard disk and complex software that must be maintained. For example, when a new


release of the operating system comes out, a great deal of work has to be done to perform the upgrade on each machine separately. At most corporations, the labor costs of doing this kind of software maintenance dwarf the actual hardware and software costs. For home users, the labor is technically free, but few people are capable of doing it correctly and fewer still enjoy doing it. With a centralized system, only one or a few machines have to be updated and those machines have a staff of experts to do the work.

A related issue is that users should make regular backups of their gigabyte file systems, but few of them do. When disaster strikes, a great deal of moaning and wringing of hands tends to follow. With a centralized system, backups can be made every night by automated tape robots.

Another advantage is that resource sharing is easier with centralized systems. A system with 256 remote users, each with 256 MB of RAM, will have most of that RAM idle most of the time. With a centralized system with 64 GB of RAM, it never happens that some user temporarily needs a lot of RAM but cannot get it because it is on someone else's PC. The same argument holds for disk space and other resources.

Finally, we are starting to see a shift from PC-centric computing to Web-centric computing. One area where this shift is very far along is email. People used to get their email delivered to their home machine and read it there. Nowadays, many people log into Gmail, Hotmail, or Yahoo and read their mail there. The next step is for people to log into other Websites to do word processing, build spreadsheets, and other things that used to require PC software. It is even possible that eventually the only software people run on their PC is a Web browser, and maybe not even that.
It is probably a fair conclusion to say that most users want high-performance interactive computing but do not really want to administer a computer. This has led researchers to reexamine timesharing using dumb terminals (now politely called thin clients) that meet modern terminal expectations.

8.5 POWER MANAGEMENT

The first general-purpose electronic computer, the ENIAC, had 18,000 vacuum tubes and consumed 140,000 watts of power. As a result, it ran up a nontrivial electricity bill. After the invention of the transistor, power usage dropped dramatically and the computer industry lost interest in power requirements. However, nowadays power management is back in the spotlight for several reasons, and the operating system is playing a role here.

Let us start with desktop PCs. A desktop PC often has a 200-watt power supply (which is typically 85% efficient, that is, loses 15% of the incoming energy to heat). If 100 million of these machines are turned on at once worldwide, together they use 20,000 megawatts of electricity. This is the total output of 20 average-sized nuclear power plants. If power requirements could be cut in half, we could get rid of 10 nuclear power plants. From an environmental point of view, getting rid of 10 nuclear


power plants (or an equivalent number of fossil-fuel plants) is a big win and well worth pursuing.

The other place where power is a big issue is on battery-powered computers, including notebooks, handhelds, and Webpads, among others. The heart of the problem is that the batteries cannot hold enough charge to last very long, a few hours at most. Furthermore, despite massive research efforts by battery companies, computer companies, and consumer electronics companies, progress is glacial. To an industry used to a doubling of performance every 18 months (Moore's law), having no progress at all seems like a violation of the laws of physics, but that is the current situation. As a consequence, making computers use less energy so existing batteries last longer is high on everyone's agenda. The operating system plays a major role here, as we will see below.

At the lowest level, hardware vendors are trying to make their electronics more energy efficient. Techniques used include reducing transistor size, employing dynamic voltage scaling, using low-swing and adiabatic buses, and similar techniques. These are outside the scope of this book, but interested readers can find a good survey in a paper by Venkatachalam and Franz (2005).

There are two general approaches to reducing energy consumption. The first one is for the operating system to turn off parts of the computer (mostly I/O devices) when they are not in use, because a device that is off uses little or no energy. The second one is for the application program to use less energy, possibly degrading the quality of the user experience, in order to stretch out battery time. We will look at each of these approaches in turn, but first we will say a little bit about hardware design with respect to power usage.

8.6 SUMMARY

While using memory-mapped I/O, the OS allocates a buffer in memory and informs the I/O device to use that buffer to send data to the CPU. The I/O device operates asynchronously with the CPU and interrupts the CPU when finished.
Memory-mapped I/O is used for most high-speed I/O devices like disks and communication interfaces.

8.7 UNIT END QUESTIONS

1) What problems could occur if a system allowed a file system to be mounted simultaneously at more than one location?
2) What criteria should be used in deciding which strategy is best utilized for a particular file?
3) What is meant by RAID?
4) What are the various disk-scheduling algorithms?
5) What is low-level formatting?
6) Explain power management in detail.
7) Explain thin clients in detail.

*****


9

DEADLOCKS

Unit Structure
9.0 Objectives
9.1 Resources
9.2 Introduction
9.3 Ignoring the problem--The ostrich algorithm
9.4 Detecting the deadlock
9.5 Deadlock avoidance
9.6 Deadlock Prevention
9.7 Issues
9.8 Summary
9.9 Unit End Questions

9.0 OBJECTIVES

• To understand what a deadlock is
• To learn how resource acquisition works
• To learn different methods of detecting deadlocks and recovering
• To understand the Ostrich Algorithm

A deadlock occurs when every member of a set of processes is waiting for an event that can only be caused by a member of the set. Often the event waited for is the release of a resource. In the automotive world deadlocks are called gridlocks: the processes are the cars and the resources are the spaces occupied by the cars.
Figure 9.a Deadlock


For a computer science example, consider two processes A and B that each want to print a file currently on tape.
1. A has obtained ownership of the printer and will release it after printing one file.
2. B has obtained ownership of the tape drive and will release it after reading one file.
3. A tries to get ownership of the tape drive, but is told to wait for B to release it.
4. B tries to get ownership of the printer, but is told to wait for A to release the printer.

9.1 RESOURCES

The resource is the object granted to a process.

9.1.1: Preemptable and Non-preemptable Resources

● Resources come in two types:
1. Preemptable, meaning that the resource can be taken away from its current owner (and given back later). An example is memory.
2. Non-preemptable, meaning that the resource cannot be taken away. An example is a printer.
● The interesting issues arise with non-preemptable resources, so those are the ones we study.
● The life history of a resource is a sequence of:
1. Request
2. Allocate
3. Use
4. Release
● Processes make requests, use the resource, and release the resource. The allocation decisions are made by the system and we will study policies used to make these decisions.

A simple example of the trouble you can get into: two resources and two processes.
• Each process wants both resources.
• Use a semaphore for each. Call them S and T.
• If both processes execute P(S); P(T); --- V(T); V(S), all is well.
• But if one executes instead P(T); P(S); --- V(S); V(T), disaster! This was the printer/tape example just above.

Recall from the semaphore/critical-section treatment that it is easy to cause trouble if a process dies or stays forever inside its critical section. Similarly, we assume that no process maintains a resource forever. It may


obtain the resource an unbounded number of times (i.e., it can have a loop forever with a resource request inside), but each time it gets the resource, it must release it eventually.

9.2 INTRODUCTION TO DEADLOCKS

To repeat: A deadlock occurs when every member of a set of processes is waiting for an event that can only be caused by a member of the set. Often the event waited for is the release of a resource.

9.2.1: (Necessary) Conditions for Deadlock

The following four conditions (Coffman; Havender) are necessary but not sufficient for deadlock. Repeat: They are not sufficient.
1. Mutual exclusion: A resource can be assigned to at most one process at a time (no sharing).
2. Hold and wait: A process holding a resource is permitted to request another.
3. No preemption: A process must release its resources; they cannot be taken away.
4. Circular wait: There must be a chain of processes such that each member of the chain is waiting for a resource held by the next member of the chain.

The first three are characteristics of the system and resources. That is, for a given system with a fixed set of resources, the first three conditions are either true or false: they don't change with time. The truth or falsehood of the last condition does indeed change with time as the resources are requested/allocated/released.

9.2.2: Deadlock Modeling:

Following are several examples of a Resource Allocation Graph, also called a Reusable Resource Graph.
Figure 9.1 Resource Allocation Graph


• The processes are circles.
• The resources are squares.
• An arc (directed line) from a process P to a resource R signifies that process P has requested (but not yet been allocated) resource R.
• An arc from a resource R to a process P indicates that process P has been allocated resource R.

There are four strategies used for dealing with deadlocks:
1. Ignore the problem.
2. Detect deadlocks and recover from them.
3. Avoid deadlocks by carefully deciding when to allocate resources.
4. Prevent deadlocks by violating one of the 4 necessary conditions.

9.3 IGNORING THE PROBLEM--THE OSTRICH ALGORITHM

The "put your head in the sand" approach.
• If the likelihood of a deadlock is sufficiently small and the cost of avoiding a deadlock is sufficiently high, it might be better to ignore the problem.
• For example, if each PC deadlocks once per 100 years, the one reboot may be less painful than the restrictions needed to prevent it.
• Clearly not a good philosophy for nuclear missile launchers.
• For embedded systems (e.g., missile launchers) the programs run are fixed in advance, so many of the questions Tanenbaum raises (such as many processes wanting to fork at the same time) don't occur.

9.4 DETECTING DEADLOCKS AND RECOVERING

9.4.1 Detecting Deadlocks with Single Unit Resources
• Consider the case in which there is only one instance of each resource.
• Thus a request can be satisfied by only one specific resource.
• In this case the 4 necessary conditions for deadlock are also sufficient.
• Remember we are making an assumption (single unit resources) that is often invalid. For example, many systems have several printers and a request is given for "a printer", not a specific printer. Similarly, one can have many tape drives.
• So the problem comes down to finding a directed cycle in the resource allocation graph. Why?
• Answer: Because the other three conditions are either satisfied by the system we are studying or are not, in which case deadlock is not a


question. That is, conditions 1, 2, 3 are conditions on the system in general, not on what is happening right now.

To find a directed cycle in a directed graph is not hard. The idea is simple.
1. For each node in the graph do a depth-first traversal to see if the graph is a DAG (directed acyclic graph), building a list as you go down the DAG (and pruning it as you backtrack back up).
2. If you ever find the same node twice on your list, you have found a directed cycle, the graph is not a DAG, and deadlock exists among the processes in your current list.
3. If you never find the same node twice, the graph is a DAG and no deadlock occurs.
4. The searches are finite since there are a finite number of nodes.

9.4.2: Detecting Deadlocks with Multiple Unit Resources:

This is more difficult.
Figure 9.2 Resource allocation graph

•The figure above shows a resource allocation graph with multiple-unit resources.
•Each unit is represented by a dot in the box.
•Request edges are drawn to the box, since they represent a request for any dot in the box.
•Allocation edges are drawn from a dot, to represent that this particular unit of the resource has been assigned (but all units of a resource are equivalent, and the choice of which one to assign is arbitrary).
•Note that there is a directed cycle in red, but there is no deadlock. Indeed the middle process might finish, erasing the green arc and permitting the blue dot to satisfy the rightmost process.
•There is an algorithm for detecting deadlocks in this more general setting. The idea is as follows.
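To make the single-unit cycle test of Section 9.4.1 concrete, here is a minimal sketch of the depth-first cycle search. The graph, its node names, and the helper `find_cycle` are invented for illustration; they are not from the text:

```python
# Depth-first search for a directed cycle in a single-unit resource
# allocation graph. The graph below is a made-up example: P1 waits for
# R1, R1 is held by P2, P2 waits for R2, and R2 is held by P1, so
# P1 and P2 are deadlocked. P3 holds and waits for nothing.
graph = {
    "P1": ["R1"], "R1": ["P2"],
    "P2": ["R2"], "R2": ["P1"],
    "P3": [],
}

def find_cycle(graph):
    """Return the nodes on a directed cycle, or None if the graph is a DAG."""
    def dfs(node, path):
        if node in path:                      # same node twice: a cycle
            return path[path.index(node):]
        for succ in graph.get(node, []):
            cycle = dfs(succ, path + [node])  # extend the list going down
            if cycle:
                return cycle
        return None                           # prune on backtrack

    for start in graph:                       # try every node as a root
        cycle = dfs(start, [])
        if cycle:
            return cycle
    return None

print(find_cycle(graph))   # ['P1', 'R1', 'P2', 'R2']
```

A returned list such as `['P1', 'R1', 'P2', 'R2']` names the processes and resources on the deadlock cycle; `None` means the graph is acyclic and no deadlock exists.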
munotes.in


1. Look for a process that might be able to terminate (i.e., all its request arcs can be satisfied).
2. If one is found, pretend that it does terminate (erase all its arcs), and repeat step 1.
3. If any processes remain, they are deadlocked.

•The algorithm just given makes the most optimistic assumption about a running process: it will return all its resources and terminate normally. If we still find processes that remain blocked, they are deadlocked.

9.4.3 Recovery from Deadlock:
Suppose that our deadlock detection algorithm has succeeded and detected a deadlock. What next? Some way is needed to recover and get the system going again. In this section we will discuss various ways of recovering from deadlock.

Preemption:
In some cases it may be possible to temporarily take a resource away from its current owner and give it to another process, although this is rarely feasible. For example, to take a laser printer away from its owner, the operator can collect all the sheets already printed and put them in a pile. Then the process can be suspended (marked as not runnable). At this point the printer can be assigned to another process. When that process finishes, the pile of printed sheets can be put back in the printer's output tray and the original process restarted.

Rollback:
If the system designers and machine operators know that deadlocks are likely, they can arrange to have processes checkpointed periodically. Checkpointing a process means that its state is written to a file so that it can be restarted later. Database (and other) systems take periodic checkpoints, and if the system does take checkpoints, one can roll back to a checkpoint whenever a deadlock is detected. Somehow the system must still guarantee forward progress.
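The checkpoint/rollback idea can be sketched in a few lines of Python. This is only an illustration under strong simplifying assumptions: a real checkpoint must capture the entire process state (memory image, registers, open files), not a small dictionary, and the file name `proc.ckpt` is invented:

```python
import pickle, tempfile, os

def checkpoint(state, path):
    # Write the process state to a file so it can be restored later.
    with open(path, "wb") as f:
        pickle.dump(state, f)

def rollback(path):
    # Restore the most recently checkpointed state.
    with open(path, "rb") as f:
        return pickle.load(f)

path = os.path.join(tempfile.gettempdir(), "proc.ckpt")
state = {"pc": 42, "held": ["printer"]}   # a toy "process state"
checkpoint(state, path)
state["pc"] = 99                          # the process runs on...
state = rollback(path)                    # ...deadlock detected: roll back
print(state["pc"])                        # 42
```

Note that rolling back discards all work done after the checkpoint, which is why forward progress must still be guaranteed somehow.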


Kill processes:
The crudest but simplest way to break a deadlock is to kill one or more processes. One possibility is to kill a process in the cycle. With a little luck, the other processes will be able to continue. If this does not help, it can be repeated until the cycle is broken.

This can always be done but might be painful, because some processes have had effects that cannot simply be undone: printing a page, launching a missile, etc.

9.5 DEADLOCK AVOIDANCE
In the discussion of deadlock detection, we tacitly assumed that when a process asks for resources, it asks for them all at once (the R matrix of Figure 9.3). In most systems, however, resources are requested one at a time. The system must be able to decide whether granting a resource is safe or not, and make the allocation only when it is safe. Thus the question arises: is there an algorithm that can always avoid deadlock by making the right choice all the time? The answer is a qualified yes: we can avoid deadlocks, but only if certain information is available in advance.

9.5.1 Resource Trajectories:
We plot the progress of each process along an axis. In the example we show, there are two processes, hence two axes, i.e., the plot is planar. This procedure assumes that we know the entire request and release pattern of the processes in advance, so it is not a practical solution; it is presented as motivation for the practical solution that follows, the Banker's Algorithm.
Figure 9.3 R Matrix


•We have two processes, H (horizontal) and V (vertical).
•The origin represents them both starting.
•Their combined state is a point on the graph.
•The parts where the printer and plotter are needed by each process are indicated.
•The dark green region is where both processes have the plotter; execution cannot reach this region.
•Light green represents both having the printer; also impossible.
•Pink is both having both a printer and a plotter; impossible.
•Gold is possible (H has the plotter, V has the printer), but the system can't get there.
•The upper right corner is the goal; both processes have finished.
•The red dot is ... (cymbals) deadlock. We don't want to go there.
•The cyan region is safe. From anywhere in the cyan we have horizontal and vertical moves to the finish point (the upper right corner) without hitting any impossible area.
•The magenta interior is very interesting. It is:
   - Possible: each process has a different resource.
   - Not deadlocked: each process can move within the magenta.
   - Deadly: deadlock is unavoidable. You will hit a magenta-green boundary and then have no choice but to turn and go to the red dot.
•The cyan-magenta border is the danger zone.
•The dashed line represents a possible execution pattern.
•With a uniprocessor no diagonals are possible. We either move to the right, meaning H is executing, or move up, indicating V is executing.
•The trajectory shown represents:
1. H executing a little.
2. V executing a little.
3. H executes; requests the printer; gets it; executes some more.
4. V executes; requests the plotter.
•The crisis is at hand!
•If the resource manager gives V the plotter, the magenta has been entered and all is lost. "Abandon all hope, ye who enter here" (Dante).
•The right thing to do is to deny the request, letting H execute and move horizontally under the magenta and dark green. At the end of the dark green, no danger remains, and both processes will complete successfully. Victory!


•This procedure is not practical for a general-purpose OS, since it requires knowing the programs in advance. That is, the resource manager would have to know in advance what requests each process will make and in what order.

9.5.2 Safe States:
Avoiding deadlocks requires some extra knowledge.
•Not surprisingly, the resource manager knows how many units of each resource it had to begin with.
•Also, it knows how many units of each resource it has given to each process.
•It would be great to see all the programs in advance and thus know all future requests, but that is asking for too much.
•Instead, when each process starts, it announces its maximum usage. That is, each process, before making any resource requests, tells the resource manager the maximum number of units of each resource the process can possibly need.
•This is called the claim of the process.
   - If the claim is greater than the total number of units in the system, the resource manager kills the process when receiving the claim (or returns an error code so that the process can make a new claim).
   - If during the run the process asks for more than its claim, the process is aborted (or an error code is returned and no resources are allocated).
   - If a process claims more than it needs, the result is that the resource manager will be more conservative than it needs to be and there will be more waiting.

Definition: A state is safe if there is an ordering of the processes such that, if the processes are run in this order, they will all terminate (assuming none exceeds its claim).

Recall the comparison made above between detecting deadlocks (with multi-unit resources) and the banker's algorithm:
•The deadlock detection algorithm given makes the most optimistic assumption about a running process: it will return all its resources and terminate normally. If we still find processes that remain blocked, they are deadlocked.
•The banker's algorithm makes the most pessimistic assumption about a running process: it immediately asks for all the resources it can (details later on "can").
If, even with such demanding processes, the resource manager can assure that all processes terminate, then we can assure that deadlock is avoided.


In the definition of a safe state no assumption is made about the running processes; that is, for a state to be safe, termination must occur no matter what the processes do (provided they all terminate and do not exceed their claims). Making no assumption is the same as making the most pessimistic assumption.

Give an example of each of the four possibilities. A state that is:
1. Safe and deadlocked--not possible.
2. Safe and not deadlocked--trivial (e.g., no arcs).
3. Not safe and deadlocked--easy (any deadlocked state).
4. Not safe and not deadlocked--interesting.

Is the figure below safe or not?
Figure 9.4 Safe state

•You can NOT tell until I give you the initial claims of the processes.
•Please do not make the unfortunately common exam mistake of giving an example involving safe states without giving the claims.
•For the figure above, if the initial claims are P: 1 unit of R and 2 units of S (written (1,2)) and Q: 2 units of R and 1 unit of S (written (2,1)), the state is NOT safe.
•But if the initial claims are instead P: 2 units of R and 1 unit of S (written (2,1)) and Q: 1 unit of R and 2 units of S (written (1,2)), the state IS safe.
•Explain why this is so.

A manager can determine if a state is safe:
•Since the manager knows all the claims, it can determine the maximum amount of additional resources each process can request.
•The manager knows how many units of each resource it has left.


The manager then follows this procedure, which is part of the Banker's Algorithm discovered by Dijkstra, to determine if the state is safe:
1. If there are no processes remaining, the state is safe.
2. Seek a process P whose maximum additional requests can be met by what remains (for each resource type).
   •If no such process can be found, then the state is not safe.
   •The banker (manager) knows that if it refuses all requests except those from P, then it will be able to satisfy all of P's requests. Why? Answer: look at how P was chosen.
3. The banker now pretends that P has terminated (since the banker knows that it can guarantee this will happen). Hence the banker pretends that all of P's currently held resources are returned. This makes the banker richer, and hence perhaps a process that was not eligible to be chosen as P previously can now be chosen.
4. Repeat these steps.

Example 1
•One resource type R with 22 units.
•Three processes X, Y, and Z with initial claims 3, 11, and 19 respectively.
•Currently the processes have 1, 5, and 10 units respectively.
•Hence the manager currently has 6 units left.
•Also note that the maximum additional needs of the processes are 2, 6, and 9 respectively.
•So the manager cannot assure (with its current remaining supply of 6 units) that Z can terminate. But that is not the question.
•This state is safe:
1. Use 2 units to satisfy X; now the manager has 7 units.
2. Use 6 units to satisfy Y; now the manager has 12 units.
3. Use 9 units to satisfy Z; done.
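The safety check just described, applied to Example 1, can be sketched as follows. The function name `is_safe` is ours; the numbers are the ones from the example:

```python
def is_safe(total, claim, held):
    """Banker's safety check for a single resource type.

    claim[i]: process i's maximum possible need (its claim)
    held[i]:  units currently allocated to process i
    Returns a safe termination order, or None if the state is unsafe.
    """
    free = total - sum(held)
    need = [c - h for c, h in zip(claim, held)]
    remaining = list(range(len(claim)))
    order = []
    while remaining:
        # Find a process whose maximum additional need fits in what is free.
        p = next((i for i in remaining if need[i] <= free), None)
        if p is None:
            return None          # no candidate: the state is not safe
        free += held[p]          # pretend p finishes and returns its units
        remaining.remove(p)
        order.append(p)
    return order

# Example 1 from the text: 22 units, claims 3/11/19, holdings 1/5/10.
print(is_safe(22, [3, 11, 19], [1, 5, 10]))   # [0, 1, 2]: X, then Y, then Z
```

With 22 units, claims (3, 11, 19), and holdings (1, 5, 10), the check finds the order X, Y, Z, confirming that the state is safe. For an unsafe state it returns `None`.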


9.6 DEADLOCK PREVENTION
Attack one of the Coffman/Havender conditions.

9.6.1: Attacking Mutual Exclusion:
First let us attack the mutual exclusion condition. If no resource were ever assigned exclusively to a single process, we would never have deadlocks. For data, the simplest method is to make the data read-only, so that processes can use the data concurrently. However, it is equally clear that allowing two processes to write on the printer at the same time will lead to chaos. By spooling printer output, several processes can generate output at the same time. In this model, the only process that actually requests the physical printer is the printer daemon. Since the daemon never requests any other resources, we can eliminate deadlock for the printer.

If the daemon is programmed to begin printing even before all the output is spooled, the printer might lie idle if an output process decides to wait several hours after the first burst of output. For this reason, daemons are normally programmed to print only after the complete output file is available. However, this decision itself could lead to deadlock. What would happen if two processes each filled up one half of the available spooling space with output and neither was finished producing its full output? In this case, we would have two processes that had each finished part, but not all, of their output, and could not continue. Neither process will ever finish, so we would have a deadlock on the disk.

9.6.2: Attacking Hold and Wait:
Require each process to request all resources at the beginning of the run. This is often called One Shot. If we can prevent processes that hold resources from waiting for more resources, we can eliminate deadlocks. One way to achieve this goal is to require all processes to request all their resources before starting execution. If everything is available, the process will be allocated whatever it needs and can run to completion.
If one or more resources are busy, nothing will be allocated and the process will just wait. An immediate problem with this approach is that many processes do not know how many resources they will need until they have started running.

9.6.3: Attacking No Preemption:
If a process has been assigned the printer and is in the middle of printing its output, forcibly taking away the printer because a needed plotter is not available is tricky at best and impossible at worst. However, some resources can be virtualized to avoid this situation. Spooling printer output to the disk and allowing only the printer daemon access to the real


printer eliminates deadlocks involving the printer, although it creates a potential for deadlock over disk space. With large disks, though, running out of disk space is unlikely.

9.6.4: Attacking Circular Wait:
The circular wait can be eliminated in several ways. One way is simply to have a rule saying that a process is entitled only to a single resource at any moment. If it needs a second one, it must release the first one. For a process that needs to copy a huge file from a tape to a printer, this restriction is unacceptable. Another way to avoid the circular wait is to provide a global numbering of all the resources, as shown in Fig. 9.b. Now the rule is this: processes can request resources whenever they want to, but all requests must be made in numerical order. A process may request first a printer and then a tape drive, but it may not request first a plotter and then a printer.
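The numerical-ordering rule is easy to enforce mechanically. A sketch, with an invented numbering (`threading.Lock` stands in for an arbitrary exclusive resource):

```python
import threading

# Hypothetical global numbering of resources: 1 = printer, 2 = tape
# drive, 3 = plotter. The numbering itself is the point, not the names.
locks = {1: threading.Lock(), 2: threading.Lock(), 3: threading.Lock()}

def acquire_in_order(resource_ids):
    # Always take locks in ascending numeric order, so no two
    # processes can ever wait on each other in a cycle.
    order = sorted(resource_ids)
    for rid in order:
        locks[rid].acquire()
    return order

def release_all(resource_ids):
    for rid in reversed(resource_ids):
        locks[rid].release()

held = acquire_in_order([3, 1])   # requested "plotter then printer"...
print(held)                       # [1, 3]: ...but taken in numeric order
release_all(held)
```

Because every process climbs the numbering in the same direction, no cycle of waiting processes can form, which defeats the circular-wait condition.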
Figure 9. a) Numerically ordered resources b) A resource graph

9.7 ISSUES
9.7.1: Two-Phase Locking:
Although both avoidance and prevention are not terribly promising in the general case, for specific applications many excellent special-purpose algorithms are known. As an example, in many database systems, an operation that occurs frequently is requesting locks on several records and then updating all the locked records. When multiple processes are running at the same time, there is a real danger of deadlock. The approach often used is called two-phase locking.

9.7.2: Starvation:
As usual, FCFS is a good cure. Often this is done by priority aging and picking the highest-priority process to get the resource. One can also


periodically stop accepting new processes until all old ones get their resources.

A problem closely related to deadlock is starvation. In a dynamic system, requests for resources happen all the time, and some policy is needed to decide who gets which resource when. This policy, although seemingly reasonable, may lead to some processes never getting service even though they are not deadlocked. As an example, consider allocation of the printer. Imagine that the system uses some algorithm to ensure that allocating the printer does not lead to deadlock. Now suppose that several processes all want it at once. Who should get it? One possible allocation algorithm is to give it to the process with the smallest file to print (assuming this information is available). This approach maximizes the number of happy customers and seems fair. Now consider what happens in a busy system when one process has a huge file to print. Every time the printer is free, the system will look around and choose the process with the shortest file. If there is a constant stream of processes with short files, the process with the huge file will never be allocated the printer. It will simply starve to death (be postponed indefinitely, even though it is not blocked).

Problems on Deadlock:

Problem 01:
A system has 3 user processes, each requiring 2 units of resource R. The minimum number of units of R such that no deadlock will occur is:
1. 3
2. 5
3. 4
4. 6

Solution:
In the worst case, the number of units that each process holds is one less than its maximum demand. So:
•Process P1 holds 1 unit of resource R
•Process P2 holds 1 unit of resource R
•Process P3 holds 1 unit of resource R

Thus:
•Maximum number of units of resource R for which deadlock is possible = 1 + 1 + 1 = 3
•Minimum number of units of resource R that ensures no deadlock = 3 + 1 = 4
Hence option 3 is correct.
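The worst-case counting argument generalizes directly: give every process one unit less than its maximum demand; one additional unit then guarantees that some process can finish. A small sketch (the function name is ours):

```python
def min_units_no_deadlock(max_needs):
    # Worst case: every process holds one unit less than it needs.
    # One more unit than that lets at least one process finish,
    # release its units, and unblock the rest.
    return sum(n - 1 for n in max_needs) + 1

print(min_units_no_deadlock([2, 2, 2]))     # 4
print(min_units_no_deadlock([21, 31, 41]))  # 91
```

The same formula answers the similar exercise with maximum demands of 21, 31, and 41 units: (20 + 30 + 40) + 1 = 91.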


9.8 SUMMARY
A deadlock state occurs when two or more processes are waiting indefinitely for an event that can be caused only by one of the waiting processes. There are three principal methods for dealing with deadlocks:
•Use some protocol to prevent or avoid deadlocks, ensuring that the system will never enter a deadlock state.
•Allow the system to enter a deadlock state, detect it, and then recover.
•Ignore the problem altogether and pretend that deadlocks never occur in the system. The third solution is the one used by most operating systems, including UNIX and Windows.

A deadlock can occur only if four necessary conditions hold simultaneously in the system: mutual exclusion, hold and wait, no preemption, and circular wait. To prevent deadlocks, we can ensure that at least one of the necessary conditions never holds.

A method for avoiding deadlocks that is less stringent than the prevention algorithms requires that the operating system have a priori information on how each process will utilize system resources.

9.9 UNIT END QUESTIONS
1) Explain how the system can recover from deadlock using (a) recovery through preemption, (b) recovery through rollback, and (c) recovery through killing processes.
2) Explain deadlock detection and recovery.
3) How can deadlocks be prevented?
4) Explain deadlock prevention techniques in detail.
5) Explain deadlock ignorance.
6) Define deadlock, with suitable examples.
7) Explain deadlock avoidance in detail.
8) Explain other issues in deadlocks.
9) Explain resource acquisition in deadlock.
10) A system has 3 user processes P1, P2 and P3, where P1 requires 21 units of resource R, P2 requires 31 units of resource R, and P3 requires 41 units of resource R. The minimum number of units of R that ensures no deadlock is _____?

*****


UNIT IV

10
VIRTUALIZATION AND CLOUD

Unit Structure
10.0 Objectives
10.1 Introduction
10.1.1 About VMM
10.1.2 Advantages
10.2 Introduction-Cloud
10.3 Requirements for Virtualization
10.4 Type 1 & Type 2 Hypervisors
10.5 Let us sum it up
10.6 List of references
10.7 Bibliography
10.8 Unit End Questions

10.0 OBJECTIVES
The objectives of this chapter are as follows:
i)To make students learn about the different virtualization and cloud technologies.
ii)To learn why there is a need for virtualization in a company or a data centre.
iii)To understand the requirements of virtualization.

10.1 INTRODUCTION TO VIRTUALIZATION & CLOUD
In some situations, an organization needs a multicomputer; for example, a company may have an email server, a Web server, an FTP server, some e-commerce servers, and others. These all run on different computers in the same equipment rack, all connected by a high-speed network. The objective is to gain reliability, because a company cannot rely on a single operating system working 24x7. By putting each service on a separate computer, if one of the servers crashes, at least the other ones are not affected. This is good for security also: even if some malevolent intruder manages to compromise the Web server, he will not immediately have access to sensitive emails. This property is sometimes referred to as sandboxing.


For instance, organizations often depend on more than one operating system for their daily operations: a Web server on Linux, a mail server on Windows, an e-commerce server for customers running on OS X, and a few other services running on various types of UNIX. The obvious solution is to make use of virtual machine technology.

10.1.1 About VMM:
1)The main idea is that a VMM (Virtual Machine Monitor) creates the illusion of multiple (virtual) machines on the same physical hardware.
2)A VMM is also known as a hypervisor.
3)We distinguish between type 1 hypervisors, which run on the bare metal, and type 2 hypervisors, which make use of the services and abstractions offered by an underlying operating system.
4)Either way, virtualization allows a single computer to host multiple virtual machines, each potentially running a completely different operating system.
5)The advantage of this approach is that a failure in one virtual machine does not bring down any others.
6)On a virtualized system, different servers can run on different virtual machines, thus maintaining the partial-failure model that a multicomputer has, but at a lower cost and with easier maintainability.
7)Moreover, we can now run multiple different operating systems on the same hardware and benefit from virtual machine isolation in the face of attacks.
8)With virtual machine technology, the only software running in the highest privilege mode is the hypervisor, which has two orders of magnitude fewer lines of code than a full operating system, and thus two orders of magnitude fewer bugs.

10.1.2 Virtualization has many advantages:
1.A failure in one virtual machine does not bring down any others.
2.We can run multiple different operating systems on the same hardware.
3.Having fewer physical machines saves money.
4.It needs less hardware and electricity and takes up less rack space.
5.It helps in trying out new ideas.
6.Each application can take its own environment with it.
7.Check-pointing and migrating virtual machines is much easier than migrating processes running on a normal operating system.
8.It is easy to migrate from one operating system to another.
9.It helps to run legacy applications which are no longer supported or which do not work on current hardware.
10.It helps in software development.


10.2 CLOUD-INTRODUCTION
1.The key idea of a cloud is simple: outsource your computation or storage needs to a well-managed data center run by a company specializing in this and staffed by experts in the area.
2.Because the data center typically belongs to someone else, you will probably have to pay for the use of the resources, but at least you will not have to worry about the physical machines, power, cooling, and maintenance.
3.Because of the isolation offered by virtualization, cloud providers can allow multiple clients, even competitors, to share a single physical machine.
4.Earlier, organizations were not comfortable sharing their information on the cloud. By now, however, virtualized machines in the cloud are used by countless organizations for countless applications, and while it may not be for all organizations and all data, there is no doubt that cloud computing has been a success.
5.After a lot of research starting in the 1960s, researchers at Stanford University developed a new hypervisor in the 1990s and founded VMware. VMware offers type 1 & type 2 hypervisors.

10.3 REQUIREMENTS OF VIRTUALIZATION
1.It is important that virtual machines act just like the real McCoy (the real thing).
2.In particular, it must be possible to boot them like real machines and install arbitrary operating systems on them, just as can be done on the real hardware.
3.It is the task of the hypervisor to provide this illusion and to do it efficiently.
Every hypervisor is measured on the following three dimensions:
a.Safety: the hypervisor should have full control of the virtualized resources.
b.Fidelity: the behaviour of a program on a virtual machine should be identical to that of the same program running on bare hardware.
c.Efficiency: much of the code in a virtual machine should run without intervention of the hypervisor.
4.An interpreter may be able to execute an INC (increment) instruction as is, but instructions that are not safe to execute directly must be simulated by the interpreter.
5.For instance, we cannot really allow the guest operating system to disable interrupts for the entire machine or modify the page-table mappings.


6.The idea is to make the operating system on top of the hypervisor think that it has disabled interrupts, or changed the machine's page mappings.
7.Every CPU with kernel mode and user mode has a set of instructions that behave differently when executed in kernel mode than when executed in user mode.
8.These include instructions that do I/O, change the MMU settings, and so on.
9.Popek and Goldberg called these sensitive instructions. There is also a set of instructions that cause a trap if executed in user mode.
10.Popek and Goldberg called these privileged instructions. Their paper stated for the first time that a machine is "virtualizable" only if the sensitive instructions are a subset of the privileged instructions.

10.4 TYPE 1 & TYPE 2 HYPERVISORS
1.It is important to mention that not all virtualization technology tries to trick the guest into believing that it has the entire system.
2.Sometimes, the aim is simply to allow a process to run that was originally written for a different operating system and/or architecture.
3.We therefore distinguish between full system virtualization and process-level virtualization.
4.In the year 1972, Goldberg distinguished between two approaches to virtualization.
a)Type 1 Hypervisor: Technically, it is like an operating system, since it is the only program running in the most privileged mode. Its job is to support multiple copies of the actual hardware, called virtual machines, similar to the processes a normal operating system runs.
Figure: Type 1 Hypervisor
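Popek and Goldberg's criterion in point 10 above boils down to a set check: every sensitive instruction must also be privileged (i.e., it must trap in user mode). A toy illustration — the instruction names below are examples, not a real ISA listing:

```python
# Popek & Goldberg (1974): a machine is classically virtualizable only if
# the sensitive instructions are a subset of the privileged (trapping) ones.

def virtualizable(sensitive, privileged):
    """True if every sensitive instruction traps when run in user mode."""
    return set(sensitive) <= set(privileged)

# Illustrative (not exhaustive) instruction names:
privileged = {"LGDT", "LIDT", "MOV_TO_CR3", "HLT"}

print(virtualizable({"MOV_TO_CR3", "HLT"}, privileged))   # True
print(virtualizable({"MOV_TO_CR3", "POPF"}, privileged))  # False
# On pre-VT x86, POPF run in user mode silently ignores the interrupt
# flag instead of trapping -- sensitive but not privileged.
```

This is exactly why the original x86 failed the test and needed hardware extensions (VT) or clever software tricks to virtualize.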
b)Type 2 Hypervisor: A type 2 hypervisor is a different kind of animal. It is a program that relies on, say, Windows or Linux to allocate and schedule resources, very much like a regular process. Of course, the type 2 hypervisor still pretends to be a full computer with a CPU and various devices. Both types of hypervisor must execute the machine's instruction set in a safe manner. For instance, an operating system running on top of the hypervisor may change and even mess up its own page tables, but not those of others.
Figure: Type 2 Hypervisor

5.The operating system running on top of the hypervisor in both cases is called the guest operating system.
6.For a type 2 hypervisor, the operating system running on the hardware is called the host operating system.
7.Type 2 hypervisors, sometimes referred to as hosted hypervisors, depend for much of their functionality on a host operating system such as Windows, Linux, or OS X.
8.When it starts for the first time, it acts like a newly booted computer and expects to find a DVD, USB drive, or CD-ROM containing an operating system in the drive; however, the drive could be a virtual device.

10.5 LET US SUM IT UP
1.A VMM creates the illusion of multiple machines on the same physical hardware.
2.Virtualization gives a range of advantages, from running different operating systems to developing software.
3.Outsourcing is the best option for storing data in the data centre.
4.Type 1 and type 2 are the two categories offered by a VMM to achieve virtualization.
10.6 LIST OF REFERENCES
•Modern Operating Systems, Fourth edition, Andrew S. Tanenbaum, Herbert Bos.
•https://www.geeksforgeeks.org/generations-of-computer/

10.7 BIBLIOGRAPHY
•Operating System Concepts by Galvin

10.8 UNIT END QUESTIONS
1.What is Cloud?
2.Explain Virtualization.
3.Explain types of Hypervisor with neat diagrams.

*****


11
MULTIPROCESSING SYSTEM

Unit Structure
11.0 Objectives
11.1 Pre-requisites
11.2 Techniques for efficient virtualization
11.3 Memory virtualization
11.4 I/O Virtualization
11.5 Virtual appliances
11.6 Let us sum it up
11.7 List of References
11.8 Bibliography
11.9 Unit End Questions

11.0 OBJECTIVES
The objectives of this chapter are as follows:
1.To make students learn about the different virtualization and cloud technologies.
2.To learn the different techniques used for virtualization.
3.To understand memory virtualization as well as I/O virtualization.

11.1 PRE-REQUISITES
1.A VMM creates the illusion of multiple machines on the same physical hardware.
2.Virtualization gives a range of advantages, from running different operating systems to developing software.
3.Outsourcing is the best option for storing data in the data centre.
4.Type 1 and type 2 are the two categories offered by a VMM to achieve virtualization.

11.2 TECHNIQUES FOR EFFICIENT VIRTUALIZATION
1.The type 1 hypervisor runs on the bare metal.
2.The virtual machine runs as a user process in user mode, and as such is not allowed to execute sensitive instructions.


3.However, the virtual machine runs a guest operating system that thinks it is in kernel mode. We will call this virtual kernel mode.
4.The virtual machine also runs user processes, which think they are in user mode.
5.What happens when the guest operating system (which thinks it is in kernel mode) executes an instruction that is allowed only when the CPU really is in kernel mode? Normally, on CPUs without VT, the instruction fails and the operating system crashes.
6.On CPUs with VT, when the guest operating system executes a sensitive instruction, a trap to the hypervisor does occur.
7.Now, let us understand how to migrate a virtual machine from one physical machine to another.
a)The goal is to move the virtual machine to the new machine without taking it down at all.
b)Modern virtualization solutions offer something known as live migration. They move the virtual machine while it is still operational.
c)For instance, they employ techniques like pre-copy memory migration.
d)This means that they copy memory pages while the machine is still serving requests.
e)Most memory pages are not written much, so copying them over is safe.
f)Remember, the virtual machine is still running, so a page may be modified after it has already been copied.
g)When memory pages are modified, we have to make sure that the latest version is copied to the destination, so we mark them as dirty.
h)They will be recopied later. When most memory pages have been copied, we are left with a small number of dirty pages.
i)We now pause very briefly to copy the remaining pages and resume the virtual machine at the new location. While there is still a pause, it is so brief that applications typically are not affected.
j)When the downtime is not noticeable, it is known as a seamless live migration.

11.3 MEMORY VIRTUALIZATION
1.We have discussed the issue of how to virtualize the CPU so far. But a computer system has more than just a CPU.
2.It also has memory and I/O devices. They have to be virtualized, too.
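The pre-copy loop of steps (c)-(i) in Section 11.2 can be sketched as a simplified simulation; the page contents and the shrinking dirty sets below are invented for illustration:

```python
# Simplified simulation of pre-copy live migration, steps (c)-(i) above:
# copy every page while the VM keeps running, re-copy whatever got dirtied
# in the meantime, and pause only once few dirty pages remain.

def pre_copy_migrate(pages, get_dirty, pause_threshold=4):
    """Return (copy rounds used, pages copied during the brief pause)."""
    destination = dict(pages)        # first full pass, VM still running
    rounds = 1
    dirty = get_dirty()              # pages written since the last pass
    while len(dirty) > pause_threshold:
        for p in dirty:              # re-copy the latest version
            destination[p] = pages[p]
        rounds += 1
        dirty = get_dirty()
    for p in dirty:                  # VM is paused only for this short tail
        destination[p] = pages[p]
    return rounds, len(dirty)

# Illustrative run: the dirty set shrinks from 20 to 8 to 3 pages.
pages = {i: f"data{i}" for i in range(64)}
dirty_sets = iter([set(range(20)), set(range(8)), set(range(3))])
print(pre_copy_migrate(pages, lambda: next(dirty_sets)))  # (3, 3)
```

With only 3 pages left to copy during the pause, the downtime is tiny; that is the seamless live migration of step (j).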


a)The boxes represent pages, and the arrows show the different memory mappings.
b)The arrows from guest virtual memory to guest physical memory show the mapping maintained by the page tables in the guest operating system.
c)The arrows from guest physical memory to machine memory show the mapping maintained by the VMM.
d)The dashed arrows show the mapping from guest virtual memory to machine memory in the shadow page tables, also maintained by the VMM.
e)The underlying processor running the virtual machine uses the shadow page table mappings.
4.Modern operating systems nearly all support virtual memory, which is basically a mapping of pages in the virtual address space onto pages of physical memory.
5.This mapping is defined by (multilevel) page tables. Typically, the mapping is set in motion by having the operating system set a control register in the CPU that points to the top-level page table.
6.Virtualization greatly complicates memory management. In fact, it took hardware manufacturers two tries to get it right.

11.4 I/O VIRTUALIZATION
1.The guest operating system will typically start out probing the hardware to find out what kinds of I/O devices are attached. These probes will trap to the hypervisor. The hypervisor can then do one of two things:
2.One approach is for it to report back that the disks, printers, and so on are the ones that the hardware actually has.
i.The guest will then load device drivers for these devices and try to use them.
ii.When the device drivers try to do actual I/O, they will read and write the device's hardware device registers.
iii.These instructions are sensitive and will trap to the hypervisor, which could then copy the needed values to and from the hardware registers, as needed.
iv.But here, too, we have a problem. Each guest OS could think it owns an entire disk partition, and there may be many more virtual machines (hundreds) than there are actual disk partitions.
3.The usual solution is for the hypervisor to create a file or region on the actual disk for each virtual machine's physical disk.
i.Since the guest OS is trying to control a disk that the real hardware has (and which the hypervisor understands), it can convert the block number being accessed into an offset into the file or disk region being used for storage and do the I/O.
ii.It is also possible for the disk that the guest is using to be different from the real one.

11.5 VIRTUAL APPLIANCES
1.Virtual machines offer a solution to a problem that has long plagued users, especially users of open source software: how to install new application programs?
2.The problem is that many applications are dependent on numerous other applications and libraries, which are themselves dependent on a host of other software packages, and so on.
3.Furthermore, there may be dependencies on particular versions of the compilers, scripting languages, and the operating system.
4.With virtual machines now available, a software developer can carefully construct a virtual machine, load it with the required operating system, compilers, libraries, and application code, and freeze the entire unit, ready to run.
5.This virtual machine image can then be put on a CD-ROM or a Website for customers to install or download.
6.This approach means that only the software developer has to understand all the dependencies. The customers get a complete package that actually works, completely independent of which operating system they are running and which other software, packages, and libraries they have installed.


7.These "shrink-wrapped" virtual machines are often called virtual appliances.
8.As an example, Amazon's EC2 cloud has many pre-packaged virtual appliances available for clients, which it offers as convenient software services ("Software as a Service").

11.6 LET US SUM IT UP
1.There are two techniques used to migrate a virtual machine: migrate by pausing the virtual machine, or live migration.
2.Modern operating systems nearly all support virtual memory, which is basically a mapping of pages in the virtual address space onto pages of physical memory.
3.It is the job of the hypervisor to look after the virtualization of I/O.
4.Virtual machines offer a solution to a problem especially with users of open source software: how to install new application programs.

11.7 LIST OF REFERENCES
•Modern Operating Systems, Fourth edition, Andrew S. Tanenbaum, Herbert Bos.
•https://www.geeksforgeeks.org/generations-of-computer/
•Docs.vmware.com

11.8 BIBLIOGRAPHY
•Operating System Concepts by Galvin

11.9 UNIT END QUESTIONS
1.How to migrate a virtual machine quickly?
2.What are virtual appliances?
3.Explain memory virtualization.
4.What is I/O Virtualization?

*****
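As a closing sketch for this unit, the block-number-to-file-offset conversion from Section 11.4 (the hypervisor backing each guest disk with an ordinary file) might look like this; the block size and file handling are illustrative assumptions, not the text's implementation:

```python
import tempfile

# Section 11.4: the hypervisor backs each guest "physical disk" with an
# ordinary file, so a guest block number is just a byte offset in it.

BLOCK_SIZE = 512  # illustrative block size

def read_guest_block(backing_file, block_number):
    """Read one guest disk block out of the VM's backing file."""
    with open(backing_file, "rb") as f:
        f.seek(block_number * BLOCK_SIZE)   # block number -> byte offset
        return f.read(BLOCK_SIZE)

def write_guest_block(backing_file, block_number, data):
    """Write one guest disk block into the VM's backing file."""
    assert len(data) <= BLOCK_SIZE
    with open(backing_file, "r+b") as f:
        f.seek(block_number * BLOCK_SIZE)
        f.write(data)

# Tiny demo on a 4-block backing file:
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.truncate(4 * BLOCK_SIZE)
    backing = tmp.name
write_guest_block(backing, 2, b"guest data")
print(read_guest_block(backing, 2)[:10])  # b'guest data'
```

The guest believes it owns a whole disk; the hypervisor sees only seeks and reads inside one file.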


12
MULTIPLE PROCESSING SYSTEMS

Unit Structure
12.0 Objectives
12.1 Pre-requisites
12.2 Virtual machines on multicore CPUs
12.3 Licensing Issues
12.4 Clouds
12.4.1 Characteristics
12.4.2 Services Offered
12.4.3 Advantages
12.5 Multiple Processor Systems
12.5.1 Multiprocessors
12.5.2 Multi-computers
12.5.3 Distributed Systems
12.6 Let us sum it up
12.7 List of References
12.8 Bibliography
12.9 Unit End Questions

12.0 OBJECTIVES
The objectives of this chapter are as follows:
i)To make students learn about the different virtualization and cloud technologies.
ii)To understand the characteristics and advantages of the cloud.
iii)To learn about the different types of multiprocessor, multicomputer & distributed systems.

12.1 PRE-REQUISITES
1.There are two techniques used to migrate a virtual machine:
a.Migrate by pausing the virtual machine
b.Live migration.
2.Modern operating systems nearly all support virtual memory, which is basically a mapping of pages in the virtual address space onto pages of physical memory.
3.It is the job of the hypervisor to look after the virtualization of I/O.


4.Virtual machines offer a solution to a problem especially with users of open source software: how to install new application programs.

12.2 VIRTUAL MACHINES ON MULTICORE CPUS
1.It has never before been possible for an application designer to first choose how many CPUs he wants and then write the software accordingly.
2.The combination of virtual machines and multicore CPUs creates a whole new world in which the number of CPUs available can be set by the software.
3.This is clearly a new phase in computing. Moreover, virtual machines can share memory.
4.A typical example where this is useful is a single server hosting multiple instances of the same operating system.
5.All that has to be done is map physical pages into the address spaces of multiple virtual machines.
6.Memory sharing is already available in deduplication solutions. Deduplication avoids storing the same data twice.
7.It is a common technique in storage systems, but is now appearing in virtualization as well.
8.In general, the technique revolves around scanning the memory of each of the virtual machines on a host and hashing the memory pages.
9.Should some pages produce an identical hash, the system has to first check to see if they really are the same, and if so, de-duplicate them, creating one page with the actual content and two references to that page. Since the hypervisor controls the nested (or shadow) page tables, this mapping is straightforward.
10.The combination of multicore, virtual machines, hypervisor, and microkernels is going to radically change the way people think about computer systems.
11.Current software cannot deal with the idea of the programmer determining how many CPUs are needed, whether they should be a multicomputer or a multiprocessor, and how minimal kernels of one kind or another fit into the picture.

12.3 LICENSING ISSUES
1.Some software is licensed on a per-CPU basis, especially software for companies. In other words, when they buy a program, they have the right to run it on just one CPU.
2.Does this contract give them the right to run the software on multiple virtual machines all running on the same physical machine? Many software vendors are somewhat unsure of what to do here.
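The deduplication scan of steps 8-9 in Section 12.2 can be sketched as follows; the page data, page identifiers, and the choice of SHA-256 are illustrative:

```python
import hashlib

# Memory deduplication (steps 8-9 above): hash every page, and when two
# pages hash alike, verify byte-for-byte before sharing a single copy.

def deduplicate(pages):
    """Map each page id to the page id that actually backs it."""
    by_hash = {}    # hash -> page ids already kept with that hash
    canonical = {}  # page id -> backing page id
    for pid, data in pages.items():
        h = hashlib.sha256(data).hexdigest()
        for other in by_hash.get(h, []):
            if pages[other] == data:   # hashes can collide: verify contents
                canonical[pid] = other
                break
        else:
            by_hash.setdefault(h, []).append(pid)
            canonical[pid] = pid
    return canonical

vm_pages = {("vm1", 0): b"zeros" * 819, ("vm2", 0): b"zeros" * 819,
            ("vm2", 1): b"boot code"}
backing = deduplicate(vm_pages)
print(backing[("vm2", 0)] == ("vm1", 0))  # True: both VMs share one frame
```

In a real hypervisor the shared frame would be mapped copy-on-write, so a later write by either VM silently splits the pages again.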


3.The problem is much worse in companies that have a license allowing them to have 'n' machines running the software at the same time, especially when virtual machines come and go on demand.
4.In some cases, software vendors have put an explicit clause in the license forbidding the licensee from running the software on a virtual machine or on an unauthorized virtual machine.
5.For companies that run all their software exclusively on virtual machines, this could be a real problem. Whether any of these restrictions will hold up in court and how users respond to them remains to be seen.

12.4 CLOUDS
1.Virtualization technology played a crucial role in the dizzying rise of cloud computing. There are many clouds.
2.Some clouds are public and available to anyone willing to pay for the use of resources; others are private to an organization.
3.Likewise, different clouds offer different things. Some give their users access to physical hardware, but most virtualize their environments.
4.Some offer the bare machines, virtual or not, and nothing more, but others offer software that is ready to use and can be combined in interesting ways, or platforms that make it easy for their users to develop new services.
5.Cloud providers typically offer different categories of resources.

12.4.1 Characteristics of Cloud:
The National Institute of Standards and Technology has listed five essential characteristics:
a.On-demand self-service: Users should be able to provision resources automatically, without requiring human interaction.
b.Broad network access: All these resources should be available over the network via standard mechanisms so that heterogeneous devices can make use of them.
c.Resource pooling: The computing resources owned by the provider should be pooled to serve multiple users, with the ability to assign and reassign resources dynamically.
d.Rapid elasticity: It should be possible to acquire and release resources elastically, perhaps even automatically, to scale immediately with the users' demands.
e.Measured service: The cloud provider meters the resources used in a way that matches the type of service agreed upon.


12.4.2 Services offered by Cloud:
12.4.2.1 Software as a Service (it offers specific software)
12.4.2.2 Platform as a Service (it creates an environment which gives a specific operating system, database, web server, etc.)
12.4.2.3 Infrastructure as a Service (the same cloud can run different operating systems)
We can refer to the diagram below for more understanding:
12.4.3 Advantages of Cloud
Following are the advantages of using the cloud:
1.Unlimited storage: Clouds provide unlimited storage.
2.Flexibility: If your needs increase, it's easy to scale up your cloud capacity. Likewise, if you need to scale down again, you can scale down the cloud capacity.
3.Disaster recovery: Backup and recovery of data is possible.
4.Automatic software updates.
5.Capital-expenditure free: Cloud computing cuts the high cost of hardware. You simply pay as you go on a subscription-based model.
6.Work from anywhere: With an internet connection, you can work from anywhere.
7.Security: Your data is stored in the cloud; you can access it no matter what happens to your machine.

12.5 MULTIPLE PROCESSOR SYSTEMS
12.5.1.1 Each CPU has its own OS:
a.Memory is divided into equal sized partitions, where each partition belongs to one CPU.
b. Each CPU has its own private memory and its own private copy of the operating system.
c. An alternative to this scheme is to allow all the CPUs to share the operating system code and make private copies of only the data structures of the OS.
d. There are four aspects of this design:
i. When a process makes a system call, the system call is caught and handled on its own CPU, using the data structures in that operating system's tables.
ii. Each operating system has its own tables; it also has its own set of processes that it schedules by itself.
iii. There is no sharing of physical pages, so some CPUs are overburdened while others are idle, as there is no load sharing.
iv. There is no additional memory, so programs cannot grow.
12.5.1.2 Master-Slave Multiprocessor:
a. Only one copy of the OS is present in memory.
b. Only the master CPU can run the operating system from memory. So here, only CPU 1 can run the OS and not any others.
c. All system calls from other CPUs are redirected to CPU 1 for processing there.
d. CPU 1 is the master and all the others are slaves.
e. When a CPU goes idle, it asks the operating system on CPU 1 for a process to run and is assigned one.
f. Thus it can never happen that one CPU is idle while another is overloaded.
g. Similarly, pages can be allocated among all the processes dynamically, and there is only one buffer cache, so inconsistencies never occur.
h. The problem with this model is that with many CPUs, the master will become a bottleneck.
12.5.1.3 Symmetric Multiprocessor:
a. It eliminates the asymmetry of the Master-Slave configuration.
b. There is one copy of the operating system in memory, but any CPU can run it.
c. It eliminates the master-CPU bottleneck, since there is no master.
d. There is no need to redirect system calls to one CPU, as each CPU can run the OS.
e. While running a process, the CPU on which the system call was made processes the system call.
12.5.2 Multi-computers:
a. Following are the various inter-connection technologies used in multi-computers:
b. Single Switch / Star topology: Every node contains a network interface card, and all computers are connected to switches/hubs. Fast and expandable, but a single-point-of-failure system: a failure in the switch/hub can take down the entire system.
c. Ring topology: Each node has two wires coming out of the network interface card, one going into the node on the left and one going into the node on the right. There is no use of switches in this topology.
d. Grid/mesh topology: A two-dimensional design with multiple switches, which can be expanded easily to large sizes. Its diameter is the longest path between any two nodes.
e. Double torus: An alternative to the grid, which is a grid with the edges connected. Compared to the grid its diameter is less and it is more fault tolerant. The diameter is less because opposite corners communicate in only two hops.
f. Cube: Fig. (e) shows a 2 x 2 x 2 cube, which is a regular three-dimensional topology. In the general case it could be an n-dimensional cube, with 2^n nodes; so the 3-D cube has 8 nodes.
g. A 4-D Hypercube: Fig. (f) shows a four-dimensional cube constructed from two three-dimensional cubes with the equivalent nodes connected. An n-dimensional cube formed this way is called a hypercube. Many parallel computers are built using the hypercube topology.

12.5.3 Distributed Systems:
a. A distributed system is defined as a set of autonomous computers that appears to its users as a single coherent system.
b. Users of a distributed system feel that they are working with a single system.
c. A distributed system is like a multi-computer spread worldwide.
d. Each node in a distributed system has its own CPU, RAM, network board, OS, and disk for paging.
e. Following are the main characteristics of distributed systems:
i. A distributed system comprises computers with different architectures and different operating systems. These dissimilarities, and the ways all these machines communicate, are hidden from users.
ii. The manner in which a distributed system is organized is also hidden from its users.
iii. Users and applications interact with a distributed system in a consistent and identical way.
iv. It should always be available to users and applications in spite of failures; failure handling should be hidden from users and applications.
12.6 LET US SUM IT UP

Item | Multiprocessor | Multicomputer | Distributed System
Node configuration | CPU | CPU, RAM, net interface | Complete computer
Node peripherals | All shared | Shared, exc. maybe disk | Full set per node
Location | Same rack | Same room | Possibly worldwide
Internode communication | Shared RAM | Dedicated interconnect | Traditional network
Operating systems | One, shared | Multiple, same | Possibly all different
File systems | One, shared | One, shared | Each node has own
Administration | One organization | One organization | Many organizations
12.7 LIST OF REFERENCES
• Modern Operating Systems, Fourth edition, Andrew S. Tanenbaum, Herbert Bos.
• https://www.geeksforgeeks.org/generations-of-computer/

12.8 BIBLIOGRAPHY
• Modern Operating System by Galvin

12.9 UNIT END QUESTIONS
1. List and explain the different types of multiprocessor operating systems.
2. With a neat diagram, explain the various interconnection technologies used in multicomputers.
3. Define and explain distributed systems with a neat diagram.
4. Differentiate between Multiprocessor, Multicomputer and Distributed Systems.
5. What are the services and advantages of Cloud computing?
6. What is cloud? Write the essential characteristics of cloud.

*****
13
LINUX CASE STUDY

Unit Structure
13.0 Objectives
13.1 History
13.1.1 History of UNIX
13.1.2 History of Linux
13.2 Overview
13.2.1 An Overview of UNIX
13.2.2 Overview of Linux
13.3 Process in Linux
13.4 Memory Management
13.5 Input Output in Linux
13.6 Linux File System
13.7 Security in Linux
13.8 Summary
13.9 List of References
13.10 Bibliography
13.11 Unit End Questions

13.0 OBJECTIVES
• To understand the principles of Linux
• To learn the principles of Process Management and Memory Management
• To learn the principles of I/O in Linux, the File System and Security

13.1 HISTORY OF UNIX AND LINUX
13.1.1 History of UNIX:
I. The UNIX operating system is derived from MULTICS (Multiplexed Information and Computing Service), which was begun in the mid 1960s.
II. In 1969, Ken Thompson wrote the first version of UNIX, called UNICS. It stands for Uniplexed Operating and Computing System.
III. In 1973, Ken Thompson teamed up with Dennis Ritchie and rewrote the Unix kernel in C.
IV. Ken Thompson spent a year's sabbatical at the University of California at Berkeley. While there, he and two graduate students, Bill Joy and Chuck Haley, wrote the first Berkeley version of Unix, which was distributed to students.
V. This resulted in the source code being worked on and developed by many different people.
VI. The Berkeley version of Unix is known as BSD, the Berkeley Software Distribution. From BSD came the vi editor, the C shell, virtual memory, Sendmail, and support for TCP/IP.
VII. For several years SVR4 was the more conservative, commercial, and well supported version.
VIII. Today SVR4 and BSD look very much alike. Probably the biggest cosmetic difference between them is the way the ps command functions.
IX. The Linux operating system was developed as a Unix look-alike and has a user command interface that resembles SVR4.
The following figure shows this history in a better way.
13.1.2 History of Linux:
Linux is a family of open source, Unix-like operating systems based on the Linux kernel, an operating system kernel first released on September 17, 1991, by Linus Torvalds. Linux is usually packaged in a Linux distribution.
Popular Linux distributions include Debian, Fedora, and Ubuntu. Commercial distributions include Red Hat Enterprise Linux
and SUSE Linux Enterprise Server. Because Linux is distributed freely, anyone can create a distribution for any purpose.
Linux was originally designed for computers based on the Intel x86 architecture, but has since been ported to more platforms than any other operating system. Linux is a leading operating system on servers and other large-scale systems such as mainframe computers.
The Unix operating system was developed in 1969 at AT&T Bell Laboratories in America by Ken Thompson and Dennis Ritchie. Unix's implementation in a high-level language made it easy to deploy across different computer platforms.

Creation:
In 1991, Torvalds became interested in operating systems. Torvalds introduced a switch from his original license, which prohibited commercial redistribution, to the GNU GPL. The developers worked to integrate the GNU components into the Linux kernel, creating a fully functional and free operating system.

Commercial and public reproduction:
Today Linux systems are used throughout computing, from embedded systems to almost all supercomputers, and in server installations such as the very popular LAMP application stack. The use of Linux distributions on home and enterprise desktops is growing. Linux is also popular in the netbook market; many devices ship with customized Linux distributions, and Google has released its own Chrome OS designed for netbooks.

13.2 OVERVIEW
13.2.1 An Overview of UNIX:
The UNIX operating system is designed to allow many programmers to simultaneously access the computer and share its resources.
The operating system controls all commands from all keyboards and all data generated, and allows each user to believe that he or she is the only person working on the computer.
The real-time sharing of resources makes UNIX one of the most powerful operating systems ever.
UNIX was developed by programmers for a community of programmers, and the functionality it provides is so powerful and flexible that it can be found in business, science, academia, and industry.
The uniqueness of UNIX and the features provided by UNIX are:

Multitasking capability:
Many computers can only do one thing at a time, and anyone with a PC or laptop can prove it. While opening the browser and opening the word processing program, try to log in to the company's network. When arranging multiple instructions, the processor may freeze for a few seconds.

Multiuser capability:
The same design that allows multitasking allows multiple users to use the computer. The computer can accept many user commands (depending on the design of the computer) to run programs, access files and print documents at the same time.

UNIX programs:
UNIX tools: Hundreds of programs come with UNIX, and these programs can be divided into two categories: integrated utilities essential for computer operation, such as command interpreters, and tools that are not required for UNIX operation but provide users with additional functions, such as typesetting functions and e-mail.

Library of application software:
In addition to the applications that come with UNIX, hundreds of UNIX applications can be purchased from third-party vendors. Although third-party vendors have written some tools for specific applications, there are hundreds of tools available for UNIX users. Generally, tools are divided into categories for certain functions (such as word processing, business applications, or programming).

13.2.2 Overview of Linux:
Linux is a UNIX-like computer OS which is assembled and made under the model of free and open source software development and distribution. The most defining component of Linux is the Linux kernel, an OS kernel first released in 1991 by Linus Torvalds.
A Linux-based system is a modular Unix-like operating system. It derives much of its basic design from principles established in Unix during the 1970s and 1980s. Such a system uses a monolithic kernel, the Linux kernel, which handles process control, networking, and peripheral and file system access. Device drivers are integrated directly with the kernel, or are added as modules loaded while the system is running.
• A bootloader, for example GRUB or LILO. This is a program which is executed by the computer when it is first turned on, and loads the Linux kernel into memory.
• An init program. This is a process launched by the Linux kernel, and is at the root of the process tree; in other words, all processes are launched through init. It starts processes such as system services and login prompts (whether graphical or in terminal mode).
• Software libraries, which contain code which can be used by running processes. On Linux systems using ELF-format executable files, the dynamic linker which manages use of dynamic libraries is "ld-linux.so".
• The most commonly used software library on Linux systems is the GNU C Library. If the OS is set up for the user to compile software themselves, header files will also be included to describe the interface of installed libraries.
• User interface programs such as command shells or windowing environments.
Linux is a widely ported operating system kernel. Currently most distributions include a graphical user environment, the two most popular environments being GNOME (which utilizes additional shells such as the default GNOME Shell and Ubuntu Unity) and the KDE Plasma Desktop.

13.3 PROCESS IN LINUX
A Linux-based system is a modular Unix-like OS. It derives much of its basic design from principles established in Unix during the 1970s and 1980s. Such a system uses a monolithic kernel, the Linux kernel, which handles process control, networking, and peripheral and file system access.
Device drivers are either integrated directly with the kernel or added as modules loaded while the system is running.
UNIX and Unix-like operating systems (such as Linux) consist of a kernel and some system programs. There are also some application programs for doing work. The kernel is the heart of the operating system. In fact, it is often mistakenly considered to be the operating system itself, but it is not. An operating system provides many more services than a bare kernel. It keeps track of files on the disk, starts programs and runs them concurrently, assigns memory and other resources to various processes, receives packets from and sends packets to the network, and so on. The kernel does little by itself, but it provides tools with which all services can be built. It also prevents anyone from accessing the hardware directly, forcing everyone to use the tools it provides. This way the kernel provides some protection for users from one another. The tools provided by the kernel are used via system calls.
The system programs use the tools provided by the kernel to implement the various services required from an operating system. System programs, and all other programs, run 'on top of the kernel', in what is called user mode. The difference between system and application programs is one of intent: applications are intended for getting useful things done (or for enjoying, if it happens to be a game), whereas system programs are needed to get the system working. A word processor is an application; mount is a system program. The difference is often somewhat blurry, however, and is important only to compulsive categorizers.
An operating system can also contain compilers and their corresponding libraries (GCC and the C library in particular under Linux), although not all programming languages need be part of the operating system. Documentation, and sometimes even games, can also be part of it. Traditionally, the operating system has been defined by the contents of the installation tape or disks; with Linux it is not as clear, since it is spread all over the FTP sites of the world.

Important parts of the kernel:
The Linux kernel consists of several important parts: process management, memory management, hardware device drivers, filesystem drivers, network management, and various other bits and pieces. Memory management takes care of assigning memory areas and swap file areas to processes, parts of the kernel, and the buffer cache. Process management creates processes, and implements multitasking by switching the active process on the processor.
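Process creation, mentioned above, can be observed directly from user space. The following is a minimal sketch (Python on a POSIX system; not part of the original text) of the fork() system call, through which the kernel's process management duplicates the calling process:

```python
import os

# fork() asks the kernel to duplicate this process.
# It returns 0 in the child and the child's PID in the parent.
pid = os.fork()

if pid == 0:
    # Child: a separate process with its own copy of memory.
    print("child : pid", os.getpid(), "parent", os.getppid())
    os._exit(0)  # exit immediately, without running the parent's code below
else:
    # Parent: wait for the child so it does not linger as a zombie.
    os.waitpid(pid, 0)
    print("parent: created child", pid)
```

In a real shell this fork() is typically followed in the child by an exec-family call that replaces the child's memory image with a new program.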
At the lowest level, the kernel contains a hardware device driver for each kind of hardware it supports. There are often many otherwise similar pieces of hardware that differ in how they are controlled by software. The similarities make it possible to have general classes of drivers that support similar operations; each member of the class has the same interface to the rest of the kernel but differs in what it needs to do to implement them. For example, all disk drivers look alike to the rest of the kernel, i.e., they all have operations like 'initialize the drive', 'read sector N', and 'write sector N'.
Some software services provided by the kernel itself have similar properties, and can therefore be abstracted into classes. For example, the various network protocols are abstracted into one programming interface, the BSD socket library. Another example is the virtual filesystem (VFS) layer that abstracts the filesystem operations away from their implementation. Each filesystem type provides an implementation of each filesystem operation. When some entity tries to use a filesystem, the request goes via the VFS, which routes the request to the correct filesystem driver.
Major services in a UNIX system:

Init:
The single most important service in a UNIX system is provided by init. Init is started as the first process of every UNIX system, as the last thing the kernel does when it boots. When init starts, it continues the boot process by doing various start-up chores (checking and mounting filesystems, starting daemons, etc.). When the system is shut down, it is init that is responsible for killing all other processes, unmounting all filesystems and stopping the processor, along with anything else it has been configured to do.

Syslog:
The kernel and many system programs produce error, warning, and other messages. It is often important that these messages can be viewed later, even much later, so they should be written to a file. The program doing this is syslog. It can be configured to sort the messages into different files according to writer or degree of importance. For example, kernel messages are often directed to a separate file from the others, since kernel messages are often more important and need to be read regularly to spot problems.

Cron:
Both users and system administrators often need to run commands periodically. For example, the system administrator might want to run a command to clean the directories with temporary files (/tmp and /var/tmp) of old files, to keep the disks from filling up, since not all programs clean up after themselves correctly. The cron service is set up to do this. Each user can have a crontab file, where she lists the commands she wishes to execute and the times they should be executed. The cron daemon takes care of starting the commands when specified.

Graphical interface:
This arrangement makes the system more flexible, but has the disadvantage that it is simple to implement a different interface for each program, making the system harder to learn. The graphical environment primarily used with Linux is called the X Window System (X for short). Some popular window managers are: fvwm, icewm, blackbox, and windowmaker.
There are also two popular desktop environments, KDE and GNOME.

Networking:
Networking is the act of connecting two or more computers so that they can communicate with one another. The actual methods of connecting and communicating are slightly complicated, but the end result is very useful.
UNIX operating systems have many networking features. Most basic services (filesystems, printing, backups, etc.) can be done over the network.

Network logins:
Network logins work a little differently than normal logins. For each person logging in via the network there is a separate virtual network connection, and there can be any number of these, depending on the available bandwidth. It is therefore impossible to run a separate getty for each possible virtual connection.

13.4 MEMORY MANAGEMENT
Memory management is the process of managing the computer's memory. The goal is to keep track of which parts of memory are in use and which parts are not, to allocate memory to processes when they need it, and to de-allocate it when they are done.
The UNIX operating system works with two memory management schemes. These are as follows:
1. swapping
2. demand paging

Non-Contiguous Memory Allocation Techniques are:
1. Paging:
It is a storage mechanism that allows the OS to fetch processes from non-volatile memory into volatile memory in the form of pages. The partitions of the process are called pages, and volatile memory is divided into small fixed-size blocks of physical memory, which are called frames.

Example: Consider a process divided into 4 pages A0, A1, A2 and A3. Depending upon availability, these pages may be stored in the main memory frames as shown below:

Main Memory
A2
A3
A0
A1
2. Segmentation:
A process is divided into divisions called segments. These segments need not be of the same size. There are two types of segmentation:
1. Virtual memory segmentation: Every process is divided into a number of segments, which do not all reside in memory at any one point in time.
2. Simple segmentation: Every process is divided into a number of segments, all of which are loaded at run time.

Segment Table:
A table which stores the information about every segment of the process. It has two columns: column 1 gives the size or length of the segment, and column 2 gives the base address. Consider the segment table below:

Limit | Base
1500 | 1500
1000 | 4700
500 | 4500

According to the above table, the segments are stored in the main memory as:

Main Memory
Segment-0
Segment-3
Segment-2

The advantages of segmentation are:
• The segment table takes less space compared to the page table in paging.
• It solves the problem of internal fragmentation.

The disadvantages of segmentation are:
• Unequal sizes are a problem for swapping.
• Though it solves internal fragmentation, it does suffer from external fragmentation.
Paging vs Segmentation:

Paging | Segmentation
Paging divides the program into fixed-size pages. | Segmentation divides the program into variable-size segments.
The operating system is responsible. | The compiler is responsible.
Faster than segmentation. | Slower than paging.
It is closer to the operating system. | Segmentation is closer to the user.

Memory Management: Demand Paging:
Deciding which pages need to be kept in main memory and which need to be kept in secondary memory is difficult, because we cannot say in advance which page a process will require at a particular moment in time. To overcome this problem, there is a concept called demand paging. It advises keeping all pages in secondary memory until they are required; in other words, do not load a page into main memory until it is required. Whenever a page is referenced for the first time, it is brought from secondary memory into main memory.

Page fault:
If the referenced page is not available in main memory, this is called a page miss or page fault. The CPU then has to fetch the missing page from secondary memory. If the number of page faults is very high, the effective access time of the system becomes very high.

Thrashing:
If the number of page faults equals the number of referenced pages, or the number of page faults is so high that the CPU can do little but read pages from secondary memory, then the effective access time approaches the time taken to read one word from secondary memory, which is very high. This is called thrashing. So, if the page fault rate is p, the time taken in servicing a page fault and restarting is S, and the memory access time is m, then the effective access time can be given as:

EAT = p x S + (1 - p) x m

Page Replacement:
The page replacement algorithm tells us which memory page is to be replaced.
This moment of replacement is sometimes called a swap-out, or a write to disk. Page replacement has to be done when the requested page is not found in main memory (a page fault).
There are two main parts to virtual memory management: frame allocation and page replacement. Frame allocation is about how many frames are allocated to a process, while page replacement is about determining which page is to be replaced in order to make space for the requested page.

What if the algorithm is not optimal?
1. If the OS assigns too many frames to a process, there can be internal fragmentation; with too few frames, many of the process's pages cannot stay in main memory and more page faults occur.
2. If the page replacement algorithm is not optimal, it can also lead to thrashing. If the pages that were just replaced are referenced again in the near future, there will be many swap-outs and swap-ins, and the OS has to perform more replacements than usual, which causes a performance shortfall. So, the task of an optimal page replacement algorithm is to pick the page which can restrict thrashing.

Types of Page Replacement Algorithms:
There are various types of page replacement algorithms. Each algorithm has a different way of deciding which page to replace.
1. Optimal page replacement algorithm → This algorithm replaces the page which will not be referenced for the longest time in the future. It cannot be practically implemented, but it can definitely be used as a benchmark; other algorithms are compared to it in terms of optimality.
2. Least recently used (LRU) page replacement algorithm → This algorithm replaces the page which has not been referenced for the longest time. It is an approximation of the optimal algorithm: we look at the past instead of at the future.
3. FIFO → In this algorithm, a queue is maintained. The page which was assigned a frame first will be replaced first. In other words, the page at the front of the queue will be replaced on every page fault.

13.5 INPUT OUTPUT IN LINUX
The Linux operating system considers and works with the devices below in the same way we open and close a file.
• Block devices (hard disks, compact discs, floppies, flash memory)
• Serial devices (mouse, keyboard)
• Network devices
A user can do operations on these devices, as he does operations on a file. I/O redirection allows you to alter the input source of a command, as well as where its output and error messages are sent. This is possible with the '<' and '>' redirection operators.
The main advantage of block devices is the fact that they can be read randomly, whereas serial devices are operated serially.
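The FIFO and LRU policies just described can be compared on a small reference string. The following is a self-contained Python sketch, not from the original text; the reference string is the classic one used to illustrate Belady's anomaly:

```python
from collections import OrderedDict, deque

def fifo_faults(refs, frames):
    """Count page faults under FIFO replacement."""
    queue, resident, faults = deque(), set(), 0
    for page in refs:
        if page not in resident:
            faults += 1
            if len(resident) == frames:          # evict the oldest arrival
                resident.discard(queue.popleft())
            resident.add(page)
            queue.append(page)
    return faults

def lru_faults(refs, frames):
    """Count page faults under LRU replacement."""
    recent, faults = OrderedDict(), 0
    for page in refs:
        if page in recent:
            recent.move_to_end(page)             # page just used again
        else:
            faults += 1
            if len(recent) == frames:            # evict least recently used
                recent.popitem(last=False)
            recent[page] = True
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))   # -> 9
print(lru_faults(refs, 3))    # -> 10
print(fifo_faults(refs, 4))   # -> 10: more frames yet more faults (Belady's anomaly)
```

On this string FIFO happens to beat LRU with 3 frames, but note the last line: giving FIFO a fourth frame actually increases its fault count, an anomaly LRU never exhibits.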
Another advantage of using block devices is that they allow access to random locations on the device. Also, data from the device is read with a fixed block size. Input and output to block devices works on the "elevator algorithm": it works on the same principle as an elevator would.

Mechanical devices like hard disks are very slow in nature when it comes to data input and output, compared to system memory (RAM) and the processor. Sometimes applications have to wait for their input and output requests to complete, because different applications are queued waiting for their input output operations to complete.

The slowest part of any Linux system (or any other operating system) is the disk I/O system. There is a large difference between the speed and the duration taken to complete an input/output request by the CPU, RAM and hard disk. Sometimes, if one of the processes running on your system does a lot of read/write operations on the disk, there will be an intense lag or slow response from other processes, since they are all waiting for their respective I/O operations to get completed.

Linux I/O Commands:

Output Redirection:

The output from a command normally intended for standard output can easily be diverted to a file instead; this capability is known as output redirection. For example:

who > users

Notice that no output appears at the terminal. This is because the output has been redirected from the default standard output device (the terminal) into the required file.

If a command has its output redirected to a file and the file already contains some data, that data is going to be lost.

The >> operator can be used to append the output to an existing file as follows:
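A short session makes the two operators concrete (the file name out.txt is arbitrary, chosen only for the example):

```shell
# '>' truncates the file before writing, so any earlier contents are lost
echo "first line" > out.txt
# '>>' appends to the end of the file instead of overwriting it
echo "second line" >> out.txt
cat out.txt
```

The final cat prints both lines, because the second echo appended rather than overwrote.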
Input Redirection:

The commands that normally take their input from standard input can have their input redirected from a file in this manner. For example, to count the number of lines in the file users generated above, you can execute the command as follows:

wc -l users

Upon execution, the command prints the line count followed by the name of the file. You can also count the number of lines in the file by redirecting the standard input of the wc command from the file users:

wc -l < users

Note that there is a difference in the output produced by the two forms of the wc command. In the first case, the name of the file users is listed with the line count; in the second case, it is not.

In the first case, wc knows that it is reading its input from the file users. In the second case, it only knows that it is reading its input from standard input, so it does not display the file name.

Redirection Commands:

Following is a complete list of commands which you can use for redirection:

1. pgm > file – Output of pgm is redirected to file.
2. pgm < file – Program pgm reads its input from file.
3. pgm >> file – Output of pgm is appended to file.
4. n > file – Output from the stream with descriptor n is redirected to file.
5. n >> file – Output from the stream with descriptor n is appended to file.
6. n >& m – Merges output from stream n with stream m.
7. n <& m – Merges input from stream n with stream m.
8. << tag – Standard input comes from here, through to the next tag at the start of a line (a "here document").
9. | – Takes the output from one program, or process, and sends it to another (a pipe).

Note that the file descriptor 0 is normally standard input (STDIN), 1 is standard output (STDOUT), and 2 is standard error output (STDERR).

13.6 LINUX FILE SYSTEM

The Linux file system, or any file system generally, is a layer under the operating system that handles the positioning of your data on the storage; without it, the system would not know where a file starts and ends. You may also come across file system types that are not supported by a given system.

Linux File System Directories:

/bin: Where the core commands of Linux exist, for example ls, mv.
/boot: Where the boot loader and boot files are located.
/dev: Where physical drives like USBs and DVDs are mounted.
/etc: Contains the configurations for the installed packages.
/home: Here personal folders are allotted to the users to store their files, under a folder with the user's name, like /home/likegeeks.
/lib: Here the libraries of the installed packages are located. You may find duplicates in different folders, since libraries are shared among all packages, unlike Windows.
/media: Here external devices like DVDs and USB sticks are mounted, and you can access their files here.
/root: The home folder for the root user.
/sbin: Similar to /bin, but the difference is that the binaries here are for the root user only.
/tmp: Contains the temporary files.
/usr: Where the utilities and files shared between the users of Linux are located.
/var: Contains system logs and other variable data.

Linux File System Types:

Following are the Linux file system types:

Ext: An older file system which is not used anymore due to its limitations.
Ext2: The first Linux file system, which allows up to 2 terabytes of data.
Ext3: Derived from Ext2; it is more upgraded and has backward compatibility.
Ext4: Quite a bit faster, and allows larger files to be operated on with significant speed.
JFS: An old file system made by IBM. It works very well with small and big files, but files can get corrupted when it is used for a long time.
XFS: An old file system which works slowly with small files.
Btrfs: Made by Oracle. It is not as stable as Ext in some distros, but you can say that it is a replacement for it if you need one. It has good performance.

Working of the file system in Linux:

The Linux file system unifies all physical hard drives and partitions into a single directory structure. It starts at the top: the root directory.

Storing files in Linux:

In Linux, as in MS-DOS and Microsoft Windows, programs are stored in files. A program can be launched by simply typing its filename. However, this assumes that the file is stored in one of a series of directories known as the path. A directory included in this series is said to be on the path.

Linux File Commands:

1. pwd – This command displays the present working directory you are currently in.
2. ls – This command will list the contents of the present directory.

3. ls -l – This command is used to show a formatted listing of files and directories.

4. ls -la – This command will list all the contents of the present directory, including the hidden files and directories.
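As a quick illustration of the difference (the directory and file names here are made up for the example):

```shell
mkdir demo && cd demo
touch visible.txt .hidden.txt
ls          # shows only visible.txt
ls -la      # long listing that also includes ., .. and .hidden.txt
```

Plain ls hides any entry whose name begins with a dot; the -a flag reveals them, and -l adds permissions, owner, size and timestamp columns.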
5. mkdir – This command will create a new directory.

6. rmdir – This command will delete the specified directory, provided it is empty.

7. cd – This command is used to change the directory.

8. cd / – This command takes us to the root directory.

9. cd .. – This command takes us one level up the directory tree.

10. rm filename – This command deletes the specified file.

11. rm -r directoryname – This command deletes the specified directory along with its contents.
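A small session tying these commands together (the names mydir and notes.txt are arbitrary); note that rmdir refuses a non-empty directory, while rm -r removes it together with its contents:

```shell
mkdir mydir
rmdir mydir                   # succeeds: mydir is empty
mkdir mydir
touch mydir/notes.txt
rmdir mydir 2>/dev/null || echo "rmdir failed: directory not empty"
rm -r mydir                   # deletes the directory together with its contents
```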
12. cp file1 file2 – This command copies the contents of file file1 into file file2.

13. mv – This command is used to rename files and directories.

14. cat > filename – This command is used to create a file and open it in write mode.

15. cat filename – This command is used to display the contents of a file.

Linux Commands:

Linux is an open-source free OS. It supports all administrative tasks through the terminal. This also includes file manipulation, package installation and user management.

File Commands:
• ls = Listing the entire directory
• ls -al = Show a formatted listing including hidden files
• ls -lt = Sort the formatted listing by time modified
• cd dir = To change to the directory dir
• cd = Shift to the home directory
• pwd = To see which directory the user is working in
• mkdir dir = Creating a directory to work in
• cat > file = Places the standard input into the file
• more file = Shows the output of the contents of the file
• head file = Shows the output of the first 10 lines of the file
• tail file = Shows the output of the last 10 lines of the file
• tail -f file = Shows the output of the contents of the file as it grows, starting with the last 10 lines
• touch file = Used to create or update a file
• rm file = For deleting a file
• rm -r dir = For deleting an entire directory
• rm -f file = This will force remove the file
• rm -rf dir = This will force remove a directory
• cp file1 file2 = It'll copy the contents of file1 to file2
• cp -r dir1 dir2 = It'll copy the contents of dir1 to dir2; it also creates the directory if not present
• mv file1 file2 = It'll rename or move file1 to file2; if file2 is an existing directory, file1 is moved into it
• ln -s file link = It creates a symbolic link to a file

Process Management:
• ps = It displays the currently working processes
• top = It displays all running processes
• kill pid = It'll kill the given process (as per the specific PID)
• killall proc = It kills all the processes named proc
• pkill pattern = It will kill all processes matching the pattern given
• bg = It lists stopped or background jobs, and resumes a stopped job in the background
• fg = It brings the most recent job to the foreground

13.7 SECURITY IN LINUX

13.7.1 Security features:

A minimal set of security features is provided by the kernel.

Discretionary access control: Authentication is performed outside the kernel by user-level applications such as login. This allows system administrators to redefine access control policies, customize the way Linux authenticates users, and specify the encryption algorithms that protect system resources.

13.7.2 Authentication:

Default authentication: The user enters a username and password via login. Passwords are hashed (using MD5 or DES); the hash cannot be reversed, and it is stored in /etc/passwd or /etc/shadow. Pluggable authentication modules (PAMs) can reconfigure the system at run time to include enhanced authentication techniques, for example: disallow terms found in a dictionary and require users to choose new passwords regularly. PAM supports smart cards, Kerberos and voice authentication.

13.7.3 Access Control Methods:

Access control attributes specify file permissions and file attributes.

File permissions: A combination of read, write and/or execute permissions, specified for three categories: user, group and other.

File attributes: An additional security mechanism, supported by some file systems, that allows users to specify constraints on file access beyond read, write and execute. Examples: append-only, immutable.

The Linux security module (LSM) is a framework that allows a system administrator to customize the access control policy using loadable kernel modules. The kernel uses hooks inside the access control verification code to allow an LSM to enforce its access control policy.
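Returning to the user/group/other permission categories above, they can be inspected and set from the shell; a brief sketch (the file name secret.txt is arbitrary):

```shell
touch secret.txt
chmod 640 secret.txt          # user: read+write, group: read, other: none
ls -l secret.txt              # permission string reads -rw-r-----
```

The octal digits 6, 4 and 0 encode the read (4), write (2) and execute (1) bits for user, group and other respectively.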
Example: SELinux, which is developed by the NSA. It replaces Linux's default discretionary access control policy with a mandatory access control (MAC) policy.

Privilege inheritance: Normally a process executes with the same privileges as the user who launched it. Some applications require a process to execute with other users' privileges.

Example: passwd. The setuid and setgid bits allow a process to run with the privileges of the file owner. Improper use of setuid and setgid can lead to security breaches. Capabilities, supported through LSM, allow an administrator to assign privileges to applications as opposed to users, to prevent this situation.

13.8 SUMMARY

• Linux is an open source family of Unix-like operating systems based on the Linux kernel, which was first released on September 17, 1991, by Linus Torvalds. The kernel is usually shipped as part of a Linux distribution.
• Many of the software programs, utilities and games available on Linux are freeware or open source. Even complex programs such as GIMP, OpenOffice and StarOffice are available for free or at a low cost.
• A GUI makes the system more flexible, but has the disadvantage that it is simple to implement a special interface for every program, making the system harder to learn.
• The Linux kernel consists of several important parts: process management, memory management, hardware device drivers, filesystem drivers, network management, and various other bits and pieces.
• Memory management takes care of assigning memory areas and swap file areas to processes, parts of the kernel, and the buffer cache. Process management creates processes, and implements multitasking by switching the active process on the processor.
• The Linux file system, or any file system generally, is a layer under the operating system that handles the positioning of your data on the storage.
• The Linux file system unifies all physical hard drives and partitions into a single directory structure. It starts at the top: the root directory.
• The kernel provides a minimal set of security features.

13.9 LIST OF REFERENCES

1. Modern Operating Systems, Andrew S. Tanenbaum, Herbert Bos, Pearson, 4th edition, 2014.
2. Operating Systems – Internals and Design Principles, William Stallings, Pearson, 8th edition, 2009.
3. Operating System Concepts, Abraham Silberschatz, Peter B. Galvin, Greg Gagne, Wiley, 8th edition.
4. Operating Systems, Godbole and Kahate, McGraw Hill, 3rd edition.

13.10 BIBLIOGRAPHY

https://www.tutorialspoint.com
https://www.geeksforgeeks.org
https://www.javatpoint.com
https://guru99.com
www.slideshare.net

13.11 UNIT END QUESTIONS

1. Explain the architecture of Linux.
2. Explain memory management in Linux.
3. Write a short note on process management in Linux.
4. Write a short note on security in Linux.

*****
14

ANDROID CASE STUDY

Unit Structure
14.0 Objectives
14.1 Android History
14.2 Android Overview
14.2.1 Features
14.2.2 Android Architecture
14.3 Android Programming
14.4 Process
14.4.1 Introduction
14.4.2 Process in the application
14.4.3 Structure of process
14.4.4 States of process
14.4.5 Process lifecycle
14.4.6 Interprocess communication
14.5 Android memory management
14.5.1 Introduction
14.5.2 Garbage collection
14.5.3 How to improve memory usage
14.5.4 How to avoid memory leaks
14.5.5 Allocate and reclaim app memory
14.6 File system
14.6.1 Flash memory – Android OS file system
14.6.2 Media-based Android file system
14.6.3 Pseudo file systems
14.6.4 Android / Android application file structure
14.6.5 Files in Android Studio, explained below
14.7 Security in Android
14.7.1 Authentication
14.7.2 Biometrics
14.7.3 Encryption
14.7.4 Keystore
14.7.5 Trusty Trusted Execution Environment (TEE)
14.7.6 Verified boot
14.8 Summary
14.9 List of References
14.10 Bibliography
14.11 Unit End Questions

14.0 OBJECTIVES

• To understand the principles of Android
• To learn the principles of processes and memory management
• To learn the principles of the file system and security

14.1 ANDROID HISTORY

Android OS is developed by a consortium of developers known as the Open Handset Alliance, with the main funder and commercial marketer being Google. It is developed by Google for tablets and smartphones. Android OS was first developed by Android Incorporated, located in Silicon Valley, before it was acquired by Google in 2005. The versions of Android are as follows:

1) Android versions 1.0 to 1.1: The early days
Android made this version official in 2008, with Android 1.0. This version included a group of Google apps, like Gmail, Maps, Calendar and YouTube.

2) Android 1.5 Cupcake
The first official public codename for Android didn't appear until version 1.5 Cupcake, which was released in April 2009. It brought a few new features and enhancements compared to the first two versions, including the ability to upload videos to YouTube and a way for a phone's screen display to automatically rotate to the right position.

3) Android 1.6 Donut
Google released Android 1.6 Donut in September 2009. It included support for carriers that used CDMA-based networks, allowing phones to be sold by all carriers around the world. Other additions were the Quick Search Box, quick toggling between the Camera, Camcorder and Gallery to streamline the media-capture experience, and even a Power Control
widget for Wi-Fi, Bluetooth, GPS, etc.

4) Android 2.0-2.1 Éclair
Google launched Android 2.0, named Éclair, in October 2009. It was the first version of Android with a text-to-speech support feature, and it also included multiple account support and navigation with Google Maps. The first smartphone with the Éclair version was the Motorola Droid, which was also the first Android phone sold by the carrier Verizon Wireless.

5) Android 2.2 Froyo
Released in May 2010; the name is short for "frozen yogurt". New features were introduced, including Wi-Fi mobile hotspot functions, push notifications via the Android Cloud to Device Messaging (C2DM) service, and Flash support.

6) Android 2.3 Gingerbread
Launched in December 2010, it remained for a long time the oldest version of the OS that Android devices were still commonly running. The first mobile phone to combine Gingerbread and NFC hardware was the Nexus S, co-developed by Google and Samsung. It also introduced features like the selfie, by adding in support for multiple cameras, and video chat support within Google Talk.

7) Android 3.0 Honeycomb
This version was introduced in February 2011, along with the Motorola Xoom tablet, and was released by Google only for tablets and other mobile devices with larger displays than normal smartphones. Honeycomb offered specific features that could not be handled by the smaller displays found on smartphones at the time.

8) Android 4.0 Ice Cream Sandwich
Launched in October 2011, it was the first to introduce the feature of unlocking the phone using its camera. Other features are support for the on-screen buttons, and the ability to
monitor the mobile and Wi-Fi data usage, and swipe gestures to dismiss notifications and browser tabs.

9) Android 4.1-4.3 Jelly Bean
Google released versions 4.2 and 4.3, both under the Jelly Bean label, in October 2012 and July 2013. Features include smoother software updates, notifications that showed more content or action buttons, along with full support for the Android version of Google's Chrome web browser. Google Now improved search, and "Project Butter" sped up and improved touch responsiveness.

10) Android 4.4 KitKat
Officially launched in September 2013; its development codename was "Key Lime Pie". It helped to expand the overall market and was optimized to run on smartphones that had as little as 512 MB of RAM. This allowed many makers to install the latest version on much cheaper handsets.

11) Android 5.0 Lollipop
Released in November 2014. This included support for the dual-SIM feature, HD voice calls, and Device Protection to keep thieves locked out of your phone even after a factory reset.

12) Android 6.0 Marshmallow
Initially called Macadamia Nut Cookie, it was later released as Marshmallow in October 2015. Features include the app drawer, and it is the first version that had native support for unlocking the smartphone with biometrics; USB Type-C support and Android Pay were also there. Google's Nexus 6P and Nexus 5X were the first handsets to ship with it.

13) Android 7.0 Nougat
Released in August 2016, with multitasking features designed for smartphones with bigger screens. It included a split-screen feature and fast switching between applications. Other changes made by Google include switching to a new JIT compiler that could speed up execution. The Pixel, the Pixel XL, and the LG
V20 were released with this version.

14) Android 8.0 Oreo (August 21, 2017)
The second time Google used a trademarked name for its Android version (the first was KitKat). Android 8.0 Oreo launched in August 2017. It included visual changes such as native support for a picture-in-picture mode, new autofill APIs that help in better managing passwords and form data, and notification channels.

15) Android 9.0 Pie (August 6, 2018)
Released in August 2018, with new features and updates such as better battery life. A gesture-based home button was added in this version: when swiped up, it brings up the apps that were used recently, a search bar, and suggestions of five apps at the bottom of the screen. A new option of swiping left to see the currently running applications was added.

16) Android 10 (September 3, 2019)
Finally, Google opted to drop the tradition of naming the Android version after sweets, desserts and candies. It was launched in September 2019. Several new features were added, such as support for the upcoming foldable smartphones with flexible displays. Android 10 also has a dark mode feature, along with the newly introduced navigation control using gestures, a smart reply feature for all the messaging apps, and a sharing menu that is more effective.

14.2 ANDROID OVERVIEW

Android is an operating system based on the Linux kernel and other open-source software, designed for devices such as smartphones and tablets. Android takes a unified approach to application development for mobile devices, which means developers need only develop for Android, and their applications should be able to run on different devices powered by Android. The source code for Android is available under free and open-source software licenses.
14.2.1 Features:

Android is an operating system that supports great features. A few of them are listed below:

1. Beautiful UI: Android OS provides a beautiful and intuitive user interface.
2. Connectivity: Supports a large group of networks like GSM/EDGE, CDMA, UMTS, Bluetooth, WiFi, LTE and WiMAX.
3. Storage: Uses SQLite, a lightweight relational database, for data storage. It is really helpful when the limited memory storage of a mobile device has to be considered.
4. Media support: Includes support for a large number of media formats, audio as well as video, like H.263, H.264, MPEG-4 SP, AMR, AMR-WB, AAC, MP3, JPEG, PNG, GIF and BMP.
5. Messaging: Both SMS and MMS are supported.
6. Web Browser: Based on the open-source WebKit engine, now known as Chrome.
7. Multi-Touch: Supports multi-touch screens.
8. Multi-Task: Supports application multitasking, i.e., the user can jump from one task to another, and various applications can run simultaneously.
9. Resizable widgets: Widgets are resizable, so users can resize them to show more content or to save space.
10. Multi-Language: Supports single-direction and bi-directional text.
11. Hardware Support: Accelerometer sensor, camera, digital compass, proximity sensor, GPS and a lot more.

14.2.2 Android Architecture:

Android is a stack of software components, which is divided into five layers as shown in the diagram below:
All these layers are responsible for different roles and features, which are discussed below.

Linux Kernel:
This layer provides a level of abstraction between the hardware and the upper layers, and it contains all the essential hardware drivers, such as those for the camera, keypad, and display. This layer is the foundation of the Android platform.

Hardware Abstraction Layer:
It provides an abstraction between the hardware and the rest of the software stack.

Libraries:
Native libraries are present above the Linux kernel, including the well-known open-source web browser engine WebKit; the SQLite database, which is useful for storage and sharing of application data; libraries to play and record audio and video; SSL libraries, which are responsible for Internet security; etc.

Android Runtime:
This layer provides a key component called the Dalvik Virtual Machine, which is a kind of Java Virtual Machine specially designed and optimized for Android, built to run apps in a constrained environment that has limited muscle power in terms of battery, processing, and memory. It also contains a set of core libraries that enable developers to write code for Android apps using the Java programming language.

Application Framework:
It provides higher-level services to applications in the form of Java classes. Developers are allowed to make use of these services in their applications. Key services the Android framework includes are as follows:
1) Activity Manager: Controls all aspects of the application lifecycle and the activity stack.
2) Content Providers: Allow applications to publish and share their data with other applications.
3) Resource Manager: Provides access to non-code embedded resources such as strings, color settings, and user interface layouts.
4) Notifications Manager: Allows applications to display notifications to the user.
5) View System: An extensible set of views used to create application user interfaces.

Applications:
At the top layer you will find all the Android applications. This layer uses all the layers below it for the proper functioning of the mobile app; such applications are Contacts, Calendar, Browser, Games, and many more.

So Android holds a layered design, or we can say a group of functionalities organized as a software stack, that makes Android work very fluently on any device.

14.3 ANDROID PROGRAMMING

If we want to develop Android apps, it is essential to pick a language. Differentiating between the various Android programming languages can be a little complex; choosing which one to start with requires an understanding of their strengths and weaknesses.

The best way to start developing an Android app is to download Android Studio, an Integrated Development Environment (IDE). It is offered as a package with the Android SDK, which is a set of tools used to facilitate Android development. It gives you everything you need in one place to get up and running, and features such as the visual designer make the process easier. Powerful features keep being added to give developers access to things like cloud storage.

While Java is the official language for Android, there are many other languages that can be used for Android app development. The programming languages currently used for Android development are listed below:

1. Java:
• Java is the official language for Android app development, and it is the most used language as well. Most apps in the Play Store are built with Java, and it is also the language most supported by Google. Java has a great online community for support in case of any problems.
• Java was developed by Sun Microsystems in 1995, and it is used for a wide range of programming applications. Java code is run by a virtual machine that runs on Android devices and interprets the code.
• However, Java is a complicated language for a beginner to use, as it contains complex topics like constructors, null pointer exceptions, concurrency, checked exceptions, etc. The Android Software Development Kit (SDK) increases the complexity to a greater extent.
• Development using Java also requires a basic understanding of concepts like Gradle, the Android Manifest, and the markup language XML.

2. Kotlin:
• Kotlin is a cross-platform programming language that is used as an alternative to Java for Android app development. It was introduced as a second "official" Android language in 2017.
• It can inter-operate with Java, and it runs on the Java Virtual Machine.
• A sizable difference is that Kotlin removes some pain points of Java, such as most null pointer exceptions (through its null safety), and it removes the necessity of ending every line with a semicolon.
• In short, it is much simpler for beginners to try as compared to Java, and it can also be used as an "entry point" for Android app development.

3. C++:
• C++ can be used for Android app development with the Android Native Development Kit (NDK). An entire app cannot be created using only C++; rather, the NDK is used to implement parts of the app in native C++ code. This helps in using C++ code libraries for the app as required.
• While C++ is useful for Android app development in some cases, it is much more difficult to set up and is much less flexible. For applications like 3D games, it can squeeze some extra performance out of an Android device, and it means that you will be able to use libraries written in C or C++.
• However, it may also lead to more bugs because of the increased complexity. So it is often better to use Java rather than C++, as C++ does not provide enough gain to offset the effort required.

4. C#:
• C# is a little similar to Java, and so it is well suited for Android app development. Like Java, C# implements garbage collection, so there are fewer chances of memory leaks. C# also has a cleaner and simpler syntax than Java, which makes coding with it comparatively easier.
• Earlier, the biggest drawback of C# was that it could run only on Windows systems, as it used the .NET Framework. However, this problem was solved by Xamarin.Android, a cross-platform implementation of the Common Language Infrastructure. Now, the Xamarin.Android tools can be used to write native Android apps and share the code across multiple platforms.
5. Python:
• Python can be used for Android app development, even though Android doesn't support native Python development. This is done using various tools that convert Python apps into Android Packages (APKs) that can run on Android devices.
• An example of this is Kivy, an open-source Python library used for developing mobile apps. It supports Android and also provides rapid app development. However, a downside is that there won't be native benefits, since Kivy isn't natively supported.

6. Corona:
• Corona is a software development kit that is used for developing Android apps using Lua. It has two operational modes, i.e. Corona Simulator and Corona Native. The Corona Simulator is used to build apps directly, whereas Corona Native is used to integrate Lua code with an Android Studio project to build an app using native features.
• While Lua is a little limited as compared to Java, it is also much simpler and easier to learn. It is mostly used to create graphics applications and games, but is by no means limited to that.
• We need to use a text editor like Notepad++ to enter the code, and we can run that code in an emulator without even needing to compile first. When we are ready to create an APK and deploy, we can do this using an online tool.

7. Unity:
• Unity is a "game engine," which means it provides things like physics calculations and 3D graphics rendering, together with an IDE, like Android Studio.
• It is free for smaller developers, it is incredibly easy to create your own games with it, and the community is strong, which means you get a lot of support. With just a few lines of code, you can have a basic platform game set up in less than an hour. It is multiplatform and is used by many game studios.
• It is a great way to learn object-oriented programming concepts, as game entities are represented as objects.
• It is a common route into game development.
• For a complete beginner, it is not the easiest entry point to Android development, but for a small company wanting to create an app for iOS and Android it makes sense, and there is plenty of support and information out there to help you out.
8. PhoneGap:
• PhoneGap is the last simple option you can choose for developing Android apps.
• PhoneGap is powered by Apache Cordova, and it allows you to create apps using the same code you would normally use to create a website: HTML, CSS, and JavaScript. The result is then shown through a "WebView" but packaged like an app. PhoneGap acts like a mediator, allowing developers to access basic features of the phone, such as the camera.
• This is not real native Android development, though, and the only real programming will be in JavaScript.

Conclusion:
• There are a lot of apps, such as chat messengers, music players, games, calculators, etc., that can be created using the above languages.
• No single language is the "correct" one for Android development.
• So it is up to you to make the right choice of language, based on your objectives and preferences for each project.

Databases that can be used with Android:

1. SQLite:
• SQLite is a relational database, a lite version of SQL designed for mobile. It is an in-process library that implements a self-contained, zero-configuration, transactional SQL database engine. It is an embedded SQL database engine without any separate server process, unlike other SQL databases.
• SQLite supports all the standard relational database features.
• It is an open-source, compact library that is present by default in the two main mobile operating systems, i.e. Android and iOS.
• An SQLite database can be stored on disk as well as in memory. Each database is a single disk file, and it can be used cross-platform. It requires very little memory to operate and is very fast.

2. Firebase:
• With Firebase, we can focus our time and attention on developing the best possible applications for our business. The operations and internal functions are very solid and are taken care of by the Firebase interface, so we can spend more time developing high-quality apps that users want to use.

The following features can be developed with it:
• Cloud Messaging: Firebase allows us to deliver and receive messages in a more reliable way across platforms.
• Authentication: Firebase provides sign-in and authentication with very little friction.
• Hosting: Firebase delivers web content faster.
• Remote Configuration: It allows us to customize our app on the go.
• Dynamic Links: Dynamic Links are smart URLs that dynamically change behavior to provide the best experience across different platforms. These links allow app users to be taken directly to the content of their interest after installing the app, no matter whether they are completely new or lifetime customers.
• Crash Reporting: It keeps our app stable.
• Real-time Database: It can store and sync app data in real time.
• Storage: We can easily store files in the database.

3. Realm DB:
Realm is a relational database management system which is like a conventional database in that data can be queried, filtered, and persisted, but it also has objects which are live and fully reactive.

The Realm database is developed by Realm and specially designed to run on mobile devices. Like SQLite, Realm is also serverless and cross-platform. It can be stored on disk as well as in memory.

Realm has many advantages over native SQLite, like:
• As we work with real objects, there is no need to copy, modify, and save objects from the database.
• Realm is much faster than SQLite: Realm can query up to 57 records/sec, whereas SQLite can do only up to 20 records/sec.
• Data is secured with transparent encryption and decryption.
• The Realm database has a reactive architecture, which means it can be directly connected to the UI; if data changes, it will automatically refresh and appear on the screen.
• One application can have multiple Realm databases, both local and remote, and can set different permissions for different users.

4. ORMLite:
• ORMLite is a lighter version of Object Relational Mapping which provides some simple functionality for persisting Java objects to SQL databases. It is an ORM wrapper over any mobile SQL-related database.
• It is used to simplify complicated SQL operations by providing a flexible query builder. It also provides powerful abstract Database Access Object (DAO) classes.
• It is helpful in large applications with complex queries because it handles "compiled" SQL statements for repetitive query tasks. It also supports configuring tables and fields without annotations, and supports native calls to the Android SQLite database APIs.
• It does not fulfill all requirements: it is bulky compared to SQLite or Realm and slower than both, but faster than most of the other ORMs present in the market.

14.4 PROCESS

14.4.1 Introduction:
A process is a "program in execution." A process is generally used to accomplish a task, and to do so it needs resources: for instance CPU time, files, memory, etc. Resources are allocated to processes in two stages:
• at the stage when the process is created
• dynamically, while the process is running

A process is more than the program code; it also includes the current activity, the contents of the processor's registers, etc. A program can be called a passive entity, whereas a process is an active entity. A process also contains a program counter, which is responsible for specifying the next instruction to be executed.

E.g. a word processor can be thought of as a process. The passive entity is the file containing a set of instructions saved on disk, also known as an executable file, whereas the process is an active entity, backed by a program counter that specifies the next instruction to execute, along with a set of associated resources. In other words, a program becomes a process when it is loaded into memory.

14.4.2 Process in the application:
All components of the same application run in the same process, and most applications do not change this. However, if a developer finds that they need to control which process a certain component belongs to, they can do so in the manifest file. The manifest entry for each type of component element, namely:
1. <activity>
2. <service>
3. <receiver>
4. <provider>
5. <application> (used to set a default value that applies to all components)
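Such a manifest might be sketched as follows. This is a hypothetical fragment invented for illustration (the package and component names are ours), but the android:process attribute itself is the real mechanism described above; a leading ":" makes the process private to the app, while a fully qualified lowercase name can be shared:

```xml
<!-- Hypothetical AndroidManifest.xml fragment illustrating android:process -->
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
          package="com.example.demo">
    <!-- Default process name for every component in this app -->
    <application android:process="com.example.demo.main">
        <!-- Runs in the application's default process -->
        <activity android:name=".MainActivity" />
        <!-- A leading ":" creates a private process owned by this app -->
        <service android:name=".SyncService"
                 android:process=":sync" />
        <!-- A fully qualified name can create a global (shareable) process -->
        <provider android:name=".DemoProvider"
                  android:authorities="com.example.demo.provider"
                  android:process="com.example.demo.shared" />
    </application>
</manifest>
```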
Each of these supports an android:process attribute that can specify a process in which that component should run. Developers can set this attribute so that each component runs in its own process, or so that some components share a process while others do not. Developers can also set android:process so that components of different applications run in the same process, provided that the applications share the same Linux user ID and are signed with the same certificates.

Sometimes, Android might decide to shut down a process when memory is low and required by other processes that are more immediately serving the user. Application components running in the killed process are consequently destroyed, and a process is started again for those components when there is again work for them to do. While determining which processes to kill, the Android system weighs their relative importance to the user. For instance, it will more readily shut down a process hosting activities that are no longer visible on screen than a process hosting visible activities. The decision whether or not to terminate a process, consequently, depends on the state of the components running in that process.

14.4.3 Structure of Process:
• Stack: contains temporary data such as function parameters and return addresses, as well as local variables.
• Heap: memory that is dynamically allocated during process run time.
• Data: includes global variables.
• Text: includes the current activity, represented by the value of the program counter and the contents of the processor's registers.

14.4.4 States of process:
The major transition states of a process are as follows:
1. New: The process is being created.
2. Running: Instructions are being executed.
3. Ready: The process is ready to be executed and is waiting to be assigned to the processor.
4. Waiting: The process is waiting for some event to occur (such as an I/O completion).
5. Terminated: The process has finished execution.

The transition states represented above are found on all systems, but certain operating systems also delineate process states more finely, and the names vary across operating systems. There can be more than one process in the ready or waiting state, but at any instant of time only one process can be in the running state on any one processor.
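The five states and the legal transitions between them can be sketched as a small transition table in Java. This is a generic teaching model of the classic process lifecycle, not an Android or Java API; the class and method names are our own:

```java
import java.util.EnumMap;
import java.util.EnumSet;
import java.util.Map;
import java.util.Set;

// A toy model of the classic five-state process lifecycle.
class ProcessState {
    enum State { NEW, READY, RUNNING, WAITING, TERMINATED }

    // Legal transitions: admit, dispatch, preempt, wait for I/O, I/O done, exit.
    static final Map<State, Set<State>> LEGAL = new EnumMap<>(State.class);
    static {
        LEGAL.put(State.NEW,        EnumSet.of(State.READY));
        LEGAL.put(State.READY,      EnumSet.of(State.RUNNING));
        LEGAL.put(State.RUNNING,    EnumSet.of(State.READY, State.WAITING, State.TERMINATED));
        LEGAL.put(State.WAITING,    EnumSet.of(State.READY));
        LEGAL.put(State.TERMINATED, EnumSet.noneOf(State.class));
    }

    static boolean canMove(State from, State to) {
        return LEGAL.get(from).contains(to);
    }

    public static void main(String[] args) {
        System.out.println(canMove(State.NEW, State.READY));       // true: process admitted
        System.out.println(canMove(State.WAITING, State.RUNNING)); // false: must go via READY
    }
}
```

Note that a waiting process cannot be dispatched directly: when its event occurs it first moves back to the ready queue, which is exactly what the table encodes.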
14.4.5 Process Lifecycle:
The Android system tries to maintain a process for as long as possible, but eventually needs to remove old processes to reclaim memory for more important processes. To decide which processes to keep and which to kill, the system puts each process into an "importance hierarchy" based on the components running in the process and the state of those components. Processes with the lowest importance are eliminated first, then those with the next lowest importance, and so on, as necessary to recover system resources.

There are a total of five levels in the hierarchy. The following list shows the different types of processes in order of importance:

1. Foreground process:
A process that is required for what the user is currently doing. A process is considered to be in the foreground if any of the following conditions are true:
• It hosts an Activity that the user is interacting with (its onResume() method has been called).
• It hosts a Service that is bound to the activity the user is interacting with.
• It hosts a Service that is running "in the foreground", i.e. the service has called startForeground().
• It hosts a Service that is executing one of its lifecycle callbacks (onCreate(), onStart(), or onDestroy()).
• It hosts a BroadcastReceiver that is executing its onReceive() method.

Generally, only a few foreground processes exist at any given time. They are killed only as a last resort, if memory is so low that they cannot all continue to run. Generally, at that point, the device has reached a memory paging state, so killing some foreground processes is required to keep the user interface responsive.

2. Visible process:
A process that does not have any foreground components, but can still affect what the user sees on the screen. A process is considered to be visible if either of the following conditions is true: it hosts an Activity that is not in the foreground but is still visible to the user (its onPause() method has been called); this could occur, for example, if the foreground activity starts a dialog, which allows the previous activity to be seen behind it. Or it hosts a Service that is bound to a visible (or foreground) activity. A visible process is considered extremely important and will not be killed unless doing so is required to keep all foreground processes running.

3. Service process:
A process that is running a service that has been started with the startService() method and does not fall into either of the two higher categories. Although service processes are not directly tied to anything the user sees, they are generally doing things the user cares about (like playing music in the background or downloading data over the network), so the system keeps them running unless there is not enough memory to retain them along with all foreground and visible processes.

4. Background process:
A process holding an activity that is not currently visible to the user (the activity's onStop() method has been called). These processes have no direct impact on the user experience, and the system can kill them at any time to reclaim memory for a foreground, visible, or service process. Usually there are many background processes running, so they are kept in a least-recently-used (LRU) list to ensure that the process with the activity most recently seen by the user is the last to be killed. If an activity implements its lifecycle methods correctly and saves its current state, killing its process will have no visible effect on the user experience, because when the user navigates back to the activity, the activity restores all of its visible state.

5. Empty process:
A process that does not hold any active application components. The only reason to keep such a process alive is for caching purposes, to improve startup time the next time a component needs to run in it. The system often kills these processes to balance overall system resources between process caches and the underlying kernel caches.

Android ranks a process at the highest level it can, based on the importance of the components currently active in the process. For example,
if the process hosts a service and a visible activity, the process is ranked as a visible process, not a service process.

In addition, a process's ranking might be increased because other processes are dependent on it: a process that is serving another process can never be ranked lower than the process it is serving. For example, if a content provider in process A is serving a client in process B, or if a service in process A is bound to a component in process B, process A is considered at least as important as process B.

Because a process running a service is ranked higher than a process with background activities, an activity that initiates a long-running operation might do well to start a service for that operation, rather than simply creating a worker thread, particularly if the operation is likely to outlast the activity. For example, an activity that is uploading a picture to a web site should start a service to perform the upload, so that the upload will continue in the background even if the user leaves the activity. Using a service guarantees that the operation will have at least "service process" priority, regardless of what happens to the activity. This is the same reason why broadcast receivers should employ services rather than simply putting time-consuming operations in a thread.

14.4.6 Inter process Communication:
Android hosts a variety of applications and is designed in a way that removes any duplication or redundancy of functionality across different applications, while allowing functionality to be discovered, etc.

There are two major techniques related to interprocess communication, namely:

Intents: These enable the application to select an Activity based on the action you want to invoke and the data on which it operates. No path to an application is needed to use its functions and exchange data with it. With intent objects, data can be passed in both directions. Intents enable high-level communication.

Remote methods: By this we mean remote procedure calls, with which APIs can be accessed remotely. With these calls, methods appear to be local but are executed in another process.

In general, an Android app avoids classic interprocess communication. It provides functions in terms of packages loaded by the applications that require them. To exchange data, applications need to use the file system or other traditional Unix/Linux mechanisms.

14.5 ANDROID MEMORY MANAGEMENT

14.5.1 Introduction:
Android's memory management does not provide swap space; it uses paging and memory-mapping, which means that memory your application touches cannot be paged out until you release all references to it. In Android, the Dalvik Virtual Machine heap for an application process starts small (around 2 MB), and the maximum allocation is limited (about 36 MB on some devices). Examples of large applications are the photo editor, camera, gallery, and home screen.

The background application processes in Android are stored in an LRU cache. According to the cache strategy, when the system runs low on memory it will kill processes, starting with the least recently used, and it will also consider which application is the largest memory consumer.

If one wants to make an app run and live longer in the background, one should deallocate unnecessary memory before moving into the background; otherwise the
background system will generate an error message or terminate the application.

14.5.2 Garbage Collection:
The Dalvik virtual machine keeps track of memory allocation. Once it determines that a piece of memory is no longer used by the program, it frees that memory back to the heap, without any intervention from the programmer. Garbage collection has two basic goals: first, to find objects in a program that cannot be accessed in the future, and second, to reclaim the resources used by those objects.

The duration of a garbage collection depends on which generation of objects it is collecting and how many active objects there are in each generation.

The memory heap of Android is a generational one, meaning that there are different buckets of allocations that it tracks, based on the expected life and size of the object being allocated. For example, recently allocated objects belong to the Young generation.

Each heap generation has its own dedicated upper limit on the quantity of memory that objects there can occupy. Any time a generation starts to fill up, the system executes a garbage collection event to release memory.

Even though garbage collection can be quite fast, it can still affect your app's performance. You don't generally control when a garbage collection event occurs from within your code. When its criteria are satisfied, the system stops executing the process and begins garbage collection. If garbage collection occurs in the middle of an intensive processing loop, like an animation or during music playback, it can increase processing time in a way the user notices.

14.5.3 How to improve memory usage:
1. Take care with design patterns built on abstraction. Abstraction can help to build a more flexible software architecture, but in the mobile world abstraction involves side effects: extra code to be executed, which costs more time and memory. Avoid it unless the abstraction provides the application a significant benefit.
2. Avoid using "enum". Do not use an enum where an ordinary static constant will do, because an enum can more than double the memory allocation compared with an ordinary static constant.
3. Instead of HashMap, try to use the optimized SparseArray, SparseBooleanArray, and LongSparseArray containers. A HashMap allocates an Entry object for every mapping, which is memory-inefficient; in addition, the low-performance "autoboxing/unboxing" of keys is spread all over its usage. Sparse-array-like containers instead map keys into plain primitive arrays.
4. Avoid creating unnecessary objects. Do not allocate memory, especially for short-term temporary objects, if you can avoid it; garbage collection will occur less often when fewer objects are created.

14.5.4 How to avoid memory leaks:
1. After querying a database, always close the cursor. If you want to keep a cursor open for a long time, use it carefully and close it as soon as the database task is finished.
2. Always call unregisterReceiver() after calling registerReceiver().
3. If you declare a static member variable Drawable in an Activity and then call view.setBackground(drawable) in onCreate(), then each time a new Activity instance is created, the old instance can never be deallocated, because the drawable has the view set as a callback and the view holds a reference to the activity.
4. To avoid this kind of leakage, do not keep long-lived references to an Activity context; use the Application context instead of the Activity context where possible.
5. Threads in Java are garbage-collection roots; that is, the DVM keeps hard references to all active threads in the runtime system, so threads that are left running will never be eligible for garbage collection.

14.5.5 Allocate and reclaim app memory:
• The Dalvik Debug Monitor Server (DDMS) is a debugging tool included in Android Studio.
• DDMS connects the IDE to the applications running on the device.
• In Android, every application runs in its own process, each of which hosts its own virtual machine (VM), and each process listens for a debugger on a different port.
• When it begins, DDMS connects to ADB (the Android Debug Bridge, a command-line utility included with Google's Android SDK).
• ADB notifies DDMS when a device is connected or disconnected. When a device is connected, a VM monitoring service is created between ADB and DDMS, which tells DDMS when a virtual machine on the device is started or terminated.
• The Dalvik heap is constrained to a single virtual memory range for every app process. This defines the logical heap size, which can grow as it needs to, up to a limit that the system defines for each app.
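The sparse-array containers recommended in section 14.5.3 work roughly like the following stripped-down sketch. This is our own illustration, not the source of Android's android.util.SparseArray: keys live in a sorted primitive int array searched by binary search, so no boxed Integer key or per-entry node object is allocated the way HashMap<Integer, V> would:

```java
import java.util.Arrays;

// Minimal sparse array: sorted primitive int keys + parallel values array.
// Unlike HashMap<Integer, V>, no Entry object or boxed key per mapping.
class SimpleSparseArray<V> {
    private int[] keys = new int[4];
    private Object[] values = new Object[4];
    private int size = 0;

    public void put(int key, V value) {
        int i = Arrays.binarySearch(keys, 0, size, key);
        if (i >= 0) { values[i] = value; return; }   // overwrite existing key
        i = ~i;                                      // insertion point keeps keys sorted
        if (size == keys.length) {                   // grow both arrays together
            keys = Arrays.copyOf(keys, size * 2);
            values = Arrays.copyOf(values, size * 2);
        }
        System.arraycopy(keys, i, keys, i + 1, size - i);
        System.arraycopy(values, i, values, i + 1, size - i);
        keys[i] = key;
        values[i] = value;
        size++;
    }

    @SuppressWarnings("unchecked")
    public V get(int key) {
        int i = Arrays.binarySearch(keys, 0, size, key);
        return i >= 0 ? (V) values[i] : null;        // null when the key is absent
    }

    public int size() { return size; }

    public static void main(String[] args) {
        SimpleSparseArray<String> a = new SimpleSparseArray<>();
        a.put(42, "answer");
        System.out.println(a.get(42)); // prints "answer"
    }
}
```

The trade-off is the same one Android documents for its own sparse containers: lookups are O(log n) instead of O(1), which is fine for the hundreds of entries typical in app code and far cheaper in memory.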
188•The logical size of the heap is not like the amount of physical memoryused by the heap.•When we are inspecting our app's heap, a value called the ProportionalSet Size (PSS) is computed by Android, that accounts for both dirtyand clean pages which are shared with other processes—but only in anamount that's proportional to how many apps shared by that RAM.•This (PSS) total is what the system considers to be the physicalmemory footprint. For more information regarding PSS, see theInvestigating Your RAM Usage guide.•The Dalvik heap enables a compact of the logical size of the heap,meaning Android does not defragment the heap to the close-up space.•Android can only shrink by the logical heap size when there is unusedspace at the end of theheap. Therefore, the system can still reduce thephysical memory used by the heap.After the garbage collection process, Dalvik walks the heap and finds theunused pages, then returns these pages to the kernel using the advice.14.6 FILE SYSTEMThe Android Operating System is a popular and universally usedoperating system for smartphones lately. While on the user's end it mightappear simple and easy to use the Android File Systems Applications tendto be rather complicated and have several users scratching their head inamusement in daily. Let us now take a detailed look at the file systems andwhat they have to offer to the users as Android.This informative piece is for people who are thinking to developROMs, Apps, and a lot of other things on the Android operating system.Without wasting a minute more let us begin with a detailed look at theAndroid file system. We would not just be naming the file systems inandroid we would also give you a brief explanation about a particular filesystem in detail understanding.14.6.1 Flash Memory-Android OS File System:1.exFAT:Created by Microsoft for flash memory, the exFAT filesystem is not a part of the standard Linux kernel. However, it stillprovides support for Android devices in some cases. 
It stands for Extended File Allocation Table.
2.F2FS:Users of Samsung smartphones are bound to have come across this type of file system if they have been using their smartphones for a while. F2FS stands for Flash-Friendly File System, an open-source Linux file system introduced by Samsung in 2012.
3.JFFS2:It stands for the Journal Flash File System version 2. This is the default flash file system for the Android Open Source Project kernels; it has been around since the Android Ice Cream Sandwich release.
14.6.2 Media-based Android File Systems:
1.EXT2/3/4:Ext, which stands for extended file system, is the standard for Linux file systems. The latest of these is EXT4, which has been replacing the YAFFS2 and JFFS2 file systems on Android smartphones.
2.MS-DOS:Microsoft Disk Operating System is one of the oldest names in the world of operating systems, and its support allows the FAT12, FAT16, and FAT32 file systems to run seamlessly.
3.vFAT:An extension to the aforementioned FAT12, FAT16, and FAT32 file systems, vFAT is a kernel module seen alongside the msdos module. External SD cards that expand the storage space are formatted using vFAT.
14.6.3 Pseudo File Systems:
1.CGroup:Cgroup stands for Control Group. It is a pseudo file system which allows access to various kernel parameters. Cgroups are very important for the Android file system, as the Android OS makes use of these control groups for user accounting and CPU control.
2.Rootfs:Rootfs acts as the mount point, and it is a minimal file system. It is located at the mount point "/".
3.Proc:The proc file system has files that expose live kernel data; it also reflects several kernel data structures. Its numbered directories correspond to the process IDs of all currently running tasks.
4.Sysfs:Usually mounted on the /sys directory, the sysfs file system helps the kernel identify devices. Upon identifying a new device, the kernel builds an object for it.
5.Tmpfs:A temporary file system, tmpfs is usually mounted on the /dev directory.
Data on tmpfs is lost when the device is rebooted.
14.6.4 Android Application File Structure:
It is very important to know the basics of the Android Studio application file structure. Some important files and folders, and their significance, are explained below for an easy understanding of the Android Studio work environment. Several important files are marked in the image below:
14.6.5 Files in Android Studio, explained below:
1.AndroidManifest.xml:Every project in Android includes a manifest file, AndroidManifest.xml, stored in the root directory of its project hierarchy. The Android manifest file is an important part of our app because it defines the structure and metadata of our application, its components, and its requirements. The file includes nodes for each of the Activities, Services, Content Providers, and Broadcast Receivers that make up the application and, using Intent Filters and Permissions, determines how they coordinate with each other and with other Android applications.
2.Java:The Java folder contains the Java source code files of the application. These files act as controllers for the layout files: a controller gets data from the layout file and, after processing that data, shows the output in the UI layout. It works on the backend of an Android application.
3.Drawable:The drawable folder contains resource files of type drawable (something that can be drawn). Drawables may take a variety of forms, such as Bitmap, Nine-Patch, Vector (XML), Shape, Layers, States, Levels, and Scale.
4.Layout:A layout defines the visual structure for a user interface, such as the UI of an Android activity. The layout folder stores layout files written in XML; we can add layout objects or widgets as child elements to gradually build a view hierarchy that defines the layout.
5.Mipmap:The mipmap folder contains the image asset files used in the Android Studio application, such as launcher icons, action bar and tab icons, and notification icons.
6.colors.xml:The colors.xml file contains the color resources of the Android application. Each color value is identified by a unique name that can be used in the application.
7.strings.xml:The strings.xml file contains the string resources of the Android application. Each string value is identified by a unique name that can be used in the application program; the file can also store string arrays, using XML.
8.styles.xml:The styles.xml file contains the theme-style resources of the Android application. It is written in XML and applies to all activities in general.
9.build.gradle:This defines and implements the module-specific build configurations. We add the dependencies needed by the Android application here, in the module's Gradle file.
A FileSystem provides an interface to a file system and is the factory for the objects used to access files and other objects in the file system.
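Returning to the AndroidManifest.xml described in item 1 above, a minimal manifest might look like the following sketch; the package, label resource, and activity names are placeholders, not taken from any real project:

```xml
<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.example.demo">  <!-- placeholder package name -->
    <uses-permission android:name="android.permission.INTERNET" />
    <application android:label="@string/app_name">
        <!-- one <activity> node per Activity; the intent filter marks the launcher entry point -->
        <activity android:name=".MainActivity">
            <intent-filter>
                <action android:name="android.intent.action.MAIN" />
                <category android:name="android.intent.category.LAUNCHER" />
            </intent-filter>
        </activity>
    </application>
</manifest>
```

Services, content providers, and broadcast receivers would be declared as sibling `<service>`, `<provider>`, and `<receiver>` nodes inside `<application>` in the same way.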
The default file system, obtained by invoking the FileSystems.getDefault method, provides access to the file system accessible to the Java virtual machine; the FileSystems class also defines methods to create file systems that provide access to other types of file systems.
A file system is the factory for several types of objects:
•getPath method:It converts a system-dependent path string, returning a Path object that may be used to locate and access a file.
•getPathMatcher method:It is used to create a PathMatcher that performs match operations on paths.
•getFileStores method:It returns an iterator over the underlying FileStores.
•getUserPrincipalLookupService method:It returns a UserPrincipalLookupService to look up users and groups by name.
•newWatchService method:It creates a WatchService that may be used to watch objects for changes and events.
File systems vary. In some cases, the file system is a single hierarchy of files with one top-level root directory; in other cases, it may have several distinct file hierarchies, each with its own top-level root directory. The getRootDirectories method may be used to iterate over the root directories in the file system. A file system is typically composed of one or more underlying file stores that provide the storage for the files. File stores can also vary in the features they support.
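These factory methods can be exercised on a plain JVM; the path components and glob pattern below are illustrative, not from any particular app:

```java
import java.nio.file.FileStore;
import java.nio.file.FileSystem;
import java.nio.file.FileSystems;
import java.nio.file.Path;
import java.nio.file.PathMatcher;

public class Main {
    // True when the given path names a .db file, using getPath and a
    // glob PathMatcher from the default provider.
    static boolean isDatabasePath(String first, String... more) {
        FileSystem fs = FileSystems.getDefault();
        Path p = fs.getPath(first, more);            // system-dependent path string -> Path
        PathMatcher m = fs.getPathMatcher("glob:**.db");
        return m.matches(p);
    }

    public static void main(String[] args) {
        FileSystem fs = FileSystems.getDefault();
        System.out.println(isDatabasePath("data", "app.db"));  // true
        for (Path root : fs.getRootDirectories())    // e.g. "/" on Linux, drive letters on Windows
            System.out.println("root: " + root);
        for (FileStore store : fs.getFileStores())   // the underlying storage for the files
            System.out.println("store: " + store.name());
        System.out.println("read-only: " + fs.isReadOnly());
    }
}
```

Note that the matcher here uses `**`, which crosses directory boundaries, so nested database files match as well.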
File stores also vary in the file attributes and metadata that they associate with files.
A file system is open upon creation and can be closed by invoking its close method. Once closed, any further attempt to access objects in the file system causes a ClosedFileSystemException to be thrown. File systems created by the default FileSystemProvider cannot be closed.
A FileSystem can provide read-only or read-write access to the file system. Whether a file system provides read-only access is established when it is created and can be tested by invoking its isReadOnly method. Attempts to write to file stores through an object associated with a read-only file system throw a ReadOnlyFileSystemException.
Android provides many kinds of storage for applications to store their data: shared preferences, internal and external storage, SQLite storage, and storage via a network connection.
Internal storage is the storage of private data on the device memory, in the file system. By default, these files are private: they are accessed only by your application and are deleted when the user uninstalls your application.
14.7 SECURITY IN ANDROID
The security features provided in Android are:
14.7.1 Authentication:
Android uses the concept of user-authenticated cryptographic keys, which requires cryptographic key storage facilities, service providers, and user authenticators.
On devices that possess a fingerprint sensor, users can enroll more than one fingerprint to unlock the phone and accomplish different tasks. The Gatekeeper subsystem performs the authentication of patterns and passwords in the Trusted Execution Environment (Trusty). Android 9 and higher versions also include Protected Confirmation, which allows the user to formally confirm critical transactions.
14.7.2 Biometrics:
Android 9 and up include a BiometricPrompt API that allows developers to integrate biometric authentication into their applications.
Only strong biometrics can integrate with BiometricPrompt.
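The encryption and Keystore facilities described in the following subsections build on the standard Java cryptography APIs. As a rough, plain-JVM illustration (this is not the on-device Android Keystore itself, and the cipher choice and sample text are arbitrary), the following sketch encrypts and then decrypts a buffer with AES-GCM:

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;

public class Main {
    // Encrypts and then decrypts plaintext with AES-GCM, returning the
    // round-tripped string. A real Android app would keep the key in the
    // hardware-backed Keystore rather than generating it here.
    static String roundTrip(String plaintext) {
        try {
            KeyGenerator kg = KeyGenerator.getInstance("AES");
            kg.init(128);                              // 128-bit AES key
            SecretKey key = kg.generateKey();

            byte[] iv = new byte[12];                  // 96-bit GCM nonce
            new SecureRandom().nextBytes(iv);

            Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
            c.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
            byte[] ct = c.doFinal(plaintext.getBytes(StandardCharsets.UTF_8));

            c.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(128, iv));
            return new String(c.doFinal(ct), StandardCharsets.UTF_8);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(roundTrip("user data"));  // prints: user data
    }
}
```

GCM also authenticates the ciphertext, so tampering with the encrypted bytes would make decryption fail rather than silently return corrupted data.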
14.7.3 Encryption:
Once the device is encrypted, all user-created data is automatically encrypted before being committed to disk, and all reads automatically decrypt the data before returning it to the calling process. Encryption gives assurance that if an unauthorized user tried to access the data, they would not be able to read its content.
14.7.4 Keystore:
The Keystore system allows you to store cryptographic keys in a container, making it more difficult for an attacker to extract them from the device. Once keys are stored in the Keystore, they can be used for various cryptographic procedures while remaining non-exportable. It also offers features to restrict how and when the keys can be used, such as demanding user authentication for key use and restricting keys to certain cryptographic methods.
14.7.5 Trusty Trusted Execution Environment (TEE):
Trusty has access to the full power of a device's main processor and memory but remains completely isolated. Trusty's isolated position protects it from malicious applications installed by the user and from potential vulnerabilities that may be discovered in Android.
14.7.6 Verified Boot:
Verified Boot cryptographically verifies all executable code and data that is part of the Android version being booted, before it can be used. It ensures that all executable code comes from a trusted source rather than from an attacker. It establishes a full chain of trust, starting from a hardware-protected root of trust to the bootloader, to the boot partition, and to other verified partitions.
14.8 SUMMARY
•Android made its first official release in 2008, with Android 1.0.
•Finally, Google opted to drop the tradition of naming Android versions after sweets, desserts, and candies.
It was launched in September 2019.
•Android OS provides a beautiful and intuitive user interface.
•Android is a stack of software components which is divided into five layers.
•Android ranks a process at the highest level it can, based on the importance of the components that are currently active in the process. For example, if a process hosts a service and a visible activity, the process is ranked as a visible process, not a service process.
•The Dalvik virtual machine keeps track of memory allocation. Once it determines that memory is no longer used by any program, it frees that memory back to the heap without any participation from the programmer.
14.9 LIST OF REFERENCES
1.Modern Operating Systems, Andrew S. Tanenbaum and Herbert Bos, Pearson, 4th Edition, 2014
2.Operating Systems: Internals and Design Principles, William Stallings, Pearson, 8th Edition, 2009
3.Operating System Concepts, Abraham Silberschatz, Peter B. Galvin, and Greg Gagne, Wiley, 8th Edition
4.Operating Systems, Godbole and Kahate, McGraw Hill, 3rd Edition
14.10 BIBLIOGRAPHY
https://www.tutorialspoint.com/
https://www.geeksforgeeks.org/
https://www.javatpoint.com/java-tutorial
https://guru99.com
14.11 UNIT END QUESTIONS
1.Explain the architecture of Android.
2.Explain how a database connects in Android.
3.Explain memory management in Android.
4.Write a short note on process management in Android.
5.Write a short note on security in Linux.
*****
15
WINDOWS CASE STUDY
Unit Structure
15.0Objectives
15.1History of Windows
15.2Programming Windows
15.3System Structure
15.4Process and Threads in Windows
15.5Memory Management in Windows
15.6Windows IO Management
15.7Windows NT File System
15.8Windows Power Management
15.9Security in Windows
15.10Summary
15.11List of References
15.12Bibliography
15.13Unit End Questions
15.0 OBJECTIVES
•To understand the principles of the Windows operating system
•To learn the principles of process and memory management
•To learn the principles of IO management, the file system, and security
15.1 HISTORY OF WINDOWS
1) Windows 1.0 (1985):
Windows 1 was released in November 1985 and was Microsoft's first true effort at a 16-bit graphical user interface. It was notable because it relied heavily on use of a mouse before the mouse was a common computer input device. To help users become familiar with this odd input system, Microsoft included a game, Reversi (visible in the screenshot), that relied on mouse control, not the keyboard,
to get people used to moving the mouse around and clicking onscreen elements.
2) Windows 2.0 (1987):
●Two years after the release of Windows 1, Microsoft's Windows 2 replaced it in December 1987.
●The Control Panel, where numerous system settings and configuration options were collected together in one place, first appeared in Windows 2 and survives to this day.
●Microsoft Word and Excel also made their first appearances running on Windows 2.
3) Windows 3.0–3.1 (1990–1994):
●Windows 3 introduced the ability to run MS-DOS programs in windows, which brought multitasking to legacy programs, and supported 256 colors, bringing a more modern, colorful look to the interface.
●More important, at least to the sum total of human time wasted, it introduced the card-moving timesink (and mouse-use trainer) Solitaire.
4) Windows 3.1:
●Minesweeper also made its first appearance. Windows 3.1 required 1MB of RAM to run and allowed supported MS-DOS programs to be controlled with a mouse for the first time. Windows 3.1 was also the first Windows to be distributed on a CD-ROM, although once installed on a hard drive it only took up 10 to 15MB (a CD can typically store up to 700MB).
5) Windows 95 (1995):
●As the name implies, Windows 95 arrived in August 1995 and with it brought the first-ever Start button and Start menu. It also introduced the idea of "plug and play": connect a peripheral and the operating system finds the suitable drivers for it and makes it work.
●Windows 95 also introduced a 32-bit environment and the taskbar, and focused on multitasking. MS-DOS still played a significant role in Windows 95, which required it to run some programs and elements. Internet Explorer also made its debut on Windows 95, but was not installed by default, requiring the Windows 95 Plus! pack.
6) Windows 98 (1998):
●Released in June 1998, Windows 98 built on Windows 95 and brought with it IE 4, Outlook Express, Windows Address Book, Microsoft Chat, and NetShow Player, which was replaced by Windows Media Player 6.2 in Windows 98 Second Edition in 1999.
●USB support was much improved in Windows 98 and led to its extensive adoption, including USB hubs and USB mice.
7) Windows 2000 (2000):
●The enterprise twin of ME, Windows 2000 was released in February 2000 and was based on Microsoft's business-oriented system Windows NT; it later became the foundation for Windows XP.
●Microsoft's automatic updating played a significant role in Windows 2000, which also became the first Windows to support hibernation.
8) Windows ME (2000):
●Released in September 2000, it was the consumer-aimed operating system paired with Windows 2000, which was aimed at the enterprise market. It introduced some vital concepts to consumers, including more automated system recovery tools.
●IE 5.5, Windows Media Player 7, and Windows Movie Maker all made their appearance for the first time. Autocomplete also appeared in Windows Explorer, but the operating system was notorious for being buggy, failing to install properly, and being generally poor.
9) Windows XP (2001):
●It was built on Windows NT like Windows 2000, but brought the consumer-friendly elements from Windows ME. The Start menu and taskbar got a visual renovation, bringing the familiar green Start button, blue taskbar, and vista wallpaper, along with various shadow and other visual effects.
●Its major problem was security: though it had a firewall built in, it was turned off by default. Windows XP's vast popularity turned out to be a boon for hackers and criminals, who exploited its flaws, especially in Internet Explorer, mercilessly, leading Bill Gates to pledge a "Trustworthy Computing" initiative and the ensuing issuance of Service Pack updates that hardened XP against attack substantially.
10) Windows Vista (2007):
●Windows XP stayed the course for close to six years before being replaced by Windows Vista in January 2007. Vista updated the look and feel of Windows with more emphasis on transparent elements, search, and security. Its development, under the codename "Longhorn", was troubled, with ambitious elements abandoned in order to get it into production.
●It was buggy, and loaded the user with hundreds of requests for app permissions under "User Account Control", the consequence of the Trustworthy Computing initiative, which now meant that users had to approve or disapprove attempts by programs to make various changes.
●It also ran slowly on older computers despite them being labelled as "Vista Ready", a labelling that saw Microsoft sued, since not all versions of Vista could run on PCs with that label.
11) Windows 7 (2009):
●It was faster, more stable, and easier to use, becoming the operating system most users and businesses would upgrade to from Windows XP, forgoing Vista entirely.
●Windows 7 saw Microsoft hit in Europe with antitrust inquiries over the pre-installing of IE, which led to a browser ballot screen being shown to new users,
allowing them to choose which browser to install on first boot.

12) Windows 8 (2012)
•Released in October 2012, Windows 8 was Microsoft's most radical overhaul of the Windows interface, scrapping the Start button and Start menu in favour of a more touch-friendly Start screen.
•The new flat interface replaced the lists of programs and icons with program icons and live tiles, which showed at-a-glance information normally associated with widgets. A desktop was still included, which looked like Windows 7.
•A free point release to Windows 8, presented in October 2013, Windows 8.1 marked a shift towards yearly software updates from Microsoft and comprised the first step in Microsoft's U-turn around its new visual interface.

13) Windows 8.1 (2013)
•Windows 8.1 reintroduced the Start button, which brought up the Start screen from the desktop view of Windows 8.1. Users could also choose to boot straight into the desktop of Windows 8.1, which was more appropriate for those using a desktop computer with a mouse and keyboard than the touch-focused Start screen.

14) Windows 10 (2015)
•Windows 10 is a computer operating system by Microsoft as part of its Windows family of operating systems. It was known as "Threshold" when it was being
developed, and it was announced at a press event on 30 September 2014. Windows 10 is a Microsoft operating system for personal computers, tablets, embedded devices and Internet of Things devices. Microsoft released Windows 10 in July 2015 as a follow-up to Windows 8. The company has said it will update Windows 10 in perpetuity rather than release a new, complete operating system as a successor.

15.2 PROGRAMMING WINDOWS
Microsoft Windows is a multi-tasking operating system that permits many applications, referred to from here on as processes, to run at once. Every process in Windows is granted some quantity of time, known as a time slice, during which the application has the right to control the system without being interrupted by the other processes. The runtime priority and the quantity of time assigned to a process are determined by the scheduler. The scheduler can be seen as the manager of this multi-tasking operating system, making sure that each process is given the time and the priority it requires depending on the current state of the system. Windows is what is known as an event-driven operating system. When a key is pressed, Windows posts an event to the application indicating that the key is down.

15.2.2 How Windows Programs Work:
To make a basic application, you will first require a compiler that runs on a Microsoft Windows operating system. Even though you can use Win32 from many languages, including Pascal (namely Borland Delphi), we will use only one language. In fact the Win32 library is written in C, which is also the key language of the Microsoft Windows operating systems.

Generating a Win32 Program:
All Win32 programs chiefly look the same and act the same but, just like C++ programs, there are slight differences in terms of starting a program, depending on the compiler you are using. 
Here we will be testing our programs on Borland C++ Builder, Microsoft Visual C++, and Microsoft Visual C++ .NET. For a basic Win32 program, the contents of a Win32 program are alike. You will notice a difference only when you begin adding some
objects known as resources. To create a Win32 program by means of Borland C++ Builder, you must make a console application by means of the Console Wizard.

Running Several Programs Simultaneously:
If you work with many programs at once, you know how tedious it is to run and launch them one by one. You need to find them among the other applications you have installed on your Windows system, click their icons and wait for them to open. This operation is fairly tiresome. That's why you need a special shortcut (a batch file) which is able to start all of them in one click. This trick will let you achieve a great, time-saving result: opening numerous applications in a matter of a couple of seconds. Stop scanning your computer folders, stop looking for the right icon; handle it all from the same place.
1. Click Start.
2. Click All Programs.
3. Click Accessories.
4. Click Notepad to open it.
5. Now write the following code: Start "" (be sure to leave a space before and after "") followed by the absolute path of the program you wish to open, in quotes. Example: Start "" "C:\Users\YourName\AppData\Local\Google\Chrome\Application\chrome.exe"
6. Right after that press Enter and write an additional line like the one above so as to open another application.
7. Be sure to write each Start "" command on a new line, so that each line contains one Start "" command only, or else the batch file won't work and you won't be able to open many programs.
8. Now save the file with any name you wish, making sure to save it with the .bat extension (and not as .txt).

3. Code and Resources:
Resources are defined as the data that you can include in the application's executable file. Resources can be:
•standard: icon, cursor, menu, dialog box, bitmap, enhanced metafile, font, accelerator table, message-table entry, string-table entry, or version.
•custom: any kind of data that doesn't fall into the previous category (for instance an mp3 file or a dictionary database).
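Putting the batch-file steps above together, a hypothetical example looks like the following (the Chrome path is the one quoted in step 5; the other two paths are standard Windows programs, given here only for illustration):

```bat
@echo off
REM Each Start "" command goes on its own line. The first quoted
REM string is the window title (left empty); the second is the
REM absolute path of the program to launch.
Start "" "C:\Users\YourName\AppData\Local\Google\Chrome\Application\chrome.exe"
Start "" "C:\Windows\System32\notepad.exe"
Start "" "C:\Windows\System32\calc.exe"
```

Saved as, say, launch.bat, a double-click on this file opens all three programs at once.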


Accessing Resources from Code:
The keys that identify resources when they are defined in XAML are also used to retrieve specific resources if you request the resource in code. The simplest manner to retrieve a resource from code is to call either the FindResource or the TryFindResource method from framework-level objects in your application.

Creating Resources with Code:
If you want to produce a whole WPF application in code, you might also wish to create any resources in that application in code. To achieve this, create a new ResourceDictionary instance, and then add all the resources to the dictionary by means of successive calls to ResourceDictionary.Add. Then, use the ResourceDictionary thus produced to set the Resources property on an element that is present in a page scope, or on Application.Resources.

Different Data Types used in Resource Files:
Microsoft Windows applications regularly depend on files that contain non-executable data, such as Extensible Application Markup Language (XAML), images, video, and audio. Windows Presentation Foundation (WPF) offers special support for configuring, identifying, and consuming these types of data files, which are called application data files. This support revolves around a specific set of application data file types, comprising:
•Resource Files: data files that are compiled into either an executable or a library WPF assembly.
•Content Files: standalone data files that have an explicit association with an executable WPF assembly.
•Site of Origin Files: standalone data files that have no association with an executable WPF assembly.

6. Compiling Windows Programs:
Click the Run button and wait a few seconds. This compiles the program to an EXE file and runs it; when the program runs, the dialog appears on the screen.


Steps in creating a DOS Programme:
Source Code (example: .C) → C Compiler → Object File (example: .OBJ) → Linker → Finished Programme (example: .EXE)

The above figure shows a flow diagram for the creation of a Windows program. In this figure the source code file is converted to an object file by the compiler, the same as in DOS. In Windows programs the linker gets a little extra information from a small file called the "module definition file", with the file name extension ".DEF". This file tells the linker how to assemble the program. The linker combines the module definition file information and the object file to make an incomplete .EXE file. The incomplete .EXE file lacks the resource data. The main difference between Windows programs and DOS programs is in the compilation of the resource data file, with the extension ".RES". In DOS programs there is no resource data, but in a Windows program the resource data is added to the incomplete .EXE file to create the complete executable program. The resource data is essentially appended onto the end of the program's code and becomes part of the program file. In addition to adding the resource data, the resource compiler writes the Windows version number into the program file.

15.3 SYSTEM STRUCTURE
•User application programs cooperate with system hardware through the Operating System.
•The operating system is such a complex structure that it should be built with utmost care so it can be used and adapted easily.
•An easy way to do this is to make the operating system in parts. Each of these parts should be well defined, with clear inputs, outputs and functions.
•There are two types of structures in Windows OS:

15.3.1 Simple Structure:
•Many operating systems have a modest structure.
•MS-DOS was written to deliver the most functionality in the smallest space.
•It is not divided into modules.
•Although MS-DOS has some structure, its interfaces and levels of functionality are not well separated.

15.3.2 Layered Structure:
The operating system is divided into a number of layers (levels), each constructed on top of lower layers. The lowest layer (layer 0) is the hardware; the highest (layer N) is the user interface.
Advantage: ease of construction and debugging.
Difficulty: defining the various layers; a layered OS tends to be less efficient than other approaches.

15.4 PROCESS AND THREADS IN WINDOWS
An application consists of one or more processes. A process, in the simplest terms, is an executing program. One or more threads run in the context of the process. A thread is the basic unit to which the operating system allocates processor time.
A thread can execute any part of the process code, including parts currently being executed by another thread. Each process provides the resources needed to execute a program. A process has a virtual address space, executable code, open handles to system objects, a security context, a unique process identifier, environment variables, a priority class, minimum and maximum working set sizes, and at least one thread of execution. Each process is started with a single thread, often called the primary thread, but can create additional threads from any of its threads.
A thread is the entity within a process that can be scheduled for execution. All threads of a process share its virtual address space and system resources. In addition, each thread maintains exception handlers, a scheduling priority, thread local storage, a unique thread identifier, and a set of structures the system will use to save the thread context until it is scheduled.
15.5 MEMORY MANAGEMENT IN WINDOWS
Memory management is the process of directing and organizing computer memory, assigning portions called blocks to various running programs to optimize overall system performance. Memory management is located in hardware, in the OS (operating system), and in programs and applications.
Memory management is the functionality of an operating system that handles or manages primary memory and moves processes back and forth between main memory and disk during execution.

15.5.1 Importance of Memory Management:
Single contiguous allocation: the simplest allocation method, used by MS-DOS. All memory is made available to a single process.
Partitioned allocation: memory is divided into different blocks or partitions. Each process is allocated memory according to its requirement.
Paged memory management: memory is divided into fixed-sized units called page frames, used in a virtual memory environment.
Segmented memory management: memory is divided into different segments (a segment is a logical grouping of the process's data or code). In this scheme, allocated memory does not have to be contiguous. A process is divided into segments, and individual segments have pages.

15.5.3 64-bit Windows OS Memory Management:
The 64-bit Windows Operating System addressable memory space is shared among active applications and the kernel, as shown in the figure. The kernel address space contains a System Page Table Entry (PTE) area (kernel memory thread stacks), Paged Pool (page tables, kernel objects), System Cache (file cache, registry), and Non-Paged Pool (images, etc.)
The default 64-bit Windows Operating System (OS) configuration offers up to 16 TB (2^44 bytes) of addressable memory space divided equally between the kernel and the user applications. With 16 TB of addressable memory available, 8 TB of virtual address (VA) space is allocated to the kernel and 8 TB of VA space to user application memory. The kernel virtual address space is shared across processes. Each 64-bit process has its own space, while each 32-bit application runs in a virtual 2 GB Windows-On-Windows (WOW64) environment.

15.5.5 Windows Uses FIFO:
•First-In First-Out page replacement algorithm (FIFO).
•The oldest page is selected for replacement.
•It suffers from Belady's anomaly.
•The page fault rate may increase even after we increase the number of frames.
•It has low performance.
•It has the maximum number of page faults.

15.5.6 Caching in Windows:
15.5.6.1 Caching:
Cache is a sort of memory that is used to increase the speed of data access. Normally, the data required for any process resides in the main memory. However, it is moved to the cache memory temporarily if it is used frequently enough. The process of storing and retrieving data from a cache is known as caching.

15.5.6.2 Advantages of Cache Memory:
Some of the advantages of cache memory are as follows:
•Cache memory is faster than main memory as it is located on the processor chip itself. Its speed is comparable to that of the processor registers, and so frequently required data is stored in the cache memory.
•The memory access time is significantly lower for cache memory as it is quite fast. This leads to faster execution of any process.
•The cache memory can store data temporarily as long as it is frequently required. After the use of any data has ended, it can be removed from the cache and replaced by new data from the main memory.

15.6 WINDOWS IO MANAGEMENT
A computer comprises several devices that offer input and output (I/O) to and from the outside world. The Windows kernel-mode I/O


manager manages the communication between applications and the interfaces provided by device drivers.

15.6.1 File Buffering:
This covers the various considerations for application control of file buffering, also known as unbuffered file input/output (I/O). File buffering is usually handled by the system behind the scenes and is considered part of file caching within the Windows operating system. Although the terms caching and buffering are sometimes used interchangeably, this topic uses the term buffering specifically in the context of explaining how to interact with data that is not being cached (buffered) by the system, where it is then mostly out of the direct control of user-mode applications. When opening or creating a file through the CreateFile function, the FILE_FLAG_NO_BUFFERING flag can be specified to disable system caching of data being read from or written to the file. While this gives complete and direct control over data I/O buffering, in the case of files and similar devices there are data alignment requirements that must be considered.

15.6.2 File Caching:
By default, Windows caches file data that is read from disks and written to disks. This means that read operations read file data from an area in system memory known as the system file cache, instead of from the physical disk. Similarly, write operations write file data to the system file cache instead of to the disk; this type of cache is referred to as a write-back cache. Caching is performed per file object.
The time at which a block of file data is flushed is partly based on the amount of time it has been kept in the cache and the amount of time since the data was last accessed in a read operation. 
This ensures that file data that is read frequently will stay available in the system file cache for the maximum amount of time. As shown by the solid arrows in the figure, a 256 KB region of data is read into a 256 KB cache "slot" in system address space when it is first requested by the cache manager through a file read operation. A user-mode process then copies the data in this slot to its own address space.


15.6.3 Synchronous and Asynchronous I/O:
There are two types of input/output (I/O) synchronization: synchronous I/O and asynchronous I/O. Asynchronous I/O is also referred to as overlapped I/O.
In synchronous file I/O, a thread starts an I/O operation and immediately enters a wait state until the I/O request has finished. A thread performing asynchronous file I/O sends an I/O request to the kernel by calling an appropriate function. If the request is accepted by the kernel, the calling thread continues processing another job until the kernel signals to the thread that the I/O operation is finished. It then interrupts its current job and processes the data from the I/O operation as required.

15.7 WINDOWS NT FILE SYSTEM
NTFS (NT File System, sometimes New Technology File System) is the file system that the Windows NT operating system uses for storing and retrieving files on a hard disk. NTFS is the Windows NT equivalent of the Windows 95 file allocation table (FAT) and the OS/2 High Performance File System (HPFS). However, NTFS offers a number of enhancements over FAT and HPFS in terms of performance, extendibility, and security.
Notable features of NTFS include:
•Use of a b-tree directory structure to keep track of file clusters.
•Information about a file's clusters and other data is kept with each cluster, not just in a governing table.
•Support for very large files (up to 2 to the 64th power bytes, around 16 exabytes, in size)
21015.6.3 Synchronous and Asynchronous I/O:There are two types of input/output (I/O) synchronization:synchronous I/O and asynchronous I/O. Asynchronous I/O is also denotedto as overlapped I/O.Insynchronous file I/O, a thread starts an I/O operation anddirectly enters a wait state till the I/O request has finished. A threadperformsasynchronous file I/Osends an I/O request to the kernel bycalling an proper function. If the request is acknowledged by the kernel,the calling thread remains processing another job tillthe kernel signs to the thread that the I/O operation is finish. It thendisturbs its current job and processes the information from the I/Ooperation as required.15.7 WINDOWS NT FILE SYSTEMNTFS (NT file system, sometimes New Technology File System)is the file system that the Windows NT operating system uses for storingand recovering files on a hard disk. NTFS is the Windows NTcorresponding of the Windows 95 file allocation table (FAT) and the OS/2High Performance File System (HPFS). However, NTFS offers a numberof enhancements over FAT and HPFS in terms of performance,extendibility, and security.Notable features of NTFS include:•Use of a b-tree directory structure to keep path of fileclusters•Info about a file's clusters and extra information is kept with eachcluster, not just a governing table•Support for very large files (up to 2 to the 64th power or around 16billion bytesin extent)
21015.6.3 Synchronous and Asynchronous I/O:There are two types of input/output (I/O) synchronization:synchronous I/O and asynchronous I/O. Asynchronous I/O is also denotedto as overlapped I/O.Insynchronous file I/O, a thread starts an I/O operation anddirectly enters a wait state till the I/O request has finished. A threadperformsasynchronous file I/Osends an I/O request to the kernel bycalling an proper function. If the request is acknowledged by the kernel,the calling thread remains processing another job tillthe kernel signs to the thread that the I/O operation is finish. It thendisturbs its current job and processes the information from the I/Ooperation as required.15.7 WINDOWS NT FILE SYSTEMNTFS (NT file system, sometimes New Technology File System)is the file system that the Windows NT operating system uses for storingand recovering files on a hard disk. NTFS is the Windows NTcorresponding of the Windows 95 file allocation table (FAT) and the OS/2High Performance File System (HPFS). However, NTFS offers a numberof enhancements over FAT and HPFS in terms of performance,extendibility, and security.Notable features of NTFS include:•Use of a b-tree directory structure to keep path of fileclusters•Info about a file's clusters and extra information is kept with eachcluster, not just a governing table•Support for very large files (up to 2 to the 64th power or around 16billion bytesin extent)
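The benefit of the b-tree directory structure listed above is that entries stay sorted, so a lookup can binary-search instead of scanning a flat directory table. A simplified illustration in Python (real NTFS index records are considerably more involved; the file names here are arbitrary):

```python
import bisect

# A directory kept sorted by name, as in a b-tree index leaf.
entries = sorted(["pagefile.sys", "boot.ini", "ntldr", "Windows", "Users"])

def lookup(name):
    """Binary search over the sorted entries: O(log n) per lookup,
    instead of a linear scan of a flat FAT-style directory table."""
    i = bisect.bisect_left(entries, name)
    return i < len(entries) and entries[i] == name
```

For example, `lookup("ntldr")` returns True while `lookup("missing.txt")` returns False, each after only a handful of comparisons.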

Page 211

•An access control list (ACL) that lets a server administrator control who can access specific files
•Integrated file compression
•Support for names based on Unicode
•Support for long file names as well as "8 by 3" names
•Data security on both removable and fixed disks

15.7.1 Architecture of Windows NT:
•The design of Windows NT, a line of operating systems produced and sold by Microsoft, is a layered design that consists of two main components, user mode and kernel mode.
•To process input/output (I/O) requests, they use packet-driven I/O, which utilizes I/O request packets (IRPs) and asynchronous I/O.
•Kernel mode in Windows NT has full access to the hardware and system resources of the computer. The Windows NT kernel is a hybrid kernel; the architecture comprises a simple kernel, hardware abstraction layer (HAL), drivers, and a range of services (collectively named the Executive), which all exist in kernel mode.
•User mode in Windows NT is made up of subsystems capable of passing I/O requests to the appropriate kernel-mode device drivers by using the I/O manager.
•The kernel is also responsible for initializing device drivers at boot-up.
•Kernel-mode drivers exist in three levels: highest-level drivers, intermediate drivers and low-level drivers.
•The Windows Driver Model (WDM) exists in the intermediate layer and was mainly designed to be binary- and source-compatible between Windows 98 and Windows 2000.
•The lowest-level drivers are either legacy Windows NT device drivers that control a device directly or plug and play (PnP) hardware bus drivers.
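The packet-driven model described above can be sketched as a request packet handed down a stack of driver layers. This is a toy Python model, not real NT driver code; all names are invented for illustration:

```python
class IORequestPacket:
    """Toy stand-in for an NT I/O request packet (IRP): it carries the
    requested operation and records the driver layers it passes through."""
    def __init__(self, operation):
        self.operation = operation
        self.trace = []

def highest_level_driver(irp, next_layer):
    irp.trace.append("highest")        # e.g. a file-system driver
    return next_layer(irp)

def intermediate_driver(irp, next_layer):
    irp.trace.append("intermediate")   # e.g. a WDM class driver
    return next_layer(irp)

def lowest_level_driver(irp):
    irp.trace.append("lowest")         # controls the device directly
    return "completed: " + irp.operation

# The I/O manager hands the packet to the top of the stack; each layer
# forwards it downward until the lowest-level driver completes it.
irp = IORequestPacket("read sector 42")
status = highest_level_driver(
    irp, lambda p: intermediate_driver(p, lowest_level_driver))
# status == "completed: read sector 42"
# irp.trace == ["highest", "intermediate", "lowest"]
```

The trace shows the three driver levels named in the bullets, in the order an IRP would traverse them.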

Page 212

15.7.2 Layout of NTFS volume:

The Windows NT file system (NTFS) offers a combination of performance, reliability, and compatibility not found in the FAT file system.

Sections of the layout of an NTFS volume:
•Partition Boot Sector
•Master File Table (MFT)
•System Files
•File Area

15.8 WINDOWS POWER MANAGEMENT

The Windows operating system offers a complete and system-wide set of power management features. This enables systems to extend battery life and save energy, reduce heat and noise, and help ensure data reliability. The power management functions and messages retrieve the system power status, notify applications of power management events, and notify the system of each application's power requirements.
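The Partition Boot Sector listed in section 15.7.2 begins with a fixed binary layout: a jump instruction, the OEM ID "NTFS    ", then BIOS parameter block fields. The sketch below hand-builds those first few bytes and unpacks them; the numeric values are made-up sample data, and only a small subset of the real 512-byte sector is modelled:

```python
import struct

# Fake NTFS boot-sector prefix: 3-byte jump, 8-byte OEM ID,
# bytes-per-sector (u16 little-endian), sectors-per-cluster (u8).
sample = b"\xeb\x52\x90" + b"NTFS    " + struct.pack("<HB", 512, 8)

jump, oem_id, bytes_per_sector, sectors_per_cluster = struct.unpack(
    "<3s8sHB", sample)

# The OEM ID is what identifies the volume as NTFS.
cluster_size = bytes_per_sector * sectors_per_cluster
# cluster_size == 4096 (512-byte sectors, 8 sectors per cluster)
```

Cluster size matters because the MFT and File Area are both addressed in clusters, so these two boot-sector fields determine how every later structure on the volume is located.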

Page 213

15.8.1 Edit Plan Settings in Windows:
15.8.2 Why Do We Need Power Management?:

Windows power management makes computers rapidly available to users at the touch of a button or key. It also ensures that all elements of the system (applications, devices, and the user interface) can take advantage of the vast improvements in power management technology and capabilities.

15.8.3 What are the benefits of Power Management?:
•Eliminates start-up and shutdown delays. The computer need not perform a full system boot when exiting the sleep state, or a complete system shutdown when the user initiates the sleep state.
•Allows automated tasks to run while the computer is in the sleep state. The Task Scheduler allows the user to schedule applications to run; scheduled events can run even after the system is in the sleep state.
•Enables per-device power management.
•Enables users to create power schemes, set alarms, and specify battery options through the Power Options application in Control Panel. The operating system manages all power management activities, based on power policy settings. For more information, see the help file included with the Power Options application.
•Improves power efficiency. Power efficiency is especially important on portable computers, where reducing system power consumption translates directly to lower energy costs and longer battery life.

Page 214

15.8.4 System Power Status:

The system power status specifies whether the source of power for a computer is a system battery or AC power. For computers that use batteries, the system power status also specifies how much battery life remains and whether the battery is charging.

Here we discuss only six system power states:
•Working State (S0)
•Sleep State (Modern Standby)
•Sleep State (S1–S3)
•Hibernate State (S4)
•Soft Off State (S5)
•Mechanical Off State (G3)

15.9 SECURITY IN WINDOWS

One of the basic principles of Windows security is that each process runs on behalf of a user, so each running process is associated with a security context. A security context is a bit of cached data about a user, including her SID, group SIDs, and privileges. A security principal is an entity that can be positively identified and verified via a technique known as authentication. Security principals in Windows are assigned on a process-by-process basis, via a small kernel object called a token. Each user, computer or group account is a security principal on a system running Windows Server 2003, Windows 2000, or Windows XP. Security principals obtain permissions to access resources such as files and folders.

There are 3 types of security principals:
1) User principals
2) Machine principals
3) Service principals

Security Identifier (SID): Users reference their accounts by usernames, but the operating system internally references accounts by their security identifier. SIDs are unique in their scope (domain or local) and are never reused, so they uniquely identify each user and group account in Windows. By default, a SID comprises several parts.

Access Token: A token is a kernel object that caches part of a user's security profile, including the user SID, group SIDs, and privileges. A token consists of the following components: account ID, group IDs, rights, owner, primary group, source, type, impersonation level, statistics, restricted SIDs, and session ID.
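A SID has a well-known textual form such as S-1-5-21-...-500: the literal "S", a revision, an identifier authority, and a list of sub-authorities, the last of which is the relative ID (RID). A minimal parser for that textual form (the sample SID value below is made up; only the format is real):

```python
def parse_sid(sid_string):
    """Split a textual SID like 'S-1-5-21-...-500' into its revision,
    identifier authority, and list of sub-authorities."""
    parts = sid_string.split("-")
    if parts[0] != "S":
        raise ValueError("not a SID string")
    revision = int(parts[1])
    identifier_authority = int(parts[2])
    sub_authorities = [int(p) for p in parts[3:]]
    return revision, identifier_authority, sub_authorities

# Identifier authority 5 is NT AUTHORITY; a final RID of 500
# conventionally denotes the built-in Administrator account.
rev, auth, subs = parse_sid("S-1-5-21-1004336348-1177238915-500")
# rev == 1, auth == 5, subs[-1] == 500
```

This is why SIDs are "unique in their scope": the middle sub-authorities identify the issuing machine or domain, and only the final RID distinguishes accounts within it.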

Page 215

Account Security: User accounts are the core unit of network security. In Windows Server 2003 and Windows 2000, domain accounts are kept in the Active Directory database, whereas local accounts are kept in the Security Accounts Manager (SAM) database. The passwords for the accounts are stored and protected by the System Key. Though the accounts are protected by default, we can secure them even further: go to Administrative Tools in Control Panel (only when you are logged in as an administrator) and click on "Local Security Settings".

Account Lockout Policies: Account lockout duration: locks out the account for a specific period (1–99,999 minutes). This feature exists in Windows Server 2003 and Windows 2000, but not in Windows XP.

Password Policies:
•Enforce password history: enforces password history (0–24).
•Maximum password age: sets the maximum password age (0–999).
•Minimum password age: sets the minimum password age (0–999).
•Minimum password length: sets the minimum password length (0–14).
•Password must meet complexity requirements: forces users to set complex alphanumeric passwords.
•Store passwords using reversible encryption for users in the domain: we enable this if we want the password to be decrypted and compared to plain text using protocols like the Challenge Handshake Authentication Protocol (CHAP) or the Shiva Password Authentication Protocol (SPAP).

Rights: Rights are actions or operations that an account can or cannot perform. User rights are of two types:
a) Privileges
b) Logon rights

Where are the passwords stored on the system?:
The system stores the passwords in the machine's password stash, i.e., under HKLM\Security\Policy\Secrets. Type "at 9:23am /interactive regedit.exe", substituting whatever time is appropriate (make it one minute in the future). Once regedit fires up, carefully look at the subkeys under HKLM\Security\Policy\Secrets. You're looking at the machine's password stash, more formally known as the LSA private data store.
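The "password must meet complexity requirements" policy above can be approximated as a predicate. The rule sketched here (minimum length plus characters from at least three of four character classes) is an illustrative simplification, not Microsoft's exact algorithm:

```python
import string

def meets_complexity(password, min_length=8):
    """Illustrative password-complexity check: length >= min_length and
    characters drawn from at least three of the four classes
    (uppercase, lowercase, digit, symbol)."""
    classes = [
        any(c in string.ascii_uppercase for c in password),
        any(c in string.ascii_lowercase for c in password),
        any(c in string.digits for c in password),
        any(c in string.punctuation for c in password),
    ]
    return len(password) >= min_length and sum(classes) >= 3

# meets_complexity("Tr0ub4dor!")  -> True  (length 10, all four classes)
# meets_complexity("password")    -> False (only lowercase letters)
```

A policy engine would evaluate such a predicate when a password change is requested, rejecting the change rather than storing a weak password.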
The operating system also, by default, caches (stores locally) the last 10 passwords. There are registry settings to turn this feature off or restrict the number of accounts cached:
a) Location: HKEY_LOCAL_MACHINE\Software\Microsoft\Windows NT\CurrentVersion\Winlogon\
b) Type: REG_SZ
c) Key: CachedLogonsCount (default value 10)
d) Recommended value: 0–50, depending on your security needs.

Page 216

15.10 SUMMARY

In this chapter, we covered the history of Windows, processes and threads, system structure, memory management in Windows, Windows I/O management, the Windows NT file system, Windows power management, and security in Windows.

15.11 LIST OF REFERENCES

1. Modern Operating Systems, Andrew S. Tanenbaum, Herbert Bos, Pearson, 4th edition, 2014
2. Operating Systems – Internals and Design Principles, William Stallings, Pearson, 8th edition, 2009
3. Operating System Concepts, Abraham Silberschatz, Peter B. Galvin, Greg Gagne, Wiley, 8th edition
4. Operating Systems, Godbole and Kahate, McGraw Hill, 3rd edition

15.12 BIBLIOGRAPHY

https://www.tutorialspoint.com/
https://www.geeksforgeeks.org/
https://www.javatpoint.com/java-tutorial
https://guru99.com
https://docs.microsoft.com/
https://www.installsetupconfig.com/

15.13 UNIT END QUESTIONS

1. Explain the architecture of Windows.
2. Write a short note on memory management in Windows.
3. Explain process management in Windows.
4. Write a short note on security in Windows.

*****