* Page Size

The page size is usually hardwired, so we often don't have a choice. But we can always simulate a smaller page size with a larger one: you just use the larger page in place of the smaller one when you page fault / evict.

Pros and Cons of a Larger Page Size

Most modern systems use a 4K page size. That's a little small; it would make more sense to increase it. The smallest page size ever was 512 bytes, on the DEC VAX. Page sizes generally don't get higher than 64K, though some machines that copy data all day have page sizes up to 4M.

* Less overhead - same overhead per fault, but fewer faults.
* More internal fragmentation.
* Each TLB entry covers more address space, meaning fewer TLB misses.
* Smaller page tables (fewer entries).
* More total memory needed to cover the same active localities.
* Greater delay (transfer time) per page fault.
* Less overhead for running the replacement algorithm (fewer page frames to look at).

Some rough visual aids:

|
|\
| \
|  \        /
|   \------/
+--------------
 Y: Page Faults, X: Page Size

|     ------
|    /      \
|   /
|  /
| /
|/
+--------------
 Y: Performance, X: Page Size

Page faults will never realistically reach 0. You just can't do that.

* What can we and can't we page? The OS?

* Can't page the pages that do page fetch and replacement. These pages are said to be wired down. The same goes for related pieces of code - e.g. the dispatcher.
* Also can't page out anything where an immediate response is required:
    * Interrupt and trap service routines.
    * Real-time control code.

Now I/O and VM.

Question: Why is I/O so fundamentally dissociated from virtual memory?

Answer: I/O is connected to the memory bus, which leads directly to memory. Translation is hard; you would need a really smart I/O device to handle translation of virtual addresses. It wouldn't necessarily be slower, but it would be way more complicated, and your I/O device would also be OS-dependent. Your laptop is built in Taiwan.

* What happens if we're doing I/O while using virtual memory?
* The I/O system deals only with real addresses.
* So the OS must translate the virtual buffer address to a real buffer address.
* We'd better make sure the buffer doesn't get paged out.
    * Usually done with a lock bit that "locks" the page into memory (kept in the core map).
* Must be careful of page boundaries - since I/O is done with real addresses.
    * Break the transfer into several, each inside a page.
    * Transfer is non-contiguous (if the I/O system is smart enough).
* Make sure I/O buffers are page aligned.
* Put them in a contiguous area of real memory, managed by the OS.

Studying paging algorithms - this is hard, for several reasons.

* Could use a math model - but no such model is acceptable.
* Could use random-number-driven simulation - same problem as the mathematical model (it sucks).
* Could do experiments on a real system - but that's difficult and time consuming, you need access to the machine, and it may not be reproducible.

Best: use Trace Driven Simulation - get a program address trace, and use it to experiment with (simulate) various algorithms.

* A program address trace is the sequence of virtual addresses generated by a program.
* Note that the virtual address trace is independent of the page replacement algorithm, or anything else.

There are several ways to get a program address trace:

* Write a machine interpreter - it generates the trace as it executes.
* Use the execute instruction on the IBM 370.
* Use a hardware monitor.
* Use a trace trap facility - trap on every instruction.
* Use microcode modifications.
* Instrument the object code or assembly code to write trace records, either for every instruction or for every load, store, and branch.
* Take a page fault on every reference and generate a trace record.

* Comparison of Algorithm Performance

There's a plot here of the different algorithms, with the Y axis being page faults and the X axis being the memory size (number of page frames). FIFO, Random, Clock, LRU, Working Set, and Optimal are all on this plot.
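As a concrete example of trace-driven simulation, here's a minimal sketch of a simulator that replays one address trace through FIFO and LRU replacement. The trace, page size, and frame count below are made up for illustration:

```python
from collections import OrderedDict, deque

PAGE_SIZE = 4096  # assumed 4K pages


def fifo_faults(trace, num_frames):
    """Count page faults for FIFO replacement over a virtual-address trace."""
    frames = deque()   # oldest resident page at the left
    resident = set()
    faults = 0
    for addr in trace:
        page = addr // PAGE_SIZE
        if page not in resident:
            faults += 1
            if len(frames) == num_frames:        # evict the oldest page
                resident.discard(frames.popleft())
            frames.append(page)
            resident.add(page)
    return faults


def lru_faults(trace, num_frames):
    """Count page faults for LRU replacement over the same trace."""
    frames = OrderedDict()   # least recently used page at the front
    faults = 0
    for addr in trace:
        page = addr // PAGE_SIZE
        if page in frames:
            frames.move_to_end(page)             # touched: now most recent
        else:
            faults += 1
            if len(frames) == num_frames:        # evict the least recent page
                frames.popitem(last=False)
            frames[page] = True
    return faults


# Made-up trace with some locality: pages 1 and 2 are touched constantly.
trace = [p * PAGE_SIZE for p in [1, 2, 3, 1, 2, 4, 1, 2, 5, 1, 2, 3]]
print(fifo_faults(trace, 3))  # 8 faults
print(lru_faults(trace, 3))   # 6 faults
```

Because the trace is independent of the algorithm, the same list of addresses feeds every policy, which is exactly what makes the comparison fair.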
It'd be really hard to reproduce here in ASCII, but here's a rough try anyway:

|
|\ \ \
| \ \ \
|  \ \ ------------- FIFO, RAND
|   \ ______________ LRU, Clock
|    --------------- Working Set
|      ------------- Opt
+--------------------

* Is there anything we can do to minimize page faults? In short, the answer is not really.

* Write our algorithms in an efficient manner.
    * Matrix transpose as an example: divide the matrix into submatrices and store it that way (it doesn't work if you only write the algorithm that way). Submatrices multiply, add, and transpose fine. Used in math programming problems.
* Do program restructuring - reorganize the pieces of a program within the pages, so that the number of faults is minimized.
* Combine memory allocation with scheduling to produce good results.

Thumper trucks that smack the ground.

DONE WITH PAGING!

The I/O Devices lecture starts here; Prof. Smith didn't actually make it very far.

The Terminal: a crappy antique, comprising a screen and keyboard that were hooked up to an actual computer. The screen and keyboard are actually two distinct entities; sometimes they weren't even connected to each other!

Prof Smith: Does anyone know how slow these things were?
Keith: Yes. The teletype could do like 10 characters a second. It was slow.

What's wrong with airlines these days?
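Going back to the blocked matrix transpose mentioned under minimizing page faults, here's a minimal sketch of the idea. The BLOCK size and the matrix are made up; in a real setting the block would be chosen so one tile of each matrix fits in the resident pages (or cache):

```python
BLOCK = 2  # hypothetical tile size; in practice sized to the page/cache


def transpose_blocked(a):
    """Transpose a square matrix one BLOCK x BLOCK submatrix at a time.

    All the work for one tile finishes before moving to the next, so the
    source and destination pages for that tile stay resident while in use.
    """
    n = len(a)
    out = [[0] * n for _ in range(n)]
    for i0 in range(0, n, BLOCK):
        for j0 in range(0, n, BLOCK):
            # Touch only one submatrix of a and one of out in this pass.
            for i in range(i0, min(i0 + BLOCK, n)):
                for j in range(j0, min(j0 + BLOCK, n)):
                    out[j][i] = a[i][j]
    return out


m = [[1, 2, 3, 4],
     [5, 6, 7, 8],
     [9, 10, 11, 12],
     [13, 14, 15, 16]]
print(transpose_blocked(m)[0])  # [1, 5, 9, 13]
```

The naive row-by-row transpose strides across every row of the destination for each source row; the blocked version confines each pass to a few pages, which is the locality win the notes are pointing at.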