CS162 Lecture, Monday 1/31/2005

Past and present announcements:
- The CS162 Reader has two parts: a basic reader (approx. $25) and the Nachos source code (approx. $13). Buy the readers at Copy Central on Hearst/Euclid near the North Gate.
- The bookstore will ship textbooks back, so buy them now if you want them.
- Read the handout before asking questions.
- You may post your hw1 statistics on the newsgroup.
- If you have not signed up for lecture notes, contact the professor.
- Section 103 has been moved to Thursday 5-6pm in 187 Dwinelle and will be hosted by Adrian.

Lecture Notes:
Note that these lecture notes are intended to accompany the lecture overhead slides, which should be posted online soon. Do not rely on them solely for your studies.

"Foreground / Background Scheduling"
- Foreground/Background Scheduling has two queues.
- The Foreground queue has higher priority than the Background queue.
- The CPU fetches jobs from the Foreground queue until it is empty, and only then fetches jobs from the Background queue. Jobs that run for a long time may get kicked into the Background queue.
- You can use any scheduling algorithm within each queue.
- Assignment of jobs to queues can be based on many considerations, e.g. short jobs vs. long jobs.

              ------<------------------------------
              |         ____________             |
              |        |            |            |
    Jobs -----+------->| Foreground |------+     |
              |        |____________|      |     |
              |                            +-->[CPU]-----> Finished Jobs
              |         ____________       |
              +------->| Background |------+
                       |____________|

    (Unfinished jobs loop back from the CPU into the queues.)

"Multilevel Foreground / Background Scheduling"
- A variation on Foreground/Background Scheduling, but with additional queues.
- Jobs can be assigned to a queue (group level) depending on some criterion.

"Exponential Queue" or "Multilevel Feedback Queue"
- New jobs are given a short time slice; if a job does not complete within it, the job is moved down a level and its time slice is doubled for the next round. The longer time slices lower the switching overhead.
- The CTSS system from MIT used 4 queues.

              ------<------------------------------
              |         __________               |
              |        |          |              |
    Jobs -----+------->| queue 1  |------+       |
              |        |__________|      |       |
              |         __________       |
              +------->| queue 2  |------+-->[CPU]-----> Finished Jobs
              |        |__________|      |
              |            ...           |
              |         __________       |
              +------->| queue 4  |------+
                       |__________|

Question: How do we adjust scheduling algorithms to get good performance?
Short answer: by avoiding starvation.

"Fair Share Scheduling"
- Keeps a history of each process's run time (CPU usage) and gives the highest priority to jobs with low CPU usage, and vice versa.

"BSD Unix Scheduling"
- Implements a multilevel feedback queue with 128 queues in 4 group levels.
- Each job runs for a quantum. When the quantum expires, the job is put back at the end of the queue it was taken from.
- The quantum is set at 0.1 seconds in BSD 4.3. This was found to be the longest a quantum could be without "jerky" response.
- Starvation occurs when a job gets kicked down too far.
- A higher-priority process is only run at the end of the current quantum.
- User priority = PUSER + PCPU + PNICE
  * PUSER: based on the user type
  * PCPU: weighted load factor that increases as the process uses CPU time
  * PNICE: a number that can be used to reduce a job's priority
- Priority levels 0-49 are reserved for system processes and 50-127 for user processes.

Joke: KISS = Keep It Simple, Stupid.
Another joke about how to crucify students who cheat.

"DEC Scheduling Algorithm"
- The VAX came out in the 70's and cost half a million dollars. A few years later the DEC workstation was released and ran almost as fast as the VAX, yet cost only 2% as much, at $10k. So the workstation was then used for timesharing, and the response time was really slow. Professor Smith's theory is that the workstation's quantum was set far too large.
"Scheduling Countermeasures" (how to make a scheduling algorithm perform poorly)
- Scheduling algorithms are arbitrary, but most successful ones assume many short jobs and few long ones. So make long jobs look like short jobs.
- Do spurious I/O to get higher priority. Professor notes: "do not do this, as it is inconsiderate."

The professor also discussed briefly how to do discrete event simulation for Assignment 1. One way of doing this is to keep an event list of scheduled events and loop over it: get the next event, update statistics, update the system state and the event list, and repeat with the next event in the list.

Courtesy of the TAs, it has been announced in class that it is OK to post results on the newsgroup. Your results should be within 1-2% of the actual results; differences come from different assumptions in corner cases. This information is probably no longer pertinent by the time you read this. For more information, see the lecture notes and the assignment.

The professor also mentioned specialized languages for simulation that have statistics and modules built in. Simulations can be put together much faster in them, but they run slower due to all the included features.

Next major topic: "Independent and Cooperating Processes" (Synchronization)
- Synchronization deals with correctness.

"Independent Processes"
- State is not shared in any way.
- Deterministic and reproducible.
- Can be stopped and restarted with no "bad" effects.

Examples of non-independent processes:
- One process generates input for, or receives input from, another process.
- Processes that are part of the same command are probably not independent.
- Processes sharing files or other resources.

"Cooperating Processes"
- At run time, the machine is not deterministic.
- Suppose we start with a quiescent machine in the same starting condition; will the same things happen?
  * If you do I/O, can you guarantee the same angular position of the disk head?
  * Writing to disk requires the free list to have the same blocks, of the same size, in the same order.
  * If you use the clock, the clock must be set to exactly the same value.

IN THE END, WHAT WE WANT IS THE SAME MACRO BEHAVIOR (i.e. result), BUT NOT NECESSARILY THE SAME MICRO BEHAVIOR! Micro behavior should not affect your result.

- Why would we allow cooperating processes?
  * To allow multiple users.
  * To share files, for instance databases or bank accounts.
  * Overlap, e.g. one read with many computations.
  * To divide a job into many sub-jobs.

Motivating example: suppose we have 2 processes running in parallel with the following code:

    Process A                 Process B
    i = 0;                    i = 0;
    while (i < 10) {          while (i > -10) {
        i = i + 1;                i = i - 1;
    }                         }
    printf("A");              printf("B");

Note that the variable i is shared between the two snippets of code. In addition, all loads and stores are considered atomic. Unless we know the order of context switches, we won't know the sequence of operations. On a hyper-threaded (or multiprocessor) CPU the two loops can run truly in parallel, and neither process might ever terminate.

"Atomic Operation"
- An operation that happens in its entirety or not at all. It cannot be interrupted in the middle or left half done.
- On most computers you can assume load and store operations are atomic.
- If you turn off interrupts, a code sequence is, in a sense, atomic.

Another motivating example: suppose there are 2 roommates who are incredibly unobservant, have no peripheral vision, and it is early in the morning. They both want milk and look in the fridge. If it is empty they will try to buy milk, but they don't want too much milk (2) or no milk (0). They want exactly one (1) milk in the fridge.

Joke: We don't want too much milk, but we never hear about too much BEER!

Now consider the 4 following code snippets, each of which has a bug in it.

A & B:
    if (no milk) {
        if (no note) {
            leave note
            buy milk
            remove note
        }
    }

BUG: An interruption can happen at any time, so A could see "no milk," then B takes over, runs its code entirely, and then A resumes.
At which point there will be too much milk. WE NEED MUTUAL EXCLUSION.

Mutual exclusion means allowing only one process at a time to do something. The part of the code where this matters is called the critical section; critical sections must not be interrupted. To get this we use some sort of locking mechanism. In the case above the lock was the note, although it did not succeed. In short: lock before use, unlock when done, and wait if it is locked.

A:
    if (no note) {
        if (no milk) {
            buy milk
            leave note
        }
    }

B:
    if (note) {
        if (no milk) {
            buy milk
            remove note
        }
    }

BUG: B relies on A having bought the milk. This could lead to indefinite waiting, e.g. if B takes a vacation. So it fails a requirement: if there is no milk, somebody must buy one unit of milk.

A:
    leave note A
    if (no note B) {
        if (no milk) {
            buy milk
        }
    }
    remove note A

B:
    leave note B
    if (no note A) {
        if (no milk) {
            buy milk
        }
    }
    remove note B

BUG: Both A and B can leave their notes at the same time. Then who will buy the milk? Nobody.

A:
    leave note A
    while (note B) {
        wait
    }
    if (no milk) {
        buy milk
    }
    remove note A

B:
    leave note B
    if (no note A) {
        if (no milk) {
            buy milk
        }
    }
    remove note B

BUG: This solution sort of works. But if there is a failure in B's critical section — B goes out to buy milk but instead goes somewhere else and never brings milk back — then A waits forever. In addition, this solution does not allow for more than two roommates, and extending it would be hard.

This concludes the lecture notes; however, you can look up many of these topics in the textbook. If the professor has not talked about something, you don't need to know it — the purpose of the reading is to let you look at what was unclear. And if the book and the professor don't agree, remember: the professor writes the exam!