Final Exam Notes 1550, Study Guides, Projects, Research of Computer Communication Systems

A description of the final exam notes.

Typology: Study Guides, Projects, Research
Academic year: 2021/2022
Uploaded on 04/25/2023

CS 1550 Operating Systems - Final Exam

Terms in this set (53)

Translation lookaside buffer (TLB): A cache for the translations of pages that exhibit locality.

Types of locality: Temporal and spatial.

Software-managed TLB: All 16 entries belong to one process. When the process switches, the OS swaps out the old process's cache contents for the new process's.

Hardware-managed TLB: The hardware handles the caching. Each cache entry has to carry a process tag, so the cache can be "polluted" with entries from different processes.

3 reasons for cache "misses": Compulsory (never seen before), conflict (hash collision), and capacity (entry has been evicted).

Inverted page table: Instead of a PTE per page, keep one per frame.

Demand paging: Load pages only when they are requested.

Pre-paging: Guess which pages a process will likely need and load them in advance.

"Optimal" page replacement algorithm: Evict the page that won't be needed until furthest in the future.
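The software-managed TLB card above (16 entries, all belonging to one process, swapped on a context switch) can be sketched as a small lookup table. This is an illustrative toy, not from the notes: the `(pid, page)` tagging mirrors the hardware-managed card's process tags, and the LRU eviction policy is an assumption.

```python
from collections import OrderedDict

class TLB:
    """Toy TLB sketch: maps (process_id, page) -> frame, evicting LRU."""

    def __init__(self, capacity=16):          # 16 entries, as in the notes
        self.capacity = capacity
        self.entries = OrderedDict()

    def lookup(self, pid, page):
        key = (pid, page)
        if key in self.entries:
            self.entries.move_to_end(key)     # refresh recency on a hit
            return self.entries[key]
        return None                           # TLB miss: walk the page table

    def insert(self, pid, page, frame):
        if len(self.entries) >= self.capacity:
            self.entries.popitem(last=False)  # capacity miss source: evict LRU
        self.entries[(pid, page)] = frame

tlb = TLB(capacity=2)
tlb.insert(1, 0, 5)
tlb.insert(1, 1, 7)
assert tlb.lookup(1, 0) == 5      # hit
tlb.insert(1, 2, 9)               # full: evicts (1, 1)
assert tlb.lookup(1, 1) is None   # capacity miss
```

A returned `None` corresponds to the card's "miss" cases: a key never inserted is a compulsory miss, while one evicted by `popitem` is a capacity miss.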

"Not Recently Used" page replacement algorithm: Evict the oldest page, preferring pages that are not dirty (order: unreferenced clean; unreferenced dirty; referenced clean; referenced dirty).

What is the problem with the "FIFO" page replacement algorithm?: It has a bad notion of time: it uses page load time instead of page use time.

"Second Chance" page replacement algorithm: If the oldest page has a reference bit of 1, give it a second chance by making it the newest page and marking it as unreferenced.

"Clock" page replacement algorithm: The same as Second Chance, but with a circular queue. To mark a page as newest, just move the pointer past it.

"Least Recently Used" (LRU) page replacement algorithm: Evict the least recently used page, based on a timestamp.

What is the problem with the LRU page replacement algorithm?: Storing and reading timestamps at the granularity we need would take too much time.

"Aging Scheme" page replacement algorithm: An approximation of LRU. Periodically shift each page's referenced bit onto an 8-bit counter and clear the bit. To evict a page, choose the page whose counter is smallest.

Working set: The set of pages used by the k most recent memory references. w(k, t) is the size of the working set at time t.
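The Clock policy described above can be sketched directly: advance a pointer around a circular queue of frames, clearing reference bits until an unreferenced page is found. The frame list and reference-bit dictionary below are illustrative assumptions.

```python
def clock_evict(frames, referenced):
    """Clock page replacement sketch.
    frames: circular list of resident page numbers.
    referenced: dict mapping page -> reference bit (0 or 1).
    Returns the index of the frame to evict."""
    hand = 0
    while True:
        page = frames[hand]
        if referenced[page]:
            referenced[page] = 0             # second chance: clear the bit
            hand = (hand + 1) % len(frames)  # move the pointer past it
        else:
            return hand                      # unreferenced page: evict it

frames = [10, 11, 12]
bits = {10: 1, 11: 0, 12: 1}
assert clock_evict(frames, bits) == 1   # page 11 had reference bit 0
```

If every page is referenced, the hand sweeps a full circle clearing bits and then evicts the page it started at, which matches Second Chance behavior without physically requeueing pages.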

When is virtual memory touched?

  • Process creation
  • During process execution (translating pages into physical frames)
  • Page fault time
  • Process termination time

How can you deal with running out of address space?: Create I- and D-spaces (instruction and data) that have separate address spaces. This is segmentation: each address is contextualized into a particular segment.

Block devices: Indexed, random access.

Character devices: Stream-oriented (not indexed, not random access).

Device controllers: The electronic component of an I/O unit, in contrast with the physical component.

Ports (port-mapped I/O): Give each device a separate address space. Con: you have to use special instructions, which means syscalls to access that other address space. Pro: going through syscalls means the OS gets time to run code.

Memory-mapped I/O: Allows an address to be backed by something other than a physical frame. Con: less address space. Con: since the OS isn't involved, it can't do sanity checks on modified values. Pro: no context switches.

Interrupt controller: A queue for serializing incoming interrupts.

DMA Controller (pros/cons): Direct Memory Access. Pro: doesn't deliver interrupts to the OS until the operation is complete. Cons: may require more power; costs money to manufacture.

I/O Software Goals:

  • Device independence
  • Uniform naming (device files handled like any other file)
  • Synchronous vs. asynchronous transfers
  • Buffering (how much and where?)
  • Sharable vs. dedicated devices

I/O Software Layers (top to bottom): User-level I/O software and libraries; device-independent OS software; device drivers; interrupt handlers; hardware.

Four types of buffering: Unbuffered; buffering in user space; buffering in the kernel; double buffering in the kernel.

Why use double buffering instead of single?: With single buffering, the user-space buffer might generate a page fault, so filling it can take too long. With double buffering, once one kernel buffer fills, it is transferred to user space while the second buffer fills.

RAID 0: Instead of writing one file to a single disk, split the file into chunks and write each chunk to a different disk. Since this is done in parallel, we save time. If one disk goes bad, zero files are saved.
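The RAID 0 card above can be sketched as round-robin striping. This is an illustrative toy, not a real RAID implementation: the 4-byte chunk size and the in-memory "disks" are assumptions.

```python
def stripe(data: bytes, n_disks: int, chunk: int = 4):
    """RAID 0 sketch: deal fixed-size chunks of data round-robin
    across n_disks. No redundancy: lose one disk, lose the file."""
    disks = [bytearray() for _ in range(n_disks)]
    for i in range(0, len(data), chunk):
        disks[(i // chunk) % n_disks] += data[i:i + chunk]
    return disks

def unstripe(disks, chunk: int = 4) -> bytes:
    """Reassemble the original byte stream from the striped disks."""
    out, rnd = bytearray(), 0
    while True:
        wrote = False
        for d in disks:                       # read one chunk per disk, in order
            part = d[rnd * chunk:(rnd + 1) * chunk]
            if part:
                out += part
                wrote = True
        if not wrote:
            return bytes(out)
        rnd += 1

assert unstripe(stripe(b"ABCDEFGHIJ", 2)) == b"ABCDEFGHIJ"
```

In a real array the per-disk writes would be issued in parallel, which is where the speedup in the card comes from.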

LOOK: Record the min and the max pending request. When we reach one of those extremes, reverse direction. Fairness note: jobs near the center of the disk see a lower variance in wait time than those at the edges (true for both SCAN and LOOK).

C-SCAN: Since SCAN and LOOK both weight service toward the center of the disk, instead scan to the top without reading along the way, then jump back. This full-disk seek doesn't have to be as precise as the other seeks, so it can be faster.

C-LOOK: Adds the min/max bound, just as LOOK did for SCAN. This reduces the total number of cylinders scanned over, but it does mean the full-stroke seek has to be a bit more precise.

What are some reasons why we might find ourselves in the OS besides preemption or blocking?

  • Hardware interrupts (e.g., devices; not only for the current process)
  • Non-blocking syscalls
  • Page faults

Soft timers: We don't need a hard deadline; essentially a hard timer with an unreasonably long quantum.

BIOS: Basic Input/Output System; the firmware that runs when you press the power button.

Branching factor of the root in an inode file system: Size of block / size of block address.
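The LOOK sweep from the disk-scheduling cards above can be sketched as a simple ordering function: service requests in the current direction up to the furthest pending request, then reverse. The cylinder numbers below are illustrative, and requests exactly at the head are assumed to be served on the upward pass.

```python
def look(head, requests, direction=+1):
    """LOOK disk scheduling sketch: sweep toward the furthest pending
    request (the recorded min or max), then reverse.  C-LOOK would
    instead jump back to the minimum and always sweep upward."""
    pending = sorted(requests)
    upper = [r for r in pending if r >= head]   # cylinders at or above the head
    lower = [r for r in pending if r < head]    # cylinders below the head
    if direction > 0:
        return upper + lower[::-1]              # sweep up, then back down
    return lower[::-1] + upper                  # sweep down, then back up

assert look(50, [10, 60, 40, 90], +1) == [60, 90, 40, 10]
```

Note how requests near the middle (40, 60) are reached soon in either direction, while edge requests (10, 90) can wait a full sweep; that is the variance asymmetry the fairness note describes.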

What's the formula for calculating how many levels in a tree you need to store a file of a certain size?: log[base branching factor](file size / block size)

Problems with using linked lists for free-space tracking:

  • Freeing a block can result in an infinite loop
  • Linked lists will make you reallocate the most recently used blocks (harder to recover deleted files)
  • It's possible to double-free a block, so two people requesting blocks could get the same block.

File Block Cache: Store file blocks in RAM and evict based on LRU. If this cache is "write-through," whenever you write to a file the data goes to both the cache in RAM and the disk. The other option is to write from the cache to the disk only when a page is evicted, but that could result in data loss.
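The level formula from the inode cards above can be sanity-checked numerically. The 4 KiB block size and 4-byte block address below are assumed example values, not from the notes; they give the branching factor from the earlier card (block size / address size).

```python
import math

def tree_levels(file_size, block_size=4096, addr_size=4):
    """Levels of indirection needed to index a file:
    ceil(log_bf(file_size / block_size)), with bf = block_size / addr_size."""
    branching = block_size // addr_size          # addresses per block (bf)
    blocks = math.ceil(file_size / block_size)   # data blocks to index
    return max(1, math.ceil(math.log(blocks, branching)))

# bf = 4096 / 4 = 1024; a 1 GiB file spans 2**18 blocks,
# and 1024**2 = 2**20 >= 2**18, so two levels suffice.
assert tree_levels(2**30) == 2
```

Tiny files still need one level of pointers, hence the `max(1, ...)` floor on the formula.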