chapter 9 virtual memory (part 1)
Tags (space-separated): CS:APP
prologue
As we saw with processes in chapter 8, a process's behavior depends heavily on how it accesses memory. When many processes use the CPU and memory simultaneously, demand can exceed what physical memory provides, and the processes slow down.
Two drawbacks follow: memory is limited, and it can be silently corrupted by simultaneous accesses from different processes.
Virtual memory was proposed to solve these problems by:
- treating main memory as a cache for an address space stored on disk, keeping only the active areas in main memory and transferring data back and forth between disk and memory as needed
- simplifying memory management by giving each process a uniform address space
- protecting the address space of each process from corruption by other processes
A more concrete statement of the problems:
- not enough physical memory
- user address spaces can stomp on each other's memory (and allocation can leave holes between separate segments)
- no protection from corruption
This chapter is split into two parts:
The first part demonstrates how virtual memory works;
The second part demonstrates how virtual memory is used and managed by applications.
1. physical and virtual addresses
physical address : main memory in a computer system is organized as a linear array of M contiguous bytes, and each byte has a unique physical address. For both a human being and the CPU, this is the most natural way to reach a byte, like calling someone by name and having that person respond. This approach is called physical addressing.
As a recurring pattern, computer systems introduce new functionality by adding a new interface between two layers, much as computer networking does.
virtual addressing : Unlike physical addressing, the CPU generates a virtual address and hands it to a piece of hardware on the CPU chip called the memory management unit (MMU), which translates the virtual address into the corresponding physical address; that physical address is then sent to memory, just as in physical addressing.
2. address spaces
To describe physical and virtual addresses more concretely, we need the notion of an address space:
an address space is an ordered set of nonnegative integer addresses; virtual addresses form a linear address space.
the size of an address space is characterized by the number of bits needed to represent its largest address: an n-bit address space contains 2^n addresses.
[This point is introduced in the context of the address wires of the 8086 processor.]
the importance of the address-space concept is that it cleanly distinguishes a data object (the bytes) from its attributes (its addresses): the same object can have several independent addresses, one in each address space.
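The bits-to-size relationship above can be checked with a quick calculation (a plain worked example, not taken from the book's text):

```python
# An n-bit address space contains 2**n distinct addresses.
def address_space_size(n_bits):
    """Number of addresses in an n-bit address space."""
    return 2 ** n_bits

# A 32-bit virtual address space covers 4 GiB:
print(address_space_size(32))   # 4294967296 bytes = 4 GiB
# The 8086's 20 address wires give a 1 MiB physical address space:
print(address_space_size(20))   # 1048576 bytes = 1 MiB
```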
3. VM as a tool for caching
Caching means keeping copies of bytes from a lower, slower layer in an upper, faster layer.
The contents of the disk are cached in main memory: the data on disk is partitioned into fixed-size blocks that serve as the transfer units between disk and main memory.
So the virtual address space is likewise partitioned into fixed-size blocks called virtual pages (VPs), and the physical address space into physical pages (PPs).
At any point in time, the virtual pages are partitioned into three disjoint subsets:
- unallocated : pages that have not yet been allocated by the VM system; no data is associated with them, and they occupy no space on disk.
- uncached : allocated pages that are not currently cached in physical memory.
- cached : allocated pages that are currently cached in physical memory.
3.1 DRAM cache organization
Following the distinction between SRAM and DRAM, we use "SRAM cache" to mean the L1, L2, and L3 caches between the CPU and main memory, and "DRAM cache" to mean the VM system's cache of virtual pages in main memory.
Because the miss penalty is huge, and accessing the first byte of a disk sector is far more expensive than reading the successive bytes,
virtual pages are large and the DRAM cache is fully associative: any virtual page can be placed in any physical page.
3.1.2 page tables
The VM system has to determine whether a virtual page is cached in main memory (DRAM).
Nothing special happens if the page is in DRAM, but on a miss the system must determine where the virtual page is stored on disk, select a victim page in DRAM, and load the virtual page into the physical page the victim occupied.
this process is implemented by a combination of operating system software, address translation hardware in the MMU, and a data structure stored in physical memory called a page table.
- operating system software : maintains the contents of the page table and transfers pages back and forth between disk and DRAM.
- address translation hardware : the MMU reads the page table each time it converts a virtual address to a physical address.
- page table : an array of page table entries (PTEs), each consisting of a valid bit and an address field.
valid bit : set to 1 if the page is currently cached in main memory, 0 otherwise.
address field : if the valid bit is 1, it holds the start of the corresponding physical page in DRAM. If the valid bit is 0, a null address means the page is unallocated, while a non-null address points to the start of the virtual page on disk.
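The lookup through these PTEs can be sketched as a toy simulation; the 4 KiB page size, the class and field names, and raising an exception on a miss are illustrative assumptions, not how real MMU hardware works:

```python
# Toy model of address translation through a page table.
# Assumes 4 KiB pages (12 offset bits); names are illustrative.

PAGE_SIZE = 4096  # 2**12 bytes per page

class PTE:
    """One page-table entry: a valid bit plus a physical page number."""
    def __init__(self, valid=0, ppn=None):
        self.valid = valid   # 1 if the page is cached in DRAM
        self.ppn = ppn       # physical page number if valid, else None

def translate(page_table, vaddr):
    """Return the physical address for vaddr, or raise on a page fault."""
    vpn = vaddr // PAGE_SIZE      # virtual page number indexes the table
    offset = vaddr % PAGE_SIZE    # byte offset is copied through unchanged
    pte = page_table[vpn]
    if not pte.valid:
        raise RuntimeError("page fault: VP %d not in DRAM" % vpn)
    return pte.ppn * PAGE_SIZE + offset

# VP 0 is cached in PP 3; VP 1 is on disk only.
table = [PTE(valid=1, ppn=3), PTE(valid=0)]
print(hex(translate(table, 0x0123)))   # 0x3123 — hit in PP 3
```

Note how only the page number is translated; the offset within the page is the same in the virtual and physical address.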
3.1.3 page hit
a page hit occurs when the virtual page the CPU accesses is already cached in main memory.
3.1.4 page fault
As the previous chapter already illustrated, a page fault is classified as a fault, one of the sub-types of exceptions.
Handling one therefore requires a combination of the exception handler, the operating system, and the VM hardware. Note again that because the penalty of a page fault is huge (a disk access takes on the order of 100,000 times longer than a DRAM access), virtual pages are large.
The valid bit in the page table entry indicates whether the page is in DRAM. If not, the access triggers a page-fault exception handler in the kernel, which selects a victim page and copies the faulting virtual page from disk into the victim's physical page. The handler then updates the page table to reflect that the victim is no longer resident in main memory and that the faulting page now is; when the handler returns, the faulting instruction restarts and this time hits.
(this strategy of waiting until a miss occurs to copy a page in is known as demand paging)
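The fault-handling steps above can be sketched as a toy demand-paging simulation; the two-frame "DRAM" and the FIFO victim policy are simplifying assumptions (real kernels use approximations of LRU):

```python
# Toy demand-paging simulation: "DRAM" holds only NUM_FRAMES physical
# pages; a miss evicts a victim (FIFO here) and updates the page
# table's view of both the victim and the faulting page.
from collections import deque

NUM_FRAMES = 2       # physical pages available in "DRAM"

resident = {}        # vpn -> physical frame number (valid PTEs)
fifo = deque()       # eviction order

def access(vpn):
    """Touch virtual page vpn; return 'hit' or 'fault'."""
    if vpn in resident:
        return "hit"
    # Page fault: pick a victim if DRAM is full.
    if len(resident) == NUM_FRAMES:
        victim = fifo.popleft()
        frame = resident.pop(victim)   # victim's PTE now points to disk
    else:
        frame = len(resident)
    resident[vpn] = frame              # faulting page's PTE is now valid
    fifo.append(vpn)
    return "fault"

results = [access(v) for v in [0, 1, 0, 2, 1]]
print(results)   # ['fault', 'fault', 'hit', 'fault', 'hit']
```

Touching page 2 evicts page 0 (the oldest resident), so a later access to page 0 would fault again while page 1 still hits.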
3.1.5 allocating pages
When the operating system allocates a new virtual page (for example, as a result of malloc), it works by:
- creating room for the page on disk
- updating the corresponding page table entry to point to it
3.1.6 locality to the rescue again
after the initial overhead of paging in the working set (or active set), subsequent accesses hit in main memory, sparing us the huge expense of going to disk on every instruction.
the better the temporal locality is, the better VM performs; if the working set exceeds the size of physical memory, the program thrashes, paging continuously.
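The working-set effect can be illustrated with a toy LRU cache; the four-frame "DRAM" and the two access traces are made-up assumptions chosen to contrast a working set that fits against one that thrashes:

```python
# Sketch: why locality matters. An LRU "DRAM" with 4 frames serves a
# workload whose working set fits (pages 0-3) vs. one that thrashes
# (pages 0-5 round-robin). Frame count and traces are illustrative.
from collections import OrderedDict

def run(trace, frames=4):
    """Replay a trace of virtual page numbers; return the fault count."""
    cache, faults = OrderedDict(), 0
    for vpn in trace:
        if vpn in cache:
            cache.move_to_end(vpn)         # refresh LRU order on a hit
        else:
            faults += 1
            if len(cache) == frames:
                cache.popitem(last=False)  # evict least-recently used
            cache[vpn] = True
    return faults

good = [0, 1, 2, 3] * 25        # working set fits: only 4 cold faults
bad  = [0, 1, 2, 3, 4, 5] * 25  # working set too big: every access faults
print(run(good), run(bad))      # 4 150
```

Both traces make 100 and 150 accesses respectively, but the second one faults on every single access: the classic signature of thrashing.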
