Explain the concept of direct-mapped cache memory with the help of a diagram/example. How is a direct-mapped cache different from a 2-way set associative cache memory?
Cache Memory - Direct Mapped Cache
If each block from main memory has only one place it can appear in the cache, the cache is said to be Direct Mapped. In order to determine to which cache line a main memory block is mapped, we can use the formula shown below:
Cache Line Number = (Main memory Block number) MOD (Number of Cache lines)
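The formula above can be sketched in a few lines of code. This is a minimal illustration in Python (the function name is my own), using the 16K-line cache from the worked example that follows:

```python
NUM_CACHE_LINES = 2 ** 14  # 16K cache lines, as in the worked example

def cache_line_number(block_number: int) -> int:
    # Cache Line Number = (Main memory Block number) MOD (Number of Cache lines)
    return block_number % NUM_CACHE_LINES

# Blocks whose numbers differ by a multiple of the line count map to the same line:
print(cache_line_number(5))                    # -> 5
print(cache_line_number(5 + NUM_CACHE_LINES))  # -> 5 (would evict the earlier block)
```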
Let us assume we have a Main Memory of size 4GB (2^32 bytes), with each byte directly addressable by a 32-bit address. We will divide Main memory into blocks of 32 bytes (2^5) each. Thus there are 128M (i.e. 2^32/2^5 = 2^27) blocks in Main memory. We have a Cache memory of 512KB (i.e. 2^19 bytes), divided into blocks of 32 bytes (2^5) each. Thus there are 16K (i.e. 2^19/2^5 = 2^14) blocks, also known as Cache slots or Cache lines, in cache memory. It is clear from the above numbers that there are more Main memory blocks than Cache slots.
NOTE: The Main memory is not physically partitioned in the given way, but this is the view of Main memory that the cache sees. NOTE: We are dividing both Main Memory and cache memory into blocks of the same size, i.e. 32 bytes.
A set of 8K (i.e. 2^27/2^14 = 2^13) Main memory blocks is mapped onto a single Cache slot. In order to keep track of which of the 2^13 possible Main memory blocks is in each Cache slot, a 13-bit tag field is added to each Cache slot, holding an identifier in the range from 0 to 2^13 - 1. All the tags are stored in a special tag memory where they can be searched in parallel. Whenever a new block is stored in the cache, its tag is stored in the corresponding tag memory location.
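Because the slot number is just the block number modulo 2^14, the slot and tag can be recovered with a mask and a shift. A hedged sketch in Python (the function name is invented for illustration):

```python
SLOT_BITS = 14  # 2^14 cache slots
TAG_BITS = 13   # 2^13 main memory blocks compete for each slot

def slot_and_tag(block_number: int) -> tuple[int, int]:
    slot = block_number & ((1 << SLOT_BITS) - 1)  # block number MOD 2^14
    tag = block_number >> SLOT_BITS               # identifier in 0 .. 2^13 - 1
    return slot, tag

# Blocks 0, 2^14 and 2*2^14 all compete for slot 0, distinguished only by tag:
for b in (0, 1 << SLOT_BITS, 2 << SLOT_BITS):
    print(slot_and_tag(b))  # -> (0, 0), (0, 1), (0, 2)
```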
When a program is first loaded into Main memory, the Cache is cleared, and so while a program is executing, a valid bit is needed to indicate whether or not the slot holds a block that belongs to the program being executed. There is also a dirty bit that keeps track of whether or not a block has been modified while it is in the cache. A slot that is modified must be written back to main memory before the slot is reused for another block. When a program is initially loaded into memory, the valid bits are all set to 0. The first instruction that is executed in the program will therefore cause a miss, since none of the program is in the cache at this point. The block that causes the miss is located in main memory and is loaded into the cache.
This scheme is called "direct mapping" because each cache slot corresponds to an explicit set of main memory blocks. For a direct mapped cache, each main memory block can be mapped to only one slot, but each slot can receive more than one block. The mapping from main memory blocks to cache slots is performed by partitioning a main memory address into fields for the tag, the slot, and the word as shown below:
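The valid-bit and dirty-bit bookkeeping described above can be modelled directly. The following Python sketch is illustrative only, not a hardware description (the class and method names are invented):

```python
class DirectMappedCache:
    def __init__(self, num_slots: int = 2 ** 14):
        self.num_slots = num_slots
        self.valid = [False] * num_slots  # all 0 when the program is loaded
        self.dirty = [False] * num_slots
        self.tags = [0] * num_slots

    def access(self, block_number: int, write: bool = False) -> bool:
        """Return True on a hit, False on a miss."""
        slot = block_number % self.num_slots
        tag = block_number // self.num_slots
        hit = self.valid[slot] and self.tags[slot] == tag
        if not hit:
            if self.valid[slot] and self.dirty[slot]:
                pass  # the modified old block would be written back here
            self.valid[slot] = True   # load the missing block from main memory
            self.dirty[slot] = False
            self.tags[slot] = tag
        if write:
            self.dirty[slot] = True   # block modified while in the cache
        return hit

cache = DirectMappedCache()
print(cache.access(5))            # -> False: compulsory miss, cache starts cold
print(cache.access(5))            # -> True: block is now resident
print(cache.access(5 + 2 ** 14))  # -> False: same slot, different tag, evicts block 5
```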
The 32-bit main memory address is partitioned into a 13-bit tag field, followed by a 14-bit slot field, followed by a 5-bit word field. When a reference is made to a main memory address, the slot field identifies in which of the 2^14 cache slots the block will be found if it is in the cache.

Set Associative mapping combines the simplicity of Direct mapping with the flexibility of Fully Associative mapping. It is more practical than Fully Associative mapping because the associative portion is limited to just a few slots that make up a set. In this mapping mechanism, the cache memory is divided into 'v' sets, each consisting of 'n' cache lines. A block from Main memory is first mapped onto a specific cache set, and then it can be placed anywhere within that set. This mapping offers a good trade-off between implementation cost and efficiency. The set is usually chosen by:
Cache set number = (Main memory block number) MOD (Number of sets in the cache memory)
If there are 'n' cache lines in a set, the cache placement is called n-way set associative, i.e. if there are two blocks or cache lines per set, it is a 2-way set associative cache mapping; if there are four blocks or cache lines per set, it is a 4-way set associative cache mapping.
Let us assume we have a Main Memory of size 4GB (2^32 bytes), with each byte directly addressable by a 32-bit address. We will divide Main memory into blocks of 32 bytes (2^5) each. Thus there are 128M (i.e. 2^32/2^5 = 2^27) blocks in Main memory. We have a Cache memory of 512KB (i.e. 2^19 bytes), divided into blocks of 32 bytes (2^5) each. Thus there are 16K (i.e. 2^19/2^5 = 2^14) blocks, also known as Cache slots or Cache lines, in cache memory. It is clear from the above numbers that there are more Main memory blocks than Cache slots.
Cache Size = (Number of Sets) * (Number of cache lines per set) * (Cache line size)
Using the above formula, we can find the number of sets in the Cache memory:
2^19 = (Number of Sets) * 2 * 2^5
Number of Sets = 2^19 / (2 * 2^5) = 2^13.
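The arithmetic above can be checked directly in a few lines of Python:

```python
cache_size = 2 ** 19  # 512 KB
ways = 2              # 2-way set associative, 2 cache lines per set
line_size = 2 ** 5    # 32 bytes

num_sets = cache_size // (ways * line_size)
print(num_sets == 2 ** 13)  # -> True: 8K sets

# With 13 bits selecting the set and 5 bits selecting the word,
# the remaining 32 - 13 - 5 = 14 bits of the address form the tag.
print(32 - 13 - 5)  # -> 14
```

Note how associativity shifts one bit from the index field to the tag: the direct-mapped cache used a 14-bit slot field and 13-bit tag, while the 2-way cache uses a 13-bit set field and 14-bit tag.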
Reviewed by enakta13 on September 15, 2012