Concurrent implementation of caches
Caches are typically used in practice to reduce the response time of accessing remote data. Whenever a cache is used, local copies of the slow remote data are created; subsequent accesses then only need to look up the local copy. A typical application of caches is domain name resolution. The user usually knows only the name of a server, while the Internet protocol identifies hosts by IP address, so the user has to ask for the IP address of the desired server. The answer mostly comes from a local cache that stores data fetched from a remote server. Because the cache contains only a copy, it can differ from the original data. Time-limited caches resolve this problem with a time-to-live (TTL) value attached to each original data record: when an element's TTL has expired, the element has to be updated or removed from the cache. Traditional solutions consist of a data structure (e.g. a hash table or search tree) and a read/write lock for mutual exclusion in the concurrent setting. Several threads can read the cache in parallel, but only one thread at a time can write into it; accordingly, while a thread writes into the cache, no other thread can read or write. Because the records of a time-limited cache carry a TTL value, the expired elements have to be cleared or updated periodically; these actions are performed by a dedicated thread. In this paper the authors present caches improved with several locks and additional structures that support the cleaning action. Several possible concurrent implementations are described and compared by their theoretical and experimental running times; the following sections describe solutions based on chained hash tables. Some parts of our improvements are reusable with other data structures as well.
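The traditional design described above (one shared structure guarded by a single read/write lock, with a dedicated thread clearing expired records) can be sketched roughly as follows. This is a minimal illustrative Java sketch, not the paper's implementation; the class and method names (`TtlCache`, `removeExpired`, etc.) are assumptions chosen for clarity.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Minimal sketch of a time-limited (TTL) cache guarded by one read/write lock.
public class TtlCache<K, V> {
    private static final class Entry<V> {
        final V value;
        final long expiresAt; // absolute expiry time in milliseconds
        Entry(V value, long expiresAt) { this.value = value; this.expiresAt = expiresAt; }
    }

    private final Map<K, Entry<V>> map = new HashMap<>();
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

    // Several threads may hold the read lock and read in parallel.
    public V get(K key) {
        lock.readLock().lock();
        try {
            Entry<V> e = map.get(key);
            if (e == null || e.expiresAt <= System.currentTimeMillis()) return null;
            return e.value;
        } finally {
            lock.readLock().unlock();
        }
    }

    // A writer takes the exclusive write lock: no readers or other writers meanwhile.
    public void put(K key, V value, long ttlMillis) {
        lock.writeLock().lock();
        try {
            map.put(key, new Entry<>(value, System.currentTimeMillis() + ttlMillis));
        } finally {
            lock.writeLock().unlock();
        }
    }

    // Periodic cleanup, performed by a dedicated thread in the traditional design.
    // It also needs the exclusive write lock, blocking all readers while it runs.
    public void removeExpired() {
        lock.writeLock().lock();
        try {
            long now = System.currentTimeMillis();
            map.values().removeIf(e -> e.expiresAt <= now);
        } finally {
            lock.writeLock().unlock();
        }
    }
}
```

The single write lock is exactly the bottleneck the paper targets: every cleanup pass stalls all readers, which motivates the finer-grained locking and auxiliary structures discussed in the following sections.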