subscriber path; Optimal Cache Placement based on Content Popularity (OCPCP) [18], where the popularity of incoming content is calculated on the basis of cached contents and the new content is cached according to its popularity value; Network Coding based Cache Management (NCCM) [19], which jointly considers content routing and the caching strategy through Linear Network Coding (LNC); WAVE [20], which caches contents based on their access count; Most Popular Caching (MPC) [21], which caches contents based on their popularity values; and Dynamic Fine-Grained Popularity-based Caching (D-FGPC) [22], a modification of MPC in which, unlike in MPC, the threshold value for content caching is changed dynamically based on content popularity values. Moreover, experimental assessments of cache management in ICN are presented in [23]. Other research on caching has been proposed in [24–31].
All of the mentioned strategies investigated only one aspect of caching (i.e., either content placement or replacement); none of them covers both aspects. Therefore, a flexible caching strategy is needed that places contents at the best possible position and, on the arrival of new content, replaces one of the cached contents. In practice, contents are cached in random access memory (RAM), while the available static random access memory (SRAM) or dynamic random access memory (DRAM) is limited in size. DRAM, which is a volatile memory and needs to be refreshed regularly [32], is currently available at a maximum of 10 GB [33, 34]. In other words, cache size is the biggest constraint in ICN caching. However, in most of the existing strategies, the maximum cache space is occupied. In addition, if the memory of a router becomes full and a new content arrives, none of the existing strategies has a dedicated policy for this case; each simply replaces one of the cached contents with the newly arrived content using a replacement policy, i.e., either Least Recently Used (LRU) or Least Frequently Used (LFU). As LRU and LFU replace contents based on access time, the replaced content may be very popular, and subsequent requests for that content will then be forwarded to the server. In this way, extra content retrieval delay may occur and additional bandwidth is consumed.
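To make the problem concrete, the following minimal Python sketch (our own illustration, not taken from the paper; the class and content names are hypothetical) shows how a recency-only LRU cache evicts a heavily requested content once a few newer, one-off contents push it out of the recency order, so the next request for it must travel back to the server.

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache: evicts the least recently accessed item on overflow."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()          # key -> content, ordered by recency

    def get(self, key):
        if key not in self.store:
            return None                     # cache miss: request goes to the server
        self.store.move_to_end(key)         # mark as most recently used
        return self.store[key]

    def put(self, key, content):
        if key in self.store:
            self.store.move_to_end(key)
        self.store[key] = content
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict least recently used; popularity ignored

# A content requested many times ("popular") is still evicted once newer,
# possibly one-off contents arrive after its last access.
cache = LRUCache(capacity=2)
cache.put("popular", "data-A")
for _ in range(100):
    cache.get("popular")                    # heavily requested in the past
cache.put("one-off-1", "data-B")
cache.put("one-off-2", "data-C")            # overflow: "popular" is the LRU victim
assert cache.get("popular") is None         # next request is forwarded to the server
```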
To overcome these problems, we propose in this paper a new caching strategy, Cache Popular Content Everywhere (CPCE), which caches popular contents on all network nodes on the publisher-subscriber path. The CPCE strategy is explained in the following section.
3. PROPOSED STRATEGY

Figure 1. An example topology

The proposed CPCE caches contents at all on-path routers available all the way from publisher to subscriber, as in CEE. However, the difference between our proposed caching strategy (CPCE) and CEE is that CEE caches every incoming content, while CPCE caches a content only once its popularity reaches a specified threshold value. In other words, a content is cached when its popularity reaches the threshold value in the Request Table (RT), a table which locally calculates popularity values based on content requests; that is, if C_p ≥ V_t, where C_p is the content popularity and V_t is the threshold value. V_t is kept at 10 in CPCE to avoid flooding, as all incoming contents start with a C_p value of 1. Therefore, a content in CPCE is only considered popular when its C_p reaches 10.
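As a rough illustration of this admission rule, the sketch below treats the Request Table as a per-router counter of requests per content name; the class and method names and the dictionary-based bookkeeping are our own assumptions, not the paper's implementation. A content is admitted to the cache only once its counter C_p reaches the threshold V_t = 10.

```python
from collections import defaultdict

V_T = 10  # threshold value; the paper keeps V_t = 10 in CPCE

class RequestTable:
    """Hypothetical sketch of the Request Table (RT): counts requests per
    content name so a router can compute a local popularity value C_p."""
    def __init__(self):
        self.c_p = defaultdict(int)

    def record_request(self, name):
        self.c_p[name] += 1          # first request for a content gives it C_p = 1
        return self.c_p[name]

    def is_popular(self, name):
        # CPCE admission rule: cache the content only if C_p >= V_t
        return self.c_p[name] >= V_T

rt = RequestTable()
for _ in range(9):
    rt.record_request("/videos/clip42")
print(rt.is_popular("/videos/clip42"))   # False: C_p = 9 < V_t, not cached yet
rt.record_request("/videos/clip42")
print(rt.is_popular("/videos/clip42"))   # True: C_p = 10, cache at on-path routers
```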
Furthermore, if the caches of all on-path routers overflow and a new content arrives, CPCE uses the Least Recently Used (LRU) policy to replace one of the cached contents at each router, except at the router with the maximum number of outgoing interfaces, denoted by R_max, in order to accommodate the newly arrived content. The reason to leave the contents at R_max untouched is to avoid excessive bandwidth utilization, as the majority of content requests pass through that router. However, the question arises of how long R_max can keep its contents, as its cache may also overflow. Therefore, when the cache of R_max overflows on the arrival of a new content, the LRU policy is used to evict the cached content accessed least recently. The evicted content is cached at the underlying router, denoted by UR_1 (R2 in Figure 1), placed immediately below R_max. Furthermore, if the cache of UR_1 is also full, the Random policy is applied to accommodate the content coming from R_max. The purpose of using the Random policy is to avoid the search overhead of content replacement, since other replacement policies, e.g., LRU, take some time to find contents based on their access time. The content replaced from UR_1 is moved to the next downstream caching router, i.e., UR_2, and the same procedure is followed until the content reaches the router placed nearest the subscriber, i.e., UR_n.
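The eviction cascade described above can be sketched as follows. The Router class, its methods, and the capacities are hypothetical simplifications (recency is approximated by insertion order and the RT is not modelled), but the flow mirrors the text: LRU at R_max, Random at the underlying routers, and each victim pushed one hop further toward the subscriber.

```python
import random

class Router:
    """Hypothetical router cache used to sketch the CPCE overflow cascade."""
    def __init__(self, name, capacity):
        self.name = name
        self.capacity = capacity
        self.contents = []                     # index 0 = oldest entry

    def insert(self, item, policy):
        """Insert item; if the cache overflows, evict and return the victim."""
        self.contents.append(item)
        if len(self.contents) <= self.capacity:
            return None
        if policy == "LRU":
            # evict the oldest entry (stands in for least recently used here)
            return self.contents.pop(0)
        # Random policy: evict any existing entry, keeping the incoming content
        return self.contents.pop(random.randrange(len(self.contents) - 1))

def cascade_insert(r_max, underlying_routers, new_item):
    """LRU eviction at R_max; its victim goes to UR_1, whose Random-policy
    victim goes to UR_2, and so on until UR_n near the subscriber."""
    victim = r_max.insert(new_item, policy="LRU")
    for ur in underlying_routers:              # UR_1, UR_2, ..., UR_n
        if victim is None:
            break
        victim = ur.insert(victim, policy="Random")
    # any content evicted from UR_n simply leaves the on-path caches

r_max = Router("R_max", capacity=2)
urs = [Router(f"UR_{i}", capacity=2) for i in range(1, 4)]
for item in ["c1", "c2", "c3", "c4"]:
    cascade_insert(r_max, urs, item)
print(r_max.contents, [ur.contents for ur in urs])
```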
As LRU does not take content popularity into account and it is deployed at R_max, it may also evict the most popular content. Now, if a new request for the evicted popular content arrives from off-path, it may also pass through R_max; however, if a hit does not occur at R_max, it can be found from the RT that the requested content is available at UR_i. Hence, even though CPCE is designed for on-path caching, off-path nodes can also benefit from its versatility.
