dynamic cache management technique
#1

The memory hierarchy of high-performance and embedded processors has been shown to be one of the major energy consumers. Extrapolating current trends, this share is likely to grow in the near future. In this paper, a technique is proposed which uses an additional mini cache, called the L0-cache, located between the I-cache and the CPU core. This mechanism can provide the instruction stream to the data path and, when managed properly, can efficiently eliminate the need for high utilization of the more expensive I-cache.
Cache memories are accounting for an increasing fraction of a chip's transistors and overall energy dissipation. Current proposals for resizable caches fundamentally vary in two design aspects: (1) cache organization, where one organization, referred to as selective-ways, varies the cache's set-associativity, while the other, referred to as selective-sets, varies the number of cache sets, and (2) resizing strategy, where one proposal statically sets the cache size prior to an application's execution, while the other allows for dynamic resizing both across and within applications.
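To make the two organizations concrete, here is a small Python sketch of how each one changes the address decomposition of a physically fixed cache array. The sizes, function names, and parameters are my own illustrative assumptions, not taken from the paper.

```python
# Illustrative sketch (not from the paper): how the two resizable-cache
# organizations change address decomposition for a physically fixed array.
# All names and parameters here are hypothetical.

LINE_SIZE = 32          # bytes per cache line
FULL_SETS = 256         # sets available in the physical array
FULL_WAYS = 4           # ways available in the physical array

def decompose(addr, num_sets):
    """Split an address into (tag, set index, line offset) for num_sets sets."""
    line = addr // LINE_SIZE
    return line // num_sets, line % num_sets, addr % LINE_SIZE

# Selective-ways: keep all FULL_SETS sets but enable only `ways` of the
# FULL_WAYS ways, so the index/tag split never changes.
def selective_ways_lookup(addr, ways):
    tag, index, _ = decompose(addr, FULL_SETS)
    return tag, index, ways            # search only `ways` ways in this set

# Selective-sets: keep the associativity but enable only `sets` of the
# FULL_SETS sets, so one index bit moves into the tag each time we halve.
def selective_sets_lookup(addr, sets):
    tag, index, _ = decompose(addr, sets)
    return tag, index, FULL_WAYS       # search all ways, in a smaller set array

if __name__ == "__main__":
    addr = 0x12A40
    print(selective_ways_lookup(addr, ways=2))   # same index as at full size
    print(selective_sets_lookup(addr, sets=128)) # fewer index bits, longer tag
```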
Five techniques are proposed and evaluated for dynamically analyzing the program's instruction access behavior and proactively guiding the L0-cache. The basic idea is that only the most frequently executed portion of the code should be stored in the L0-cache, since this is where the program spends most of its time.
Results of the experiments indicate that more than 60% of the dissipated energy in the I-cache subsystem can be saved.
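As a back-of-the-envelope illustration of where such a saving can come from, the following Python sketch models the average fetch energy with and without an L0-cache. The per-access energies and the L0 hit rate are assumed values for illustration only, not the paper's measurements.

```python
# Back-of-the-envelope sketch of why steering most fetches to a tiny L0-cache
# can cut I-cache subsystem energy. The per-access energies and the hit rate
# below are illustrative assumptions, not figures from the paper.

E_L0 = 0.1           # relative energy of one L0-cache access (small array)
E_I  = 1.0           # relative energy of one I-cache access (large array)
l0_hit_rate = 0.80   # fraction of fetches satisfied by the L0-cache

def energy_per_fetch(hit_rate):
    # A fetch that misses in the L0-cache still pays the L0 probe
    # before falling back to the I-cache.
    return hit_rate * E_L0 + (1 - hit_rate) * (E_L0 + E_I)

baseline = E_I                      # every fetch goes to the I-cache
with_l0 = energy_per_fetch(l0_hit_rate)
print(f"energy per fetch: {with_l0:.2f} vs baseline {baseline:.2f}")
print(f"saving: {100 * (1 - with_l0 / baseline):.0f}%")
```

With these assumed numbers the saving works out to about 70%, in the same ballpark as the 60% figure quoted above.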

#2
A dynamic cache management technique to reduce energy consumption in a high-performance processor.
Processors exploit instruction-level parallelism to supply instructions and data to the data path so that the execution rate is kept as high as possible. The high energy requirement of the on-chip I-cache entails a high energy demand per instruction fetch. To reduce the amount of energy dissipated per instruction access, an additional L0-cache is designed. It acts as the primary cache of the processor, storing the most frequently accessed instructions. This cache exploits properties such as the temporal locality of code, and its management decisions can be taken on the fly while the program is executing.

Dynamic management scheme.
The most frequently accessed instructions are kept in the L0-cache. Energy reduction is targeted without modification of the existing hardware. A reliable solution to this problem is provided by the branch prediction scheme together with the confidence estimator mechanism: future branch behaviour is estimated on the basis of the branches previously taken, and the steady-state behaviour of a branch is further predicted by the confidence estimator. A rough sketch of this idea is given below.
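Here is a minimal Python sketch, assuming a simple saturating-counter confidence estimator attached to the branch predictor. The counter width, threshold, and decision rule are illustrative assumptions, not the exact scheme from the paper.

```python
# Minimal sketch: a saturating-counter confidence estimator paired with the
# branch predictor. High-confidence, steady-state branches are treated as
# markers of frequently executed code, and fetches for those regions are
# steered to the L0-cache. Everything here is an illustrative assumption,
# not the paper's exact algorithm.

CONF_MAX = 3          # 2-bit saturating confidence counter
CONF_THRESHOLD = 3    # only fully saturated counters count as "high confidence"

class ConfidenceEstimator:
    def __init__(self):
        self.counters = {}      # branch PC -> saturating counter

    def update(self, pc, prediction_correct):
        c = self.counters.get(pc, 0)
        # Increment on a correct prediction, reset on a misprediction.
        self.counters[pc] = min(c + 1, CONF_MAX) if prediction_correct else 0

    def high_confidence(self, pc):
        return self.counters.get(pc, 0) >= CONF_THRESHOLD

def choose_fetch_source(estimator, branch_pc):
    # High-confidence branches are assumed to lead into hot, steady-state
    # code, which is what the L0-cache is meant to hold.
    return "L0-cache" if estimator.high_confidence(branch_pc) else "I-cache"

if __name__ == "__main__":
    est = ConfidenceEstimator()
    pc = 0x4001F0
    for correct in [True, True, True, False, True]:
        est.update(pc, correct)
        print(choose_fetch_source(est, pc))
```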

For further details, see:
http://cs.york.ac.uk/rts/docs/SIGDA-Comp...s/04_1.pdf
http://scribddoc/7148877/Dynamic-Cache-Management

#3
The memory hierarchy of high-performance and embedded processors is one of the major energy consumers. The article describes a new technique that introduces a mini cache, the L0-cache, located between the I-cache and the processor core. When managed properly, it can provide the instruction stream to the data path and reduce utilization of the more expensive I-cache.

Proposals for resizable cache designs are broadly classified into two types:
-Based on the cache organization:
Here the selective-ways organization varies the cache's set-associativity, while the selective-sets organization varies the number of cache sets.
-Based on the resizing strategy:
Here one proposal statically sets the cache size prior to an application's execution, while the other allows dynamic resizing both across and within applications (see the sketch below).
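As an illustration of dynamic resizing within an application, the following Python sketch samples the miss rate over fixed intervals and grows or shrinks the number of enabled sets. The thresholds and size bounds are assumed values, not figures from the proposals.

```python
# Illustrative sketch of a dynamic (within-application) resizing policy:
# sample the miss rate over fixed intervals and grow or shrink the enabled
# cache size between bounds. Thresholds and sizes are assumptions made for
# illustration, not values from the proposals discussed above.

MIN_SETS, MAX_SETS = 64, 512
GROW_THRESHOLD = 0.05      # miss rate above this -> enable more sets
SHRINK_THRESHOLD = 0.01    # miss rate below this -> disable some sets

def resize(current_sets, misses, accesses):
    miss_rate = misses / max(accesses, 1)
    if miss_rate > GROW_THRESHOLD and current_sets < MAX_SETS:
        return current_sets * 2      # selective-sets: re-enable sets
    if miss_rate < SHRINK_THRESHOLD and current_sets > MIN_SETS:
        return current_sets // 2     # selective-sets: gate off sets
    return current_sets

if __name__ == "__main__":
    sets = 256
    for misses, accesses in [(800, 10000), (40, 10000), (30, 10000)]:
        sets = resize(sets, misses, accesses)
        print(sets)   # 512, 256, 128
```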

The five techniques used for the dynamic management of the cache are listed below (a rough sketch of their common shape follows the list):
-Simple Method.

-Static Method.

-Dynamic Confidence Estimation Method.

-Restrictive Dynamic Confidence Estimation Method.

-Dynamic Distance Estimation Method.
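
Since the report does not spell these methods out here, the following Python sketch only shows the common shape such policies could take: a per-fetch decision about whether to use the L0-cache, driven either by a static (profile-based) choice or by a dynamic (run-time) one. The two toy policies are hypothetical illustrations, not the paper's actual definitions.

```python
# Hypothetical sketch of the interface such L0-management policies could
# share: each one decides, per fetched block, whether the instructions should
# come from the L0-cache or the I-cache. The two toy policies below only
# illustrate the static (profile-driven) vs. dynamic (run-time) distinction;
# they are not the paper's actual method definitions.

class StaticPolicy:
    """Profile-driven: hot basic blocks are chosen before execution."""
    def __init__(self, hot_blocks):
        self.hot_blocks = set(hot_blocks)       # from an offline profile

    def use_l0(self, block_addr):
        return block_addr in self.hot_blocks

class DynamicPolicy:
    """Run-time: blocks earn L0 placement after repeated execution."""
    def __init__(self, threshold=4):
        self.exec_counts = {}
        self.threshold = threshold

    def use_l0(self, block_addr):
        n = self.exec_counts.get(block_addr, 0) + 1
        self.exec_counts[block_addr] = n
        return n >= self.threshold

if __name__ == "__main__":
    static = StaticPolicy(hot_blocks=[0x1000, 0x1040])
    dynamic = DynamicPolicy(threshold=2)
    trace = [0x1000, 0x2000, 0x2000, 0x1040]
    for pc in trace:
        print(hex(pc), static.use_l0(pc), dynamic.use_l0(pc))
```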

Get the full report here:
http://scribddoc/7148877/Dynamic-Cache-Management

#4
This mechanism can provide the instruction stream to the data path, and when managed properly, it effectively eliminates the need for extensive use of the more expensive I-cache. The basic idea is that only the most frequently executed part of the code is stored in the L0-cache, because that is where the program spends most of its time. The experimental results show that over 60% of the energy dissipated in the I-cache subsystem can be saved. Caches account for an increasing proportion of a chip's transistors and of its total energy dissipation.

#5
Hi,
visit this thread for more details on the dynamic cache management techniques:
http://seminarsprojects.net/Thread-dynam...2#pid38862

#6
please visit the following thread for more details on 'Dynamic Cache Management Technique'

http://seminarsprojects.net/Thread-dynam...ique--2380

#7
Hi,
visit this thread for more details on this topic:
http://seminarsprojects.net/Thread-dynam...8#pid34988

#8
Sorry sarthi, we don't have a PPT on this topic yet. We'll upload one as soon as possible.

#9
Please attach some detailed report on the dynamic cache management technique.

#10
Please send me the full seminar report on the dynamic cache management technique.


