Project Proposal Format: The proposal should be in pdf format. It should be no longer than two pages in a single-column single-spaced 10pt Times font. Margins are one inch each (top, bottom, left and right).
Submit the following on Canvas:
For each of the projects described below, students will be graded based on the tasks that have been accomplished in the project. Each project will have two goals, and the grade will be based only on finished goals. If only one goal is accomplished, the maximum project score will be 50%. If both goals are accomplished, the maximum project score will be 100%. Note that these maximum levels do not mean that students will always achieve the maximum. The actual student score will be based on other aspects: Project proposal, progress report, presentation and final report/code.
Implement a branch predictor in gem5 that can predict two, three, or four conditional branches every cycle. Choose a technique from the literature designed to predict multiple branches per cycle. Compare its performance and prediction accuracy to those of the bimodal predictor implemented in gem5. Some references to consider:
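As a starting point, the baseline you will compare against is a table of 2-bit saturating counters indexed by the branch PC. The sketch below is a minimal, simulator-independent illustration of that bimodal scheme (class name, table size, and index hash are illustrative choices, not gem5's actual implementation):

```python
class BimodalPredictor:
    """Toy bimodal predictor: 2-bit saturating counters indexed by
    low-order PC bits. Counter >= 2 means predict taken."""

    def __init__(self, entries=4096):
        self.entries = entries
        self.counters = [2] * entries  # initialize weakly taken

    def _index(self, pc):
        return (pc >> 2) % self.entries  # drop instruction-offset bits

    def predict(self, pc):
        return self.counters[self._index(pc)] >= 2  # True = taken

    def update(self, pc, taken):
        i = self._index(pc)
        if taken:
            self.counters[i] = min(3, self.counters[i] + 1)
        else:
            self.counters[i] = max(0, self.counters[i] - 1)
```

A multi-branch predictor must produce several such predictions in one cycle, which is where the techniques from the literature (e.g., fetching multiple counters per access) come in.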
Loads that result in cache misses significantly reduce performance. A mechanism that has been proposed to reduce their impact is to predict the values of load instructions before they execute, and forward these values to dependent instructions. When successful, load value prediction can improve performance by allowing instructions dependent on load misses to execute early. However, incorrectly predicted loads require flushing the pipeline which incurs a significant penalty (similar to branch mispredictions or memory dependence mispredictions). In this project, you need to implement a Markov Load Value Predictor in gem5 that uses the history of previously executed loads to predict future loads. You need to model the benefits of a correctly-predicted value and the cost of an incorrectly-predicted value. You need to compare this value predictor with a baseline without value prediction. For more information about Markov predictors, consider this paper:
Y. Sazeides and J.E. Smith, “The Predictability of Data Values,” MICRO 1997. Paper: https://ieeexplore.ieee.org/abstract/document/645815 Talk: https://ftp1.cs.wisc.edu/sohi/talks/1997/micro.predictability.pdf
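In the spirit of the context-based predictors in Sazeides and Smith, a Markov value predictor keeps a short per-load history of recent values and uses a hash of that history (the context) to look up a predicted next value, guarded by a confidence counter so that only confident predictions are forwarded. The sketch below is illustrative only; the history order, hash, table organization, and confidence threshold are assumptions, not the paper's or gem5's exact design:

```python
class MarkovValuePredictor:
    """Toy context-based (Markov) load value predictor: the last
    `order` values observed by a load PC form a context; a table maps
    each context to a predicted next value plus a 2-bit confidence."""

    def __init__(self, order=2, table_bits=10):
        self.order = order
        self.mask = (1 << table_bits) - 1
        self.history = {}   # load PC -> list of last `order` values
        self.vpt = {}       # context hash -> (predicted value, confidence)

    def _context(self, pc):
        h = pc
        for v in self.history.get(pc, []):
            h = (h * 31 + v) & 0xFFFFFFFF
        return h & self.mask

    def predict(self, pc):
        """Return (value, confident); forward only if confident."""
        entry = self.vpt.get(self._context(pc))
        if entry is None:
            return None, False
        value, conf = entry
        return value, conf >= 2

    def update(self, pc, actual):
        ctx = self._context(pc)
        value, conf = self.vpt.get(ctx, (actual, 0))
        if value == actual:
            self.vpt[ctx] = (value, min(3, conf + 1))
        else:
            self.vpt[ctx] = (actual, 0)  # mispredicted: retrain
        hist = self.history.setdefault(pc, [])
        hist.append(actual)
        if len(hist) > self.order:
            hist.pop(0)
```

In gem5 you would additionally model the pipeline flush on a confident misprediction, which this sketch does not attempt.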
Effective cache replacement policies can significantly reduce cache miss rates and improve performance. Recent research showed that it is beneficial to bypass the insertion of some blocks in the cache if they are not predicted to be reused. An example of such research is the winner of the cache replacement championship (http://www.jilp.org/jwac-1/online/papers/005_gao.pdf). In this project, you should implement a cache replacement policy with bypass in gem5, and compare its performance to the default processor and cache replacement policy already implemented in gem5.
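To make the bypass idea concrete, here is a toy single-set cache that combines LRU replacement with PC-based bypass: a saturating counter per fill PC is trained down when that PC's blocks die without reuse, and blocks from a "no-reuse" PC are not inserted at all. This is a generic illustration of bypassing, not the championship-winning policy itself:

```python
class BypassingCache:
    """Toy single-set cache: LRU replacement plus PC-based bypass.
    Each line records [tag, fill_pc, reused]; lines are kept in LRU
    order (index 0 = LRU). A per-PC saturating counter predicts reuse."""

    def __init__(self, ways=4):
        self.ways = ways
        self.lines = []       # [tag, fill_pc, reused], LRU -> MRU
        self.reuse_ctr = {}   # fill PC -> 2-bit saturating counter

    def access(self, tag, pc):
        """Return True on hit, False on miss (inserted or bypassed)."""
        for line in self.lines:
            if line[0] == tag:
                line[2] = True                     # reused: move to MRU
                self.lines.remove(line)
                self.lines.append(line)
                c = self.reuse_ctr.get(line[1], 1)
                self.reuse_ctr[line[1]] = min(3, c + 1)
                return True
        # Miss: bypass insertion if this PC's blocks are rarely reused.
        if self.reuse_ctr.get(pc, 1) == 0:
            return False
        if len(self.lines) == self.ways:
            victim = self.lines.pop(0)             # evict LRU
            if not victim[2]:                      # died without reuse
                c = self.reuse_ctr.get(victim[1], 1)
                self.reuse_ctr[victim[1]] = max(0, c - 1)
        self.lines.append([tag, pc, False])
        return False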
Effective cache replacement and dead block prediction mechanisms can greatly reduce cache misses and improve performance. A recent paper published in HPCA 2022 presented a mechanism (Mockingjay) that mimics Belady’s optimal replacement policy to approach optimal performance. In this project, you should implement Mockingjay in gem5 and compare its performance to existing replacement policies already implemented in gem5.
Reference: Ishan Shah, Akanksha Jain and Calvin Lin, Effective Mimicry of Belady’s MIN Policy, HPCA 2022. Link: https://www.cs.utexas.edu/~lin/papers/hpca22.pdf
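For evaluation, it helps to know the oracle that Mockingjay approximates. Belady's MIN evicts the resident line whose next use is farthest in the future; it needs the full future trace, so it is computable offline as an upper bound. A small sketch (interface and trace format are illustrative):

```python
def belady_min_misses(trace, capacity):
    """Count misses under Belady's MIN on an address trace: on a miss
    with a full cache, evict the address whose next use is farthest in
    the future (or never used again)."""
    # Precompute, for each position, when that address is used next.
    next_use = [float('inf')] * len(trace)
    last_seen = {}
    for i in range(len(trace) - 1, -1, -1):
        next_use[i] = last_seen.get(trace[i], float('inf'))
        last_seen[trace[i]] = i

    cache = {}   # address -> position of its next use
    misses = 0
    for i, addr in enumerate(trace):
        if addr in cache:
            cache[addr] = next_use[i]          # hit: refresh next-use time
            continue
        misses += 1
        if len(cache) == capacity:
            victim = max(cache, key=cache.get)  # farthest next use
            del cache[victim]
        cache[addr] = next_use[i]
    return misses
```

Comparing your Mockingjay miss counts against this offline bound (and against gem5's existing policies) shows how much of the remaining gap the predictor closes.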
Cache prefetching mechanisms can greatly reduce compulsory and capacity misses, and therefore improve performance. However, aggressive prefetching can replace useful blocks in the cache, which can be counter-productive. An accurate prefetcher can improve performance while avoiding extra misses. In this project, you should implement the Signature Path Prefetcher (SPP) in gem5, and compare its performance to existing prefetchers already implemented in gem5, and to no prefetching.
Reference: J. Kim, S. Pugsley, P. V. Gratz, A. Reddy, C. Wilkerson, Z. Chishti, Path Confidence based Lookahead Prefetching, MICRO 2016. Link: https://ieeexplore.ieee.org/document/7783763
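The core of SPP is a compact per-page signature that compresses the recent history of block-offset deltas (roughly sig = (sig << 3) XOR delta), plus a pattern table that learns which delta tends to follow each signature; because signatures are shared across pages, a pattern learned on one page predicts accesses on another. The sketch below shows only this training/prediction core; lookahead depth, path confidence, and the real table sizing are omitted, and all constants are illustrative:

```python
SIG_BITS = 12  # illustrative signature width

class SPPSketch:
    """Toy sketch of SPP's signature mechanism: per-page delta history
    compressed into a signature, with a pattern table mapping each
    signature to observed next-delta counts."""

    def __init__(self):
        self.pages = {}     # page -> (last block offset, signature)
        self.pattern = {}   # signature -> {delta: count}

    def access(self, page, offset):
        """Record an access; return the predicted next offset or None."""
        last, sig = self.pages.get(page, (offset, 0))
        delta = offset - last
        if delta != 0:
            # Train: this signature was followed by this delta.
            counts = self.pattern.setdefault(sig, {})
            counts[delta] = counts.get(delta, 0) + 1
            # Fold the delta into the signature.
            sig = ((sig << 3) ^ (delta & 0x3F)) & ((1 << SIG_BITS) - 1)
        self.pages[page] = (offset, sig)
        # Predict using the most frequent delta for the new signature.
        counts = self.pattern.get(sig)
        if not counts:
            return None
        best = max(counts, key=counts.get)
        return offset + best
```

The real prefetcher then walks this prediction chain speculatively (the "path") and throttles by accumulated confidence.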
Recent conferences such as ISCA, MICRO, and HPCA have included multiple kernel accelerators. Kernel accelerators are accelerators with minimal programmability that offload a particular kernel end-to-end. Implement one of these accelerators in the gem5-SALAM framework and evaluate the design. Some suggested accelerators:
Publishable result
By definition, DSAs seem incomparable to each other, yet they adopt a common underlying theme. Dally et al. and Hennessy and Patterson comment that while computation parallelism and density are important, DSAs must exploit locality and make few global accesses. Thus, most of the resources (area and energy) in current DSAs tend to be dedicated to organizing on-chip memory and fetching data from DRAM. DSAs exploit three main optimizations: i) DSA-specific data types: the early wave of DSAs predominantly needed to supply regular loop nests with dense data, but emerging DSAs work on non-indexed, metadata-based data structures, e.g., compressed sparse matrices [SpArch], graph nodes [GraphPulse], or database indexes [Walker]. ii) DSA-specific walkers: like CPUs, DSAs employ hardwired address generators and DRAM fetchers that maximize channel bandwidth. While address generators for DMAing dense arrays tend to be simple base+offset, state-of-the-art DSAs require complex walkers that reference multiple elements. iii) DSA-specific orchestration: finally, DSAs explicitly orchestrate data movement, overlap computation, and maximize DRAM channel utilization. DSAs leverage domain knowledge to pack/unpack data on-chip.
Multiple ideas can be explored in this context.
Phi, Coup: replicate the work in these papers and determine whether DSAs can benefit from similar techniques. Create a general framework for DSAs to exploit. Simulators:
Develop a framework for design-space exploration to optimize dynamic updates in applications such as graph processing (see GraphBolt).
Benchmarks: MachSuite (https://github.com/harvard-acc/ALADDIN/tree/master/MachSuite)
A number of custom DSAs have been created targeting specific algorithm kernels. However, the key challenge in a DSA is deciding what to leave programmable. Pick your favorite application domain and study the cost of keeping some component programmable or reconfigurable. The typical overheads of making a DSA programmable are: i) the cost of storing and retrieving the instructions from the associated RAM; ii) the cost of dynamically scheduling instructions to spatial resources; iii) the cost of transferring operands to the scheduled resources. The key benefit of making a DSA programmable is reusability and the elimination of dark silicon, i.e., many parts of the DSA remain active rather than sitting idle during program phases. For instance, in a heterogeneous CGRA, if a specialized PE is not utilized, its associated components such as register files and routers also go underutilized. In a homogeneous CGRA, such components tend to be shared, which enables better utilization. See this paper. Databases: queries.
Prior references
Suggested application domains: image processing, tensor processing, security, databases.
Spectre v1 exploits speculative execution to leak private data from memory. An expensive defense against such attacks is to disable speculative execution entirely. However, recent research has explored mechanisms with much lower performance impact. In this project, you need to compare the performance impact in gem5 of two such defenses: InvisiSpec and Speculative Taint Tracking.
…more ideas may be posted later.