Journal of Instruction-Level Parallelism
Championship Branch Prediction (CBP-4)
The JILP Workshop on Computer Architecture Competitions (JWAC) is a forum for holding competitions to evaluate computer architecture research topics. The fourth JWAC workshop is organized around a competition for branch prediction algorithms. The Championship Branch Prediction (CBP) invites contestants to submit their branch prediction code to participate in this competition. Contestants will be given a fixed storage budget to implement their best predictors on a common evaluation framework provided by the organizing committee.
The goal of this competition is to compare different branch prediction algorithms in a common framework. Predictors will be evaluated for conditional branches. Predictors must be implemented within a fixed storage budget as specified in the competition rules. The simple and transparent evaluation process enables dissemination of results and techniques to the larger computer architecture community and allows independent verification of results.
The championship will have three tracks, each for designing a conditional branch predictor with a different storage budget: 4KB, 32KB, and unlimited size. In each category, an additional budget of 1024 bits is allowed (for tracking global history, for example). The top performer in each track will receive a trophy commemorating his/her triumph (or some other prize to be determined later). Top submissions will be invited to present at the workshop, where results will be announced. All source code, write-ups, and performance results will be made publicly available through the JWAC-4 website.
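To make the budget accounting concrete, the sketch below tallies the storage of a hypothetical two-table predictor against the 4KB track (32,768 bits) plus the 1024-bit allowance. The table sizes and field widths are illustrative only and are not part of the competition rules.

    #include <cstdio>

    // Illustrative storage accounting for a hypothetical 4KB-track predictor.
    // The table sizes below are made up for the example; only the budget
    // figures (32,768 bits + 1024 bits) come from the competition rules.
    int main() {
        const int kBudgetBits = 4 * 1024 * 8;  // 4KB track = 32,768 bits
        const int kExtraBits  = 1024;          // additional allowance (e.g., global history)

        int bimodalBits = 8192 * 2;            // 8K entries x 2-bit counters = 16,384 bits
        int taggedBits  = 1024 * (2 + 8 + 2);  // 1K entries x (ctr + tag + useful) = 12,288 bits
        int historyBits = 64;                  // global history register, charged to the extra 1024 bits

        int tableTotal = bimodalBits + taggedBits;
        std::printf("table storage: %d / %d bits\n", tableTotal, kBudgetBits);
        std::printf("extra state  : %d / %d bits\n", historyBits, kExtraBits);
        return (tableTotal <= kBudgetBits && historyBits <= kExtraBits) ? 0 : 1;
    }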
Each submission should include an abstract, writeup, and predictor code. We should be able to simulate your predictor with a reasonable amount of memory (not exceeding 2GB) and within six hours of simulation time. Also, your predictor must not violate causality (it cannot use future information to predict the current branch). Furthermore, you are not allowed to spawn another thread from your predictor code.
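To illustrate these constraints, the following skeleton sketches the general shape of a submission: a single-threaded predictor whose prediction for a branch depends only on state accumulated from branches that have already resolved. The class and method names below are placeholders chosen for illustration; the actual interface is defined by the distributed framework.

    #include <cstdint>
    #include <cstdio>
    #include <vector>

    // Hypothetical predictor skeleton (placeholder names, not the framework's
    // real interface). It illustrates the causality rule: the prediction for a
    // branch may use only state built from branches that have already resolved
    // and been passed to Update(). No extra threads are created anywhere.
    class ExamplePredictor {
      std::vector<uint8_t> counters;   // 2-bit saturating counters
      uint64_t history = 0;            // global history of past outcomes only

     public:
      ExamplePredictor() : counters(1 << 14, 1) {}   // 16K entries, weakly not-taken

      // Called before the branch outcome is known.
      bool Predict(uint64_t pc) const {
        size_t idx = (pc ^ history) & (counters.size() - 1);
        return counters[idx] >= 2;
      }

      // Called after the branch resolves; state may change only here.
      void Update(uint64_t pc, bool taken) {
        size_t idx = (pc ^ history) & (counters.size() - 1);
        if (taken  && counters[idx] < 3) counters[idx]++;
        if (!taken && counters[idx] > 0) counters[idx]--;
        history = (history << 1) | (taken ? 1 : 0);
      }
    };

    int main() {
      ExamplePredictor p;
      bool guess = p.Predict(0x400123);   // predict first...
      p.Update(0x400123, true);           // ...then update with the resolved outcome
      std::printf("predicted %s\n", guess ? "taken" : "not taken");
    }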
The competition will proceed as follows. Contestants are responsible for implementing and evaluating their algorithm in the distributed framework. Submissions will be compiled and run with the original version of the framework. Quantitatively assessing the cost/complexity of predictors is difficult. To simplify the review process, maximize transparency, and minimize the role of subjectivity in selecting a champion, CBP-4 will make no attempt to assess the cost/complexity of predictor algorithms. All predictors must be implemented within the constraints of the budget for the chosen track. Clear documentation, in the code as well as in the paper writeup, must be provided to show that this is the case. Predictors will be scored on mispredictions per thousand instructions (MPKI) only. The arithmetic mean of the MPKIs across all 40 traces will be used as a predictor's final score.
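For concreteness, the MPKI for one trace is the misprediction count divided by the instruction count, multiplied by 1000, and the final score is the arithmetic mean of the per-trace MPKI values over the 40 traces. The sketch below computes this scoring for two made-up traces.

    #include <cstdio>
    #include <vector>

    // Final score = arithmetic mean of per-trace MPKI over all traces,
    // where MPKI = (mispredictions / instructions) * 1000.
    // The two example traces below use made-up counts.
    struct TraceResult { long long mispredicts; long long instructions; };

    double Mpki(const TraceResult& t) {
        return 1000.0 * t.mispredicts / t.instructions;
    }

    double FinalScore(const std::vector<TraceResult>& traces) {
        double sum = 0.0;
        for (const auto& t : traces) sum += Mpki(t);
        return sum / traces.size();
    }

    int main() {
        std::vector<TraceResult> traces = {
            {250000, 100000000},   // 2.5 MPKI
            {900000, 150000000},   // 6.0 MPKI
        };
        std::printf("final score = %.3f MPKI\n", FinalScore(traces));  // mean = 4.250
        return 0;
    }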
In the interest of assembling a quality program for workshop attendees and future readers, there will be an overall selection process, of which performance ranking is the primary component. To be considered, submissions must conform to the submission requirements described above. Submissions will be selected to appear in the workshop on the basis of the performance ranking, novelty, practicality of the predictor, and overall quality of the paper and commented code. Novelty is not a strict requirement; for example, a contestant may submit his/her previously published design or make incremental enhancements to a previously proposed design. In such cases, MPKI is a heavily weighted criterion, as is the overall quality of the paper (for example, analysis of new results on the common framework).
Alaa R. Alameldeen, Intel
Eric Rotenberg, NC State
Moinuddin Qureshi, Georgia Tech (Chair)
Alaa R. Alameldeen, Intel
Aamer Jaleel, Intel
Chris Wilkerson, Intel
Moinuddin Qureshi, Georgia Tech
Hyesoon Kim, Georgia Tech