The Journal of Instruction-Level Parallelism
5th JILP Workshop on Computer Architecture Competitions (JWAC-5):

Championship Branch Prediction (CBP-5)

in conjunction with:
ISCA-43  http://isca2016.eecs.umich.edu

The Workshop on Computer Architecture Competitions is a forum for holding competitions that evaluate computer architecture research topics. The fifth JWAC workshop is organized around a competition for branch prediction algorithms: the Championship Branch Prediction (CBP) invites contestants to submit their branch prediction code. Contestants will be given a fixed storage budget to implement their best predictors in a common evaluation framework provided by the organizing committee.

Objective

The goal of this competition is to compare different branch prediction algorithms in a common framework. Predictors will be evaluated on conditional branches and must be implemented within a fixed storage budget, as specified in the competition rules. The simple and transparent evaluation process enables dissemination of results and techniques to the larger computer architecture community and allows independent verification of results.


Prizes

The championship will have three tracks, each targeting a conditional branch predictor with a different storage budget: 8KB, 64KB, and unlimited size. In each category, an additional budget of 2048 bits is allowed (for tracking global history, for example). The top performer in each track will receive a trophy commemorating his/her triumph (or some other prize to be determined later). Top submissions will be invited to present at the workshop, where results will be announced. All source code, write-ups, and performance results will be made publicly available through the JWAC-5 website.
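
As a concrete illustration of the budget arithmetic, the C++ sketch below checks a hypothetical design against the 8KB track: 8KB of predictor state is 8 x 1024 x 8 = 65536 bits, with the separate 2048-bit allowance covering bookkeeping such as a global history register. All names and the example table size are illustrative assumptions, not part of the official kit.

```cpp
// Illustrative budget accounting for the 8KB track. The table size and
// all names here are hypothetical; the official kit defines how storage
// budgets are actually counted and audited.
#include <cstdint>

constexpr uint64_t kMainBudgetBits  = 8ULL * 1024 * 8;  // 8KB = 65536 bits
constexpr uint64_t kExtraBudgetBits = 2048;             // e.g. global history

// Example design: 32768 two-bit saturating counters fill the main budget
// exactly, and a 64-bit global history register is charged against the
// 2048-bit side allowance.
constexpr uint64_t kTableBits   = 32768ULL * 2;  // 65536 bits
constexpr uint64_t kHistoryBits = 64;

static_assert(kTableBits <= kMainBudgetBits,
              "predictor tables exceed the 8KB track budget");
static_assert(kHistoryBits <= kExtraBudgetBits,
              "history state exceeds the 2048-bit side allowance");

int main() { return 0; }  // the checks happen at compile time
```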


Submission Requirements

Each submission should include an abstract, a write-up, and predictor code. We should be able to simulate your predictor with a reasonable amount of memory (not exceeding 16GB) and within sixty hours of simulation time. Your predictor must not violate causality (it cannot use future information to predict the current branch). Furthermore, you are not allowed to spawn another thread from your predictor code. Finally, predictors are not allowed to "profile" traces in order to tune their algorithms for a particular trace or group of traces.
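
To make the causality rule concrete, here is a minimal gshare-style predictor sketch in C++. The interface (GetPrediction/UpdatePredictor) is an assumption modeled on typical championship kits rather than the exact CBP-5 API: a prediction may depend only on the branch PC and on the history of branches that have already resolved, and all state updates happen only after resolution.

```cpp
// Minimal gshare-style sketch. The interface below is an assumption
// modeled on typical championship kits, not the exact CBP-5 API.
#include <cstddef>
#include <cstdint>
#include <vector>

class SketchPredictor {
 public:
  SketchPredictor() : table_(1u << kLogEntries, 1) {}  // weakly not-taken

  // Causality preserved: the prediction uses only the branch PC and the
  // history of branches that have already resolved.
  bool GetPrediction(uint64_t pc) const {
    return table_[Index(pc)] >= 2;  // taken if counter is in upper half
  }

  // Called only once the branch outcome is known.
  void UpdatePredictor(uint64_t pc, bool taken) {
    uint8_t& ctr = table_[Index(pc)];
    if (taken && ctr < 3) ++ctr;
    if (!taken && ctr > 0) --ctr;
    history_ = (history_ << 1) | (taken ? 1 : 0);
  }

 private:
  static constexpr unsigned kLogEntries = 15;  // 32768 x 2-bit = 8KB of state

  std::size_t Index(uint64_t pc) const {
    return static_cast<std::size_t>((pc ^ history_) &
                                    ((1ULL << kLogEntries) - 1));
  }

  std::vector<uint8_t> table_;
  uint64_t history_ = 0;  // global history, within the 2048-bit allowance
};

int main() {
  SketchPredictor p;
  bool guess = p.GetPrediction(0x400123);       // predict first...
  p.UpdatePredictor(0x400123, /*taken=*/true);  // ...update after resolution
  (void)guess;
  return 0;
}
```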


For submission instructions, click here.


Competition Rules


The competition will proceed as follows. Contestants are responsible for implementing and evaluating their algorithms in the distributed framework. An initial set of 223 traces (200 traces of 100 million instructions each and 23 traces of 1 billion instructions each) will be released to the competitors with the distributed framework, along with weights for computing a weighted average MPKI. Submissions will be compiled and run with the original version of the framework.

Quantitatively assessing the cost/complexity of predictors is difficult. To simplify the review process, maximize transparency, and minimize the role of subjectivity in selecting a champion, CBP-5 will make no attempt to assess the cost/complexity of predictor algorithms. Instead, all predictors must be implemented within the constraints of the budget for the track of choice, and clear documentation, in the code as well as in the paper write-up, must be provided to show that this is the case. Competitors can choose not to compete in a particular budget category.

Predictors will be scored on a weighted average of Mispredictions Per Thousand Instructions (MPKI) over a final evaluation trace set supplied by the organizing committee; this set will not be the same as the initial set of traces released to the competitors with the evaluation framework. The final evaluation traces and weights will be made available to the public after the final evaluation.
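
As a sketch of how the score could be aggregated, the following C++ example computes a weight-normalized average MPKI over a set of per-trace results. The struct fields, example weights, and normalization are assumptions for illustration only; the kit defines the actual weights and reporting format.

```cpp
// Hypothetical scoring sketch: weighted average MPKI over a trace set.
#include <cstdint>
#include <iostream>
#include <vector>

// Assumed per-trace result record (illustrative, not from the kit).
struct TraceResult {
  uint64_t mispredictions;
  uint64_t instructions;
  double weight;  // per-trace weight supplied with the trace list
};

// MPKI for one trace = 1000 * mispredictions / instructions; the score
// here is the weight-normalized average across traces.
double WeightedMpki(const std::vector<TraceResult>& results) {
  double weightedSum = 0.0, weightTotal = 0.0;
  for (const TraceResult& r : results) {
    const double mpki =
        1000.0 * static_cast<double>(r.mispredictions) / r.instructions;
    weightedSum += r.weight * mpki;
    weightTotal += r.weight;
  }
  return weightTotal > 0.0 ? weightedSum / weightTotal : 0.0;
}

int main() {
  // Two made-up traces: 5 MPKI and 2 MPKI, equally weighted -> 3.5 MPKI.
  std::vector<TraceResult> results = {
      {500000, 100000000, 1.0},
      {200000, 100000000, 1.0},
  };
  std::cout << "weighted MPKI = " << WeightedMpki(results) << "\n";
  return 0;
}
```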


Acceptance Criteria


In the interest of assembling a quality program for workshop attendees and future readers, there will be an overall selection process, of which performance ranking is the primary component. To be considered, submissions must conform to the submission requirements described above. Submissions will be selected to appear in the workshop on the basis of performance ranking, novelty, practicality of the predictor, and overall quality of the paper and commented code. Novelty is not a strict requirement; for example, a contestant may submit his/her previously published design or make incremental enhancements to a previously proposed design. In such cases, MPKI is a heavily weighted criterion, as is overall quality of the paper (for example, analysis of new results on the common framework).



CBP-5 Kit: Download and Directions (including the training and final evaluation traces)      



Workshop Program  



Important Dates


Competition formally announced: January 31st, 2016

Evaluation framework available: January 31st, 2016

Submissions due: May 6th, 2016 at 11:59 PM CST

Acceptance notification: May 15th, 2016

Camera-ready version due: May 31st, 2016

Results announced: at the ISCA workshop (Saturday, June 18th, 2016)


Steering Committee

Alaa R. Alameldeen (Intel)

Hyesoon Kim (Georgia Tech)

Moinuddin Qureshi (Georgia Tech)


Organizing Committee

James Dundas (Samsung Austin R&D) (Chair)

Sandeep Gupta (Samsung Austin R&D)

JuHwan Kim (Samsung)

Fuzhou Jou (Samsung Austin R&D)

Maximilien Breughe (Samsung Austin R&D)


Program Chair

Trevor Mudge (The University of Michigan - Ann Arbor)


Program Committee

Stuart Biles (ARM Research)

Rami Sheikh (Qualcomm Research)

Ronald Dreslinski (The University of Michigan - Ann Arbor)

Nam Sung Kim (University of Illinois - Urbana Champaign)

Moinuddin Qureshi (Georgia Tech)

Murali M. Annavaram (University of Southern California)

Lieven Eeckhout (Ghent University)

Jared Stark (Intel)

Pierre Michaud (INRIA)

Sergio Schuler (Oracle)

David Roberts (AMD Research, Advanced Micro Devices Inc.)