2004 Championship Branch Prediction Workshop Agenda and Results

Agenda: Slides are provided for each presentation, and a writeup and code are provided for each predictor.  The code is provided as a uuencoded bzipped tar file (.tar.bz2.uue), formatted according to the submission directions.  The traces are not included with the code; they are distributed separately as part of the CBP framework, which is available as a separate download.  None of the predictors require the traces with data values and memory addresses, so you don't need to download those traces.
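
As an illustrative sketch (not part of the original distribution instructions), a .tar.bz2.uue archive can be unpacked in Python.  The file name below is hypothetical, and the stdlib uu module used here was removed in Python 3.13; on newer systems the uudecode and tar command-line tools do the same job.

    # Hedged sketch: unpack a hypothetical predictor.tar.bz2.uue archive.
    # Assumes Python < 3.13 (the stdlib uu module was removed in 3.13).
    import uu
    import tarfile

    # Step 1: strip the uuencoding, leaving an ordinary .tar.bz2 file.
    uu.decode("predictor.tar.bz2.uue", "predictor.tar.bz2")

    # Step 2: open the bzip2-compressed tar and extract the sources.
    with tarfile.open("predictor.tar.bz2", "r:bz2") as tar:
        tar.extractall("predictor")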

1:00 Introduction: Competition Overview and Workshop Agenda, Jared Stark, MRL/MTL Intel

       slides (ppt pdf)

1:10 Perceptrons for Dummies, Daniel A. Jiménez, Rutgers University

       slides (ppt pdf)

1:30 Idealized Piecewise Linear Branch Prediction, Daniel A. Jiménez, Rutgers University

       slides (ppt pdf) writeup code

1:50 A PPM-like, Tag-based Predictor, Pierre Michaud, IRISA/INRIA

       slides (ppt pdf) writeup code

2:10 Adaptive Information Processing: An Effective Way to Improve Perceptron Predictors

       Hongliang Gao and Huiyang Zhou, University of Central Florida

       slides (ppt pdf) writeup code

2:30 Break

3:00 A 2bcgskew Predictor Fused by a Redundant History Skewed Perceptron Predictor

       Veerle Desmet, Hans Vandierendonck, and Koen De Bosschere, Ghent University

       slides (ppt pdf) writeup code

3:20 The O-GEHL Branch Predictor, André Seznec, IRISA/INRIA

       slides (ppt pdf) writeup code

3:40 The Frankenpredictor, Gabriel Loh, Georgia Institute of Technology

       slides (ppt pdf) writeup code

4:00 Branch Prediction Caveats and Second-Order Effects, Phil Emma, IBM Research

       slides (ppt pdf)

4:20 Conclusion: Ranking of the Finalists, Anointing of the Champion, and “What Next?”

       Chris Wilkerson, MRL/MTL Intel

       slides (ppt pdf)

4:30 Adjourn

Results: The following two tables show the average mispredict rates, in mispredicts per 1000 instructions, for the six finalists.  Each finalist is identified by the first name of the first author of the predictor.  For reference, the mispredict rates of an equivalently sized gshare predictor are also given.  All CBP contestants were scored on the average mispredict rate across all traces (the ALL column in the tables).  The average mispredict rates across the floating-point (FP), integer (INT), multimedia (MM), and server (SERV) traces are also given for reference.
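
For concreteness, the scoring metric can be computed as follows.  This is only a sketch: the trace names and counts below are invented for illustration, not data from the competition.

    # Hedged sketch of the scoring metric: mispredicts per 1000
    # instructions, averaged across traces.  All values are made up;
    # the CBP framework reports the real numbers.
    def mispredicts_per_1000(mispredicts, instructions):
        return 1000.0 * mispredicts / instructions

    traces = {
        "FP-1":  (12_000, 30_000_000),   # (mispredicts, instructions)
        "INT-1": (95_000, 30_000_000),
    }
    rates = [mispredicts_per_1000(m, n) for m, n in traces.values()]
    print(sum(rates) / len(rates))       # the ALL score: mean over traces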

The first table is for the undistributed trace set, which, as its name implies, was not distributed to the finalists; it was used in the final round of the competition to rank the finalists and crown the champion.  The second table is for the distributed trace set, which was distributed to all contestants and used in the initial round to rank the contestants and choose the finalists.


Undistributed Trace Set (mispredicts per 1000 instructions)

Predictor    FP      INT     MM      SERV    ALL
Hongliang    0.483   2.917   3.590   3.304   2.574
André        0.567   3.467   3.627   2.845   2.627
Gabe         0.596   3.369   3.695   3.140   2.700
Daniel       0.629   2.963   3.884   3.492   2.742
Pierre       0.659   3.691   3.854   2.905   2.777
Veerle       0.617   3.210   3.670   3.732   2.807
GSHARE       1.553   4.691   5.484   6.351   4.520

Distributed Trace Set (mispredicts per 1000 instructions)

Predictor    FP      INT     MM      SERV    ALL
André        0.590   3.299   4.450   2.939   2.820
Hongliang    0.428   3.223   4.444   3.194   2.823
Daniel       0.678   3.237   4.567   3.327   2.952
Gabe         0.601   3.836   4.727   3.181   3.086
Pierre       0.672   3.811   4.934   2.989   3.101
Veerle       0.681   3.698   4.529   3.789   3.174
GSHARE       1.301   6.607   6.778   6.520   5.301