The Journal of Instruction-Level Parallelism
The 1st JILP Data Prefetching Championship (DPC-1)
Sponsored by: Intel, JILP, IEEE TC-uARCH
in conjunction with:
HPCA-15  http://www.comparch.ncsu.edu/hpca/

 

 

The first Data Prefetching Championship (DPC) is a competition for data prefetching algorithms. Contestants will be given a fixed storage budget to implement their best prefetching algorithms on a common evaluation framework provided by the DPC organizing committee.

What's New:

March 5, 2009: All results, code, papers, and slides are now available online!

Feb. 4, 2009: Final Program

Dec. 9, 2008: Submission Directions Now Available: Submission Directions

Sept. 5, 2008: Simulation Framework Now Available: Description, Download and Directions

Objective

The goal for this competition is to compare different data prefetching algorithms in a common framework. Prefetchers for both the level 1 (L1) and level 2 (L2) data caches must be implemented within a fixed storage budget as specified in the competition rules. Submissions will be evaluated based on their performance on a set of benchmarks on the framework provided by the organizing committee.

Prizes

The champion will receive a trophy commemorating his/her triumph. Selected top contestants will have their work published in a special issue of the Journal of Instruction-Level Parallelism (JILP). In addition to the top performance champion, the program committee will select the best technical paper from all submissions, which does not necessarily have to be that of the champion. All source code, write-ups and performance results will be made publicly available through the DPC website.

Submission Requirements

Each contestant is allowed a maximum of three submissions to the competition. Each submission should include the following:

o     Abstract: A 300-word abstract summarizing the submission. In addition, the abstract should include the author names, their affiliations, and the email address of the contact author.

o     Paper: This will be a conference-quality write-up of the prefetching algorithm, including references to relevant related work. The paper must clearly describe how the algorithm works, how it is practical to implement, and how it conforms to the contest rules. The paper must be written in English and formatted as follows: no more than four pages, single-spaced, two-column format, minimum 10pt Times New Roman font. The paper should be submitted in .pdf format, and should be printable on letter-size paper with one-inch margins. A submission will be disqualified if the paper does not clearly describe the algorithm that corresponds to the submitted code. Papers that do not conform to the length and format rules will only be reviewed at the program committee’s discretion.

o     Prefetcher code: A single C++ header file that can be included in the provided infrastructure must be submitted along with the paper as a separate file. This code must be well commented so that it can be understood and evaluated; unreadable or insufficiently documented code will be rejected by the program committee. The file sample_prefetcher.h in the infrastructure implements a simple prefetcher and should be replaced with the contestant’s code. The prefetcher code must compile and run on the existing infrastructure without changes to any other code or the Makefile, and must NOT require any library code that is not part of standard C++.
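The exact prefetcher interface is fixed by the sample_prefetcher.h header distributed with the kit and is not reproduced here. Purely as an illustration of the kind of self-contained, commented, standard-C++ code expected, the sketch below implements a classic PC-indexed stride prefetcher; the class and method names (StridePrefetcher, on_access) are hypothetical and not part of the kit, and a real submission would also cap the table size to fit the fixed storage budget.

```cpp
#include <cstdint>
#include <unordered_map>
#include <vector>

// Illustrative PC-indexed stride prefetcher (hypothetical interface, not
// the DPC kit's). For each load PC, remember the last address and stride;
// when the same stride repeats, prefetch one block ahead.
class StridePrefetcher {
    struct Entry {
        uint64_t last_addr = 0;
        int64_t  stride    = 0;
        bool     valid     = false;
    };
    // NOTE: unbounded for brevity; a real entry would be sized and the
    // table capped to stay within the contest's fixed storage budget.
    std::unordered_map<uint64_t, Entry> table_;  // indexed by load PC

public:
    // Called on each demand access; returns addresses to prefetch (if any).
    std::vector<uint64_t> on_access(uint64_t pc, uint64_t addr) {
        std::vector<uint64_t> out;
        Entry &e = table_[pc];
        if (e.valid) {
            int64_t s = static_cast<int64_t>(addr) -
                        static_cast<int64_t>(e.last_addr);
            if (s != 0 && s == e.stride)
                out.push_back(addr + s);  // stride confirmed: go one ahead
            e.stride = s;
        }
        e.last_addr = addr;
        e.valid = true;
        return out;
    }
};
```

For a stream of accesses at PC 1 to addresses 0x1000, 0x1040, 0x1080, the third access confirms the 0x40 stride and yields a prefetch of 0x10C0.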

More details on where to submit these files will be available soon.

Competition Rules

The competition will proceed as follows. Each contestant is responsible for implementing and evaluating his/her algorithm in the distributed framework. The framework itself is provided as a binary and cannot be modified, except for the prefetcher header file that implements the prefetching algorithm. Submissions will be compiled and run with the original version of the framework, and contestants will be ranked on the measured performance of their prefetching algorithms. Each contestant receives three scores, one for each of three configurations; each score is the geometric mean of the prefetching algorithm's speedups across a set of (undistributed) benchmarks. The overall score is the sum of these three scores. For example, a contestant with scores of 1.05, 1.03 and 0.99 for the three configurations has an overall score of 3.07.
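The scoring arithmetic above can be written out directly; the function names below are illustrative only, not part of the kit:

```cpp
#include <cmath>
#include <vector>

// Score for one configuration: the geometric mean of the per-benchmark
// speedups, computed in log space for numerical robustness.
double config_score(const std::vector<double>& speedups) {
    double log_sum = 0.0;
    for (double s : speedups) log_sum += std::log(s);
    return std::exp(log_sum / static_cast<double>(speedups.size()));
}

// Overall score: the sum of the three per-configuration scores.
double overall_score(double c1, double c2, double c3) {
    return c1 + c2 + c3;
}
```

With the worked example from the rules, overall_score(1.05, 1.03, 0.99) gives 1.05 + 1.03 + 0.99 = 3.07.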

Acceptance Criteria

In the interest of assembling a quality program for workshop attendees and future readers, there will be an overall selection process in which performance ranking is a key component, but not the sole one.

To be considered, submissions must conform to the submission requirements described above.

Submissions will be selected to appear in the workshop on the basis of performance ranking, novelty, and the overall quality of the paper and commented code. Novelty is not a strict requirement; for example, a contestant may submit his/her previously published prefetchers or make incremental enhancements to previously proposed prefetchers. In such cases, performance is a heavily weighted criterion, as is the overall quality of the paper (for example, analysis of new results on the common framework). Conversely, a very novel submission that is not necessarily a top performer will be considered not just on performance but also on the insights it provides.

Description of the Simulation Infrastructure

DPC Kit: Download and Directions

Important Dates

Competition formally announced at ISCA: June 21, 2008

Evaluation framework available: September 5, 2008

Submissions due: December 12, 2008

Acceptance notification: January 16, 2009

Final version due: January 30, 2009

Results announced: at the workshop (Feb. 15, 2009)

 

 


Steering Committee
Alaa R. Alameldeen, Intel
Eric Rotenberg, North Carolina State Univ.

Organizing Committee
Alaa R. Alameldeen, Intel
Zeshan Chishti, Intel
Aamer Jaleel, Intel
Daniel Luchi, Intel
Chris Wilkerson, Intel

Program Chair
Eric Rotenberg, North Carolina State Univ.

Program Committee
Yuan Chou, Sun
David Kaeli, Northeastern
Alvin Lebeck, Duke
Kyle Nesbit, Google
Suleyman Sair, Intel