EPIC architectures rely heavily on state-of-the-art compiler technology to deliver high performance while keeping the hardware design simple. It is generally believed that an optimizing compiler has an enormous scheduling window for exploiting instruction-level parallelism (ILP), since the compiler sees the entire program. In practice, however, state-of-the-art compilers typically confine optimizations to loop boundaries (e.g. software pipelining, trace scheduling, and loop unrolling) and function boundaries (e.g. loop peeling, loop exchanges, invariant hoisting, and global optimizations). Although techniques such as function inlining and interprocedural optimization can relax these constraints to a limited extent, loop and function boundaries remain the effective scopes of the compiler scheduler. Several previous ILP studies have explored the limits of parallelism on dynamic superscalar machines; those results, however, do not carry over to EPIC architectures, because such machines rely on dynamic instruction reordering rather than static code scheduling by the compiler. In this paper, we evaluate the limits of ILP obtained through compiler scheduling alone. We quantify these limits as progressively more restrictive scheduling constraints are imposed: starting from inter-procedural code scheduling, narrowing to intra-procedural scheduling, and finally to loop-confined code scheduling.
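To make the notion of loop-confined scheduling concrete, the following sketch (not taken from the paper) shows loop unrolling, one of the loop-scoped transformations named above. The function names and the 4-way unroll factor are illustrative choices: by splitting a single serial accumulator into four independent accumulator chains, the compiler exposes ILP entirely within the loop body, i.e. without looking beyond the loop boundary.

```c
#include <stddef.h>

/* Original loop: one serial dependence chain through `sum`, so
 * consecutive additions cannot issue in parallel. */
int sum_serial(const int *a, size_t n) {
    int sum = 0;
    for (size_t i = 0; i < n; i++)
        sum += a[i];
    return sum;
}

/* Unrolled by 4 with separate accumulators: four independent
 * dependence chains that a wide-issue (e.g. EPIC) machine can
 * execute in parallel. The transformation needs only the loop
 * body, illustrating loop-confined scheduling scope. */
int sum_unrolled(const int *a, size_t n) {
    int s0 = 0, s1 = 0, s2 = 0, s3 = 0;
    size_t i = 0;
    for (; i + 4 <= n; i += 4) {
        s0 += a[i];
        s1 += a[i + 1];
        s2 += a[i + 2];
        s3 += a[i + 3];
    }
    for (; i < n; i++)   /* remainder loop for n not divisible by 4 */
        s0 += a[i];
    return s0 + s1 + s2 + s3;
}
```

Both versions compute the same result; only the schedulable parallelism inside the loop differs.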