CS164: Tournament Results

Preliminary Round

The following are the top five rankings among teams that successfully completed the checkpoint tests exp.py and prime.py. The programs that were actually run differ slightly from those supplied with the skeleton; the changes are confined to the parameters supplied (number of primes, values exponentiated) and to the comments.

#cycles (smaller = better)
Rank  Team     exp.py  prime.py   Total
  1   Team26    29263    524108  553371
  2   Team14    25482    627884  653366
  3   Team29    25006    737634  762640
  4   Team05    26781    741241  768022
  5   Team31    25086    813643  838729

Final Round

The following are the top five rankings among teams that successfully completed the five project benchmarks. Again, the programs that were actually run differ slightly from those supplied with the skeleton; the changes are confined to the parameters supplied (number of primes, values exponentiated) and to the comments. In addition, for this round we threw in a dummy use of input() to prevent certain optimizations that, while allowed by the rules, are not in the true spirit of benchmarking (but see the Addendum below).
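For concreteness, here is one guess at what such a dummy use of input() might look like; the actual modified benchmarks are not shown here, so this is purely illustrative. The point is only that once the program performs input, its output can no longer be treated as a compile-time constant.

    # Hypothetical sketch, not the actual modified benchmark code.
    _ = input()                 # dummy read; its value is never used,
                                # but the program now formally does input

    def exp(base, power):       # the benchmark's real work is unchanged
        result = 1
        for _ in range(power):
            result *= base
        return result

    print(exp(3, 20))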

#cycles (smaller = better)
Rank  Team     tree.py  stdlib.py  exp.py  prime.py  sieve.py    Total
  1   Team31    233702      30577   21004    563054     85999   934336
  2   Team29    246923      41420   27087    739715    107462  1162607
  3   Team14    300071      37990   27190    792633    116967  1274851
  4   Team26    354495     189553   34071    647056    135996  1361171
  5   Team15    314651      48231   31943    994991    123133  1512949

Final Round: Addendum

Two projects used an (ahem) unorthodox approach. Taking advantage of the fact that programs that do no input will always produce the same results, they precomputed the outcomes of executed and simply generated code that printed a string consisting of those results. Here are the results for these projects on versions of the benchmarks that do not do input.
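To make the trick concrete, here is a minimal sketch of the idea in ordinary Python (not any team's actual code, and not using the project's compiler framework): run the no-input benchmark once at "compile time", capture everything it prints, and emit a target program whose only job is to print that captured string.

    # Hypothetical sketch of the precompute-the-output trick described above.
    import io
    import contextlib

    def compile_by_precomputation(source: str) -> str:
        """Return target 'code' that just prints the benchmark's fixed output."""
        buffer = io.StringIO()
        with contextlib.redirect_stdout(buffer):
            exec(source, {})          # run the benchmark once, at compile time
        output = buffer.getvalue()
        return f"print({output!r}, end='')"   # emitted program: one print

    # Example: the generated "program" for a tiny benchmark
    print(compile_by_precomputation("print(sum(range(10)))"))

Since the emitted program does nothing but print a constant string, its cycle count is essentially independent of how expensive the original benchmark was, which is why the numbers below are so small.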

#cycles (smaller = better)
Rank  Team     tree.py  stdlib.py  exp.py  prime.py  sieve.py  Total
  1   Team14         3          3       3         3         3     15
  2   Team20         4          4       4         4         4     20