Remote Testing for PA5 (11/19/2004)

PA5 remote testing is similar to PA4's. The only difference is that the reference won't try to detect all the possible semantic errors in a testcase. Instead, it will detect the first semantic error it finds and then exit. Therefore, remote testing will say "pass" as long as the reference finds a semantic error and your solution finds one, regardless of the error type. HOWEVER, IN REAL GRADING, ONLY VALID SEMANTIC ERROR REPORTING WILL BE GIVEN CREDIT. That means if a testcase contains multiple errors, you will get credit only if you report at least one of them; if the error you report doesn't belong to the valid error set, you won't get credit. In other words, real grading is stricter than remote testing. It's your responsibility to make sure the error you report actually exists in the testcase.
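To make the two rules concrete, here is a minimal Java sketch of the pass/credit logic described above. All names are illustrative; this is not the actual grading code.

    import java.util.Set;

    class Pa5Rules {
        // Remote testing: "pass" as long as both the reference and your
        // solution report some semantic error, regardless of the error type.
        static boolean remoteTestPasses(boolean referenceFoundError, boolean youFoundError) {
            return referenceFoundError && youFoundError;
        }

        // Real grading: credit only if the error you report belongs to the
        // set of errors that actually exist in the testcase.
        static boolean realGradingCredit(Set<String> validErrors, String reportedError) {
            return reportedError != null && validErrors.contains(reportedError);
        }
    }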

Remote testing will run from 11/19 to 12/12 11:59PM, every 4 hours. We strongly encourage you to start working on PA5 early for the following reasons:

  1. Past experience tells us we will probably decrease the testing frequency as the deadline approaches.
  2. You get more time to study for finals.
  3. You get spare time to think about PA6 and get extra credit for this course.
  4. Course staff will be less responsive toward the end of the semester because we have to grade your projects and finish up our own projects/finals.

The collection script will run from 12/7 to 12/12 11:59PM. You have to commit your DONE file before the last collection runs.


Remote Testing for PA4 (10/31/2004)

In PA4, you will be building a decaf compiler that compiles .decaf files to .s files. As in PA1, your testcases will be the .decaf files, and they have no input files.

The remote tester will take your .decaf files and compile them with both your compiler and ours. The tester then runs gcc on the two sets of .s files, producing two sets of executable binaries. At the end, we run the binaries and compare their output.
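As a rough illustration, the pipeline looks something like the Java sketch below. The compiler command names and flags are assumptions for illustration, not the real tester.

    import java.io.File;
    import java.nio.file.Files;

    class Pa4Pipeline {
        // Run a command, capture its combined stdout/stderr, and return it.
        static String run(String... cmd) throws Exception {
            File out = File.createTempFile("out", ".txt");
            Process p = new ProcessBuilder(cmd)
                    .redirectErrorStream(true)
                    .redirectOutput(out)
                    .start();
            p.waitFor();
            return Files.readString(out.toPath());
        }

        static boolean passes(String testcase) throws Exception {
            run("your-decaf-compiler", testcase, "-o", "yours.s"); // your compiler
            run("ref-decaf-compiler", testcase, "-o", "ref.s");    // reference compiler
            run("gcc", "yours.s", "-o", "yours.bin");              // assemble both .s files
            run("gcc", "ref.s", "-o", "ref.bin");
            // Run both binaries and compare their runtime output.
            return run("./yours.bin").equals(run("./ref.bin"));
        }
    }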

If a .decaf file contains semantic errors, we will compare the error output of your compiler with the reference compiler's. Since you are not required to output all errors found, we will consider your compiler to pass if its messages are a subset of ours. However, if for the same testcase your compiler doesn't find any error but the reference finds some, the testcase is considered failed.
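In code, this comparison rule amounts to something like the following sketch, where error messages are modeled as plain strings:

    import java.util.Set;

    // A sketch of the pass rule for error-containing testcases (not the real tester).
    static boolean errorCasePasses(Set<String> referenceErrors, Set<String> yourErrors) {
        if (yourErrors.isEmpty()) {
            return referenceErrors.isEmpty(); // finding nothing fails if the reference found errors
        }
        return referenceErrors.containsAll(yourErrors); // your messages must be a subset of ours
    }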

It's easy to write testcases that have infinite loops in decaf. We prevent this by setting a 20-second timeout for each executable. We reserve the right to reduce this timeout allowance as the remote tester gets more load. We generally discourage loops with unnecessary iterations because they usually don't expose more bugs and only delay other people's remote testing results. Please note that the more testcases you have, the longer it takes to finish one round of remote testing.
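For reference, a per-binary timeout like ours can be enforced roughly as follows (a sketch, not the actual tester code):

    import java.io.File;
    import java.util.concurrent.TimeUnit;

    // A sketch of the 20-second timeout: kill any binary that runs too long.
    static boolean runsInTime(String binaryPath) throws Exception {
        Process p = new ProcessBuilder(binaryPath)
                .redirectErrorStream(true)
                .redirectOutput(new File("run.out")) // capture output for later comparison
                .start();
        if (!p.waitFor(20, TimeUnit.SECONDS)) {      // current timeout allowance
            p.destroyForcibly();                     // e.g., an infinite loop in decaf
            return false;                            // treated as a failed run
        }
        return true;
    }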

Remote testing will be running from 10/31/2004 to 11/13/2004 midnight.


Remote Testing for PA3 (10/16/2004)

Remote testing for PA3 is similar to PA2's. Your testcases will be the .dpar files and the inputs will be the .decaf files. The same convention is followed: each testcase can have multiple input files associated with it by prefixing the input files' names with the testcase's name.
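For example, assuming the same underscore convention that PA2 uses, a parser spec with two associated inputs would be named (MyGrammar, test1, and test2 are made-up names):

    MyGrammar.dpar
    MyGrammar_test1.decaf
    MyGrammar_test2.decaf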

Rules:

1) For each .dpar file you create, we will feed it to both our parser generator and yours. With the generated parser, we run each of the .decaf files associated with the parser spec through your LL1Parser.java and compare the results. Note that you also need a token spec file for running the parser generator and the generated parser. In PA3, we will use the decaf.dtok provided in the starter kit for all testing, because the lexer we use was generated from decaf.dlex.

2) There is one exception to rule 1). Since writing decaf-with-actions.dpar is part of your assignment, to assist you with testing it, we will use our own decaf-with-actions.dpar to test against yours.

3) In each .dpar file you create, you can put in your own actions. Our generator will generate those actions too, so when we run our generated parser, your actions will be there and get executed.

4) Our testing stub will examine the object on top of your semantic stack after parsing. If it's an ASTNode object, we will use ASTPrinter to print it out for comparison. For any other kind of object, we will call its toString() method and print out whatever it returns.
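In code, the stub's comparison logic in rule 4) is roughly the following sketch. The exact ASTPrinter interface is an assumption for illustration.

    // A sketch of what the testing stub does after parsing (not the real stub).
    Object top = semanticStack.peek();
    if (top instanceof ASTNode) {
        new ASTPrinter().print((ASTNode) top);  // pretty-print the AST for comparison
    } else {
        System.out.println(top.toString());     // otherwise compare plain toString() output
    }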


Updates on Remote Testing PA2 (09/24/2004)

In PA2 remote testing, since implementing decaf.dlex is part of your assignment, we will help you test it. Here is what will happen:

  1. We apply your decaf.dlex to your LexerCodeGenerator.java to generate LexerCode.java(1).
  2. We apply our fully implemented decaf.dlex to the reference LexerCodeGenerator.java to generate LexerCode.java(2).
  3. LexerCode.java(1) will be used to recognize your decaf input files.
  4. LexerCode.java(2) will be used to recognize your decaf input files.
  5. We will then compare the outputs of steps 3 and 4.
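Schematically, the five steps amount to the Java sketch below. The method names (generateFrom, tokenize) and the two generator parameters are placeholders, not the actual API.

    // A schematic of steps 1-5 above; all method names are placeholders.
    static boolean decafDlexPasses(LexerCodeGenerator yourGen, LexerCodeGenerator refGen,
                                   String decafInput) throws Exception {
        LexerCode yours = yourGen.generateFrom("your/decaf.dlex", "decaf.dtok"); // step 1
        LexerCode ref = refGen.generateFrom("ref/decaf.dlex", "decaf.dtok");     // step 2
        String out1 = yours.tokenize(decafInput); // step 3: your lexer on your input
        String out2 = ref.tokenize(decafInput);   // step 4: reference lexer on the same input
        return out1.equals(out2);                 // step 5: compare the two outputs
    }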

However, for all other .dlex files, we will apply your testcase file to both your LexerCodeGenerator.java and ours. In these cases, we don't guarantee that the reference solution will output the correct line numbers.


Remote Testing for PA2

In PA2, there are two kinds of files you can create to do remote testing.

  1. .dlex and .dtok files
    .dlex and .dtok files are the lexer spec and token spec that LexerCodeGenerator.java uses to produce LexerCode.java. Remote testing assumes .dlex and .dtok files come in pairs whose filenames differ only in their extensions. That is, when you create a new lexer spec MySpec.dlex for testing, you have to name the token spec MySpec.dtok.
  2. .decaf files
    For every dlex-dtok pair you create, you can associate several .decaf files with it, so the lexer generated from the pair will use the corresponding .decaf files for testing. The way to associate abc.decaf with MySpec.dlex and MySpec.dtok is to rename abc.decaf to MySpec_abc.decaf.

Here is an example that illustrates the naming convention described above:
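    MySpec.dlex
    MySpec.dtok
    MySpec_abc.decaf
    MySpec_def.decaf

Here MySpec_abc.decaf and MySpec_def.decaf are both inputs for the lexer generated from the MySpec pair (def, like abc, is a made-up name for illustration).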

In PA2, to ensure the results are comparable, we will use our own code to invoke your LexerCodeGenerator and LookaheadLexer. The testing methodology is similar to your part 4: we will call LookaheadLexer with the LexerCode.java generated from each dlex-dtok pair to do the actual lexing.