Proj1 Notes - cs61c-rc
Common mistakes, point deductions, and error codes follow. If you got some sort of letter-number combination in your comments and you're wondering what it means, you're in the right place. Unless otherwise specified, an error is worth half a point, though half-point deductions may be waived for good programming practices or when more major errors already dropped your score substantially.
I wasn't able to provide much analysis of common mistakes for this project because I was mostly looking at autograder results, and this project is more time-consuming to look through. Some test cases were thrown off by randomness; I'm not exactly sure how the autograder combats randomness (my guess is that it fixes the random seed, which could be thrown off if you used a different random generator than the TAs expected). Obviously I didn't deduct for this, but it's why there were a lot of false negatives on the autograder tests and why grading took some time.
General
- IMPORTANT: There are a handful of people whose code caused the autograder to segfault on numerous tests. Before you go and panic, DON'T PANIC. If this happened to you, I made a note and entered the current autograder result (adjusted for minor differences) for now, but I will do some in-depth manual testing once my schedule clears up a bit (my last midterm is on Thursday). Please email me as soon as you know that you were one of these people so that I won't forget you. Also, if I haven't made any updates by Saturday 4/10, bug me until I fix it. I will probably end up doing some holistic grading/partial credit for anyone who genuinely failed most of the tests, because giving low scores is lame. If you failed a lot of tests, you will probably get around half points for each failed test, assuming you attempted to fill in the parts you needed to fill in. Consider the entered score a baseline and a reason to contact me.
Rubric
12 points for part 0 (6 tests, 2 each)
- no savefile
- test1.save - no items and monsters
- test2.save - some items
- test3.save - some monsters
- test4.save - items and monsters
- test5.save - max items and some monsters
12 points for part 1 (6 tests, 2 each)
- add_item() and num_item()
- find_item()
- delete_item()
- __make_inventory_iterator() and next_item()
- remove_last_item()
- delete_inventory_iterator()
8 points for part 2
- full points for passing the test, partial credit assigned sensibly =)
8 points for part 3
- same as part 2
Part 0
- For some reason, some people failed test 5, which placed 3 monsters in each of two different rooms; upon loading the save, all 6 appeared in the same room. I haven't looked into why this happens.
Part 1
Part 2
- Another odd thing I noticed: some people didn't gain any experience when killing monsters in this part. This doesn't seem to have anything to do with items, so I'm not sure why it happened. Mostly a note for future automated tests, although I'm curious whether this ever came up for anyone while working on the project.
Part 3
- 3.0 - Not passing arguments correctly to spells results in the fireball spell not giving an error message when cast without specifying a target. Some people caught this error and printed an error message in their cast() function, which I didn't deduct for, but ideally you should have let the fireball() function do its thing. I didn't deduct for this because the spec is REALLY unclear about what an improper number of arguments means.