Non-containerized Grading#
Otter supports programmatic or command-line grading of assignments without requiring the use of Docker as an intermediary. This functionality is designed to allow Otter to run in environments that do not support containerization, such as on a user’s JupyterHub account. If Docker is available, it is recommended that Otter Grade is used instead, as non-containerized grading is less secure.
To grade locally, Otter exposes the otter run command for the command line and the otter.api module for running Otter programmatically. The use of both is described in this section.
Before using Otter Run, you should have generated an autograder configuration zip file.
Otter Run works by creating a temporary grading directory using the tempfile library and replicating the autograder tree structure in that folder. It then runs the autograder there as normal. Note that Otter Run does not run environment setup files (e.g. setup.sh) or install requirements, so any requirements should already be available in the environment being used for grading.
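For example, if the assignment's Python dependencies are listed in a requirements.txt file (a hypothetical path; use whichever file you maintain), you can install them into the grading environment yourself before invoking Otter Run:
pip install -r requirements.txt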
Grading from the Command Line#
To grade a single submission from the command line, use the otter run utility. It has one required argument, the path to the submission to be graded, and runs Otter in a separate directory structure created with tempfile. Use the optional -a flag to specify the path to your configuration zip file if it is not at the default path ./autograder.zip. Otter Run writes a JSON file containing the grading results to {output_path}/results.json (output_path can be configured with the -o flag and defaults to ./).
If I wanted to use Otter Run on hw00.ipynb, I would run
otter run hw00.ipynb
If my autograder configuration file were at ../autograder.zip, I would run
otter run -a ../autograder.zip hw00.ipynb
Either of the above will produce the results file at ./results.json.
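If you want to inspect the results file programmatically, a minimal sketch (assuming the default output path ./results.json; the exact keys in the JSON depend on your configuration) is:
import json

# Load the grading results written by Otter Run.
with open("results.json") as f:
    results = json.load(f)

# Pretty-print whatever was written; the structure depends on your configuration.
print(json.dumps(results, indent=2))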
For more information on the command-line interface for Otter Run, see the CLI Reference.
Grading Programmatically#
Otter includes an API through which users can grade assignments from inside a Python session, encapsulated in the submodule otter.api. The main method of the API is otter.api.grade_submission, which takes in a submission path and an autograder configuration file path, grades the submission, and returns the GradingResults object produced during grading.
For example, to grade hw00.ipynb with an autograder configuration file in autograder.zip, I would run
from otter.api import grade_submission
grade_submission("hw00.ipynb", "autograder.zip")
grade_submission has an optional argument quiet which, when set to True (default False), suppresses anything printed to the console by the grading process during execution.
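Because grade_submission returns a GradingResults object (documented below), you can work with the scores directly; a minimal sketch:
from otter.api import grade_submission

# Grade the submission without printing grading output to the console.
results = grade_submission("hw00.ipynb", "autograder.zip", quiet=True)

# Points earned out of points possible across all test files.
print(results.total, "/", results.possible)

# Human-readable report of all test case results.
print(results.summary())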
For more information about grading programmatically, see the API reference.
Grading Results#
This section describes the object that Otter uses to store and manage test case scores when grading.
- class otter.test_files.GradingResults(test_files: List[TestFile], notebook: NotebookNode | None = None)#
Stores and wrangles test result objects.
Initialize with a list of otter.test_files.abstract_test.TestFile subclass objects and this class will store the results as named tuples so that they can be accessed/manipulated easily. Also contains methods to put the results into a nice dict format or into the correct format for Gradescope.
- Parameters:
results (list[TestFile]) – the list of test file objects summarized in this grade
whether all results should be hidden from the student on Gradescope
- clear_results()#
Empties the dictionary of results.
- classmethod from_ottr_json(ottr_output)#
Creates a GradingResults object from the JSON output of Ottr (Otter's R client).
- Parameters:
ottr_output (str) – the JSON output of Ottr as a string
- Returns:
the Ottr grading results
- Return type:
GradingResults
- get_plugin_data(plugin_name, default=None)#
Retrieves data for plugin plugin_name in the results.
This method uses dict.get to retrieve the data, so a KeyError is never raised if plugin_name is not found; rather, it returns None.
- Parameters:
plugin_name (str) – the importable name of a plugin
default (any) – a default value to return if plugin_name is not found
- Returns:
the data stored for plugin_name if found
- Return type:
any
- get_result(test_name)#
Returns the TestFile corresponding to the test with name test_name.
- Parameters:
test_name (str) – the name of the desired test
- Returns:
the graded test file object
- Return type:
TestFile
- get_score(test_name)#
Returns the score of a test tracked by these results
- Parameters:
test_name (str) – the name of the test
- Returns:
the score
- Return type:
int or float
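For example, given a GradingResults object results (such as one returned by otter.api.grade_submission), you can look up a single question's graded test file and score; the test name "q1" below is hypothetical:
# Retrieve the graded TestFile object and the score for one test.
tf = results.get_result("q1")
print(tf)                       # the graded test file object
print(results.get_score("q1"))  # the score earned on that test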
- has_catastrophic_failure()#
Returns whether these results contain a catastrophic error (i.e. an error that prevented submission results from being generated or read).
- Returns:
whether there is such an error
- Return type:
bool
- hide_everything()#
Indicates that all results should be hidden from students on Gradescope.
- notebook: NotebookNode | None#
the executed notebook with outputs that gave these results
- output: str | None#
a string to include in the output field for Gradescope
- property passed_all_public#
whether all public tests in these results passed
- Type:
bool
- pdf_error: Exception | None#
an error thrown while generating/submitting a PDF of the submission to display to students in the Gradescope results
- property possible#
the total points possible
- Type:
int | float
- results: Dict[str, TestFile]#
maps test/question names to their TestFile objects (which store the results)
- set_output(output)#
Updates the output field of the results JSON with text relevant to the entire submission. See https://gradescope-autograders.readthedocs.io/en/latest/specs/ for more information.
- Parameters:
output (str) – the output text
- set_pdf_error(error: Exception)#
Set a PDF generation error to be displayed as a failed (0-point) test on Gradescope.
- Parameters:
error (Exception) – the error thrown
- set_plugin_data(plugin_name, data)#
Stores plugin data for plugin plugin_name in the results. data must be picklable.
- Parameters:
plugin_name (str) – the importable name of a plugin
data (any) – the data to store; must be serializable with pickle
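A sketch of how plugin data might be round-tripped through this method and get_plugin_data (the plugin name and data below are hypothetical):
# Store picklable data under a plugin's importable name, then read it back.
results.set_plugin_data("mypackage.MyOtterPlugin", {"attempts": 3})
print(results.get_plugin_data("mypackage.MyOtterPlugin"))  # {'attempts': 3}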
- summary(public_only=False)#
Generate a summary of these results and return it as a string.
- Parameters:
public_only (bool) – whether only public test cases should be included
- Returns:
the summary of results
- Return type:
str
- property test_files#
the names of all test files tracked in these grading results
- Type:
list[TestFile]
- to_dict()#
Converts these results into a dictionary, extending the fields of the named tuples in results into key, value pairs in a dict.
- Returns:
the results in dictionary form
- Return type:
dict
- to_gradescope_dict(ag_config)#
Convert these results into a dictionary formatted for Gradescope’s autograder.
- Parameters:
ag_config (otter.run.run_autograder.autograder_config.AutograderConfig) – the autograder config
- Returns:
the results formatted for Gradescope
- Return type:
dict
- to_report_str()#
Returns these results as a report string generated using the __repr__ of the TestFile class.
- Returns:
the report
- Return type:
str
- property total#
the total points earned
- Type:
int | float
- update_score(test_name, new_score)#
Override the score for the specified test file.
- Parameters:
test_name (str) – the name of the test file
new_score (int | float) – the new score
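For example, to manually override a question's score (the test name and point value below are hypothetical):
# Give question "q1" a new score regardless of the autograded result.
results.update_score("q1", 5)
print(results.get_score("q1"))  # should now reflect the overridden score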
- verify_against_log(log, ignore_hidden=True) → List[str]#
Verifies these scores against the results stored in this log using the results returned by Log.get_results for comparison. A discrepancy occurs if the scores differ by more than the default tolerance of math.isclose. If ignore_hidden is True, hidden tests are ignored when verifying scores.
- Parameters:
log (otter.check.logs.Log) – the log to verify against
ignore_hidden (bool) – whether to ignore hidden tests during verification
- Returns:
a list of error messages for discrepancies; if none were found, the list is empty
- Return type:
list[str]
- classmethod without_results(e)#
Creates an empty results object that represents an execution failure during autograding.
The returned results object will alert students and instructors to this failure, providing the error message and traceback to instructors, and report a score of 0 on Gradescope.
- Parameters:
e (Exception) – the error that was thrown
- Returns:
the results object
- Return type:
GradingResults
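As a sketch of how this might be used, an autograding wrapper could fall back to an empty results object when execution fails (the error below is a stand-in):
from otter.test_files import GradingResults

try:
    raise RuntimeError("submission could not be executed")  # stand-in for a grading failure
except Exception as e:
    # Build a results object that reports the failure and scores the submission 0 on Gradescope.
    results = GradingResults.without_results(e)
    print(results.has_catastrophic_failure())  # expected to be True for results built this way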