Non-containerized Grading¶
Otter supports programmatic or command-line grading of assignments without requiring the use of Docker as an intermediary. This functionality is designed to allow Otter to run in environments that do not support containerization, such as on a user’s JupyterHub account. If Docker is available, it is recommended that Otter Grade be used instead, as non-containerized grading is less secure.
To grade locally, Otter exposes the otter run command for the command line and the otter.api module for running Otter programmatically. The use of both is described in this section. Before using Otter Run, you should have already generated an autograder configuration zip file.
Otter Run works by creating a temporary grading directory using the tempfile library and replicating the autograder tree structure in that folder. It then runs the autograder there as normal. Note that Otter Run does not run environment setup files (e.g. setup.sh) or install requirements, so any requirements should be available in the environment being used for grading.
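As a rough sketch of this process (illustrative only, not Otter's actual implementation, and assuming a configuration file named autograder.zip in the current directory):

import tempfile
import zipfile

# create a temporary grading directory and replicate the autograder tree structure inside it
grading_dir = tempfile.mkdtemp()
with zipfile.ZipFile("autograder.zip") as zf:
    zf.extractall(grading_dir)

# the autograder is then run inside grading_dir as normal; setup.sh is not run and
# requirements are not installed, so they must already be available in this environment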
Grading from the Command Line¶
To grade a single submission from the command line, use the otter run utility. This has one required argument, the path to the submission to be graded, and will run Otter in a separate directory structure created using tempfile. Use the optional -a flag to specify the path to your configuration zip file if it is not at the default path ./autograder.zip. Otter Run will write a JSON file containing the results of grading to {output_path}/results.json (output_path can be configured with the -o flag and defaults to ./).
If I wanted to use Otter Run on hw00.ipynb, I would run

otter run hw00.ipynb

If my autograder configuration file were at ../autograder.zip, I would run

otter run -a ../autograder.zip hw00.ipynb

Either of the above will produce the results file at ./results.json.
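If I also wanted the results file written somewhere other than the current directory, I would use the -o flag; for example (assuming a directory named output already exists):

otter run -a ../autograder.zip -o output hw00.ipynb

This would produce the results file at output/results.json.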
For more information on the command-line interface for Otter Run, see the Otter CLI reference.
Grading Programmatically¶
Otter includes an API through which users can grade assignments from inside a Python session, encapsulated in the submodule otter.api. The main method of the API is otter.api.grade_submission, which takes in an autograder configuration file path and a submission path, grades the submission, and returns the GradingResults object produced during grading.
For example, to grade hw00.ipynb with an autograder configuration file in autograder.zip, I would run
from otter.api import grade_submission
grade_submission("autograder.zip", "hw00.ipynb")
grade_submission has an optional argument quiet which will suppress anything printed to the console by the grading process during execution when set to True (default False).
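As a short sketch building on the example above, the returned GradingResults object (documented below) can be inspected directly; total and possible are the points earned and points possible:

from otter.api import grade_submission

results = grade_submission("autograder.zip", "hw00.ipynb", quiet=True)
print(results.total, results.possible)  # points earned and points possible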
For more information about grading programmatically, see the otter.api reference.
Grading Results¶
This section describes the object that Otter uses to store and manage test case scores when grading.
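For illustration, a minimal usage sketch follows; it assumes results is a GradingResults object returned by otter.api.grade_submission and that the submission contains a test named q1 (a hypothetical name):

results.tests              # the names of all tests in these results
results.get_score("q1")    # the score earned on the test named q1
results.to_dict()          # the results as a dictionary
results.to_report_str()    # a report string summarizing the results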
- class otter.test_files.GradingResults(test_files)¶
  Stores and wrangles test result objects
  Initialize with a list of otter.test_files.abstract_test.TestFile subclass objects and this class will store the results as named tuples so that they can be accessed/manipulated easily. Also contains methods to put the results into a nice dict format or into the correct format for Gradescope.
  - Parameters
    results (list of TestFile) – the list of test file objects summarized in this grade
  - test_files¶
    the test files passed to the constructor
    - Type
      list of TestFile
  - results¶
    maps test names to GradingTestCaseResult named tuples containing the test result information
    - Type
      dict
  - output¶
    a string to include in the output field for Gradescope
    - Type
      str
  - whether all results should be hidden from the student on Gradescope
    - Type
      bool
  - total¶
    the total points earned by the submission
    - Type
      numeric
  - possible¶
    the total points possible based on the tests
    - Type
      numeric
  - tests¶
    list of test names according to the keys of results
    - Type
      list of str
  - clear_results()¶
    Empties the dictionary of results
  - get_plugin_data(plugin_name, default=None)¶
    Retrieves data for plugin plugin_name in the results
    This method uses dict.get to retrieve the data, so a KeyError is never raised if plugin_name is not found; rather, it returns None.
    - Parameters
      plugin_name (str) – the importable name of a plugin
      default (any, optional) – a default value to return if plugin_name is not found
    - Returns
      the data stored for plugin_name if found
    - Return type
      any
  - get_result(test_name)¶
    Returns the GradingTestCaseResult named tuple corresponding to the test with name test_name
    - Parameters
      test_name (str) – the name of the desired test
    - Returns
      the results of that test
    - Return type
      GradingTestCaseResult
  - get_score(test_name)¶
    Returns the score of a test tracked by these results
    - Parameters
      test_name (str) – the name of the test
    - Returns
      the score
    - Return type
      int or float
  - hide_everything()¶
    Indicates that all results should be hidden from students on Gradescope
  - set_output(output)¶
    Updates the output field of the results JSON with text relevant to the entire submission. See https://gradescope-autograders.readthedocs.io/en/latest/specs/ for more information.
    - Parameters
      output (str) – the output text
  - set_plugin_data(plugin_name, data)¶
    Stores plugin data for plugin plugin_name in the results. data must be picklable.
    - Parameters
      plugin_name (str) – the importable name of a plugin
      data (any) – the data to store; must be serializable with pickle
  - property test_cases¶
    The names of all test cases tracked in these grading results
  - to_dict()¶
    Converts these results into a dictionary, extending the fields of the named tuples in results into key-value pairs in a dict.
    - Returns
      the results in dictionary form
    - Return type
      dict
  - to_gradescope_dict(config)¶
    Converts these results into a dictionary formatted for Gradescope’s autograder. Requires a dictionary of configurations for the Gradescope assignment generated using Otter Generate.
    - Parameters
      config (dict) – the grading configurations
    - Returns
      the results formatted for Gradescope
    - Return type
      dict
  - to_report_str()¶
    Returns these results as a report string generated using the __repr__ of the TestFile class.
    - Returns
      the report
    - Return type
      str
  - update_result(test_name, **kwargs)¶
    Updates the values in the GradingTestCaseResult object stored in self.results[test_name] with the key-value pairs in kwargs.
    - Parameters
      test_name (str) – the name of the test
      kwargs – key-value pairs for updating the GradingTestCaseResult object
  - verify_against_log(log, ignore_hidden=True)¶
    Verifies these scores against the results stored in this log using the results returned by Log.get_results for comparison. Prints a message if the scores differ by more than the default tolerance of math.isclose. If ignore_hidden is True, hidden tests are ignored when verifying scores.
    - Parameters
      log (otter.check.logs.Log) – the log to verify against
      ignore_hidden (bool, optional) – whether to ignore hidden tests during verification
    - Returns
      whether a discrepancy was found
    - Return type
      bool
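As a brief example of the plugin data and Gradescope output helpers documented above (the plugin name and data here are hypothetical, and results is a GradingResults instance as before):

results.set_plugin_data("my_plugin", {"flagged": True})  # data must be picklable
results.get_plugin_data("my_plugin")                     # returns {"flagged": True}
results.get_plugin_data("other_plugin", default={})      # plugin not found, so the default is returned

results.set_output("Submission graded with Otter Run.")  # output text for the whole submission on Gradescope
results.hide_everything()                                # hide all results from the student on Gradescope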