Notebook Format

Otter’s notebook format groups prompts, solutions, and tests together into questions. Autograder tests are specified as cells in the notebook, and their output is used as the expected output of the autograder when generating tests. Each question has metadata, expressed in YAML in a raw metadata cell when the question is declared. Tests generated by Otter Assign follow the Otter-compliant OK format.

Note that the major difference between the v0 and v1 formats is the use of raw notebook cells as delimiters. Each boundary cell denotes the start or end of a block and contains valid YAML syntax. First-line comments are used in these raw YAML cells to denote what type of block is being entered or ended.

In the v1 format, Python and R notebooks follow the same structure. There are some features available in Python that are not available in R, and these are noted below, but otherwise the formats are the same.

Assignment Metadata

In addition to various command-line arguments discussed below, Otter Assign also allows you to specify various assignment generation arguments in an assignment metadata cell. These are very similar to the question metadata cells described in the next section. Assignment metadata, included by convention as the first cell of the notebook, places YAML-formatted configurations in a raw cell that begins with the comment # ASSIGNMENT CONFIG.

# ASSIGNMENT CONFIG
init_cell: false
export_cell: true
generate: true
# etc.

This cell is removed from both output notebooks. These configurations can be overwritten by their command line counterparts (if present). The options, their defaults, and descriptions are listed below. Any unspecified keys will keep their default values. For more information about many of these arguments, see Usage and Output. Any keys that map to sub-dictionaries (e.g. export_cell, generate) can have their behaviors turned off by changing their value to false. The only one that defaults to true (with the specified sub-key defaults) is export_cell.
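For example, to turn off the export cell and all of its sub-configurations:

# ASSIGNMENT CONFIG
export_cell: false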

requirements: null             # the path to a requirements.txt file or a list of packages
overwrite_requirements: false  # whether to overwrite Otter's default requirements.txt in Otter Generate
environment: null              # the path to a conda environment.yml file
run_tests: true                # whether to run the assignment tests against the autograder notebook
solutions_pdf: false           # whether to generate a PDF of the solutions notebook
template_pdf: false            # whether to generate a filtered Gradescope assignment template PDF
init_cell: true                # whether to include an Otter initialization cell in the output notebooks
check_all_cell: true           # whether to include an Otter check-all cell in the output notebooks
export_cell:                   # whether to include an Otter export cell in the output notebooks
  instructions: ''             # additional submission instructions to include in the export cell
  pdf: true                    # whether to include a PDF of the notebook in the generated zip file
  filtering: true              # whether the generated PDF should be filtered
  force_save: false            # whether to force-save the notebook with JavaScript (only works in classic notebook)
  run_tests: false             # whether to run student submissions against local tests during export
seed:                          # intercell seeding configurations
  variable: null               # the name of the seed variable
  autograder_value: null       # the autograder seed value
  student_value: null          # the student seed value
generate: false                # grading configurations to be passed to Otter Generate as an otter_config.json; if false, Otter Generate is disabled
save_environment: false        # whether to save the student's environment in the log
variables: {}                  # a mapping of variable names to type strings for serializing environments
ignore_modules: []             # a list of modules to ignore variables from during environment serialization
files: []                      # a list of other files to include in the output directories and autograder
autograder_files: []           # a list of other files only to include in the autograder
plugins: []                    # a list of plugin names and configurations
test_files: false              # whether to store tests in separate .py files rather than in the notebook metadata
colab: false                   # whether this assignment will be run on Google Colab

All paths specified in the configuration should be relative to the directory containing the master notebook. If, for example, you were running Otter Assign on the lab00.ipynb notebook in the structure below:

dev
├── lab
│   └── lab00
│       ├── data
│       │   └── data.csv
│       ├── lab00.ipynb
│       └── utils.py
└── requirements.txt

and you wanted your requirements from dev/requirements.txt to be included, your configuration would look something like this:

requirements: ../../requirements.txt
files:
    - data/data.csv
    - utils.py

The requirements key of the assignment config can also be formatted as a list of package names in lieu of a path to a requirements.txt file; for example:

requirements:
    - pandas
    - numpy
    - scipy

This structure is also compatible with the overwrite_requirements key.
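For instance, to replace Otter's default requirements entirely with your own list of packages, you might write:

requirements:
    - pandas
    - numpy
overwrite_requirements: true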

A note about Otter Generate: the generate key of the assignment metadata has two forms. If you just want to generate and require no additional arguments, set generate: true in the YAML and Otter Assign will simply run otter generate from the autograder directory (this will also include any files passed to files, whose paths should be relative to the directory containing the notebook, not to the directory of execution). If you require additional arguments, e.g. points or show_stdout, then set generate to a nested dictionary of these parameters and their values:

generate:
    seed: 42
    show_stdout: true
    show_hidden: true

You can also set the autograder up to automatically upload PDFs of student submissions to another Gradescope assignment by setting the necessary keys in the pdfs subkey of generate:

generate:
    token: ''
    course_id: 1234        # required
    assignment_id: 5678    # required
    filtering: true        # true is the default

If you don’t specify a token, you will be prompted for your username and password when you run Otter Assign; optionally, you can specify these via the command line with the --username and --password flags. You can also run the following to retrieve your token:

from otter.generate.token import APIClient
print(APIClient.get_token())

Any configurations in your generate key will be put into an otter_config.json and used when running Otter Generate.
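For instance, the generate configuration shown above would produce an otter_config.json along the lines of:

{
    "seed": 42,
    "show_stdout": true,
    "show_hidden": true
}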

If you are grading from the log or would like to store students’ environments in the log, use the save_environment key. If this key is set to true, Otter will serialize the student’s environment whenever a check is run, as described in Logging. To restrict the serialization of variables to specific names and types, use the variables key, which maps variable names to fully-qualified type strings. The ignore_modules key is used to ignore functions from specific modules. To turn on grading from the log on Gradescope, set generate[grade_from_log] to true. The configuration below turns on the serialization of environments, storing only variables of the name df that are pandas dataframes.

save_environment: true
variables:
    df: pandas.core.frame.DataFrame
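To additionally turn on grading from the log on Gradescope, as described above, the configuration might look like:

save_environment: true
variables:
    df: pandas.core.frame.DataFrame
generate:
    grade_from_log: true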

As an example, the following assignment metadata includes an export cell but no filtering, no init cell, and passes the configurations points and seed to Otter Generate via the otter_config.json.

# ASSIGNMENT CONFIG
export_cell:
    filtering: false
init_cell: false
generate:
    points: 3
    seed: 0

Intercell Seeding

Python assignments support intercell seeding, and there are two flavors of this. The first involves the use of a seed variable, and is configured in the assignment metadata; this allows you to use tools like np.random.default_rng instead of just np.random.seed. The second flavor involves comments in code cells, and is described below.

To use a seed variable, specify the name of the variable, the autograder seed value, and the student seed value in your assignment metadata.

# ASSIGNMENT CONFIG
seed:
    variable: rng_seed
    autograder_value: 42
    student_value: 713

With this type of seeding, you do not need to specify the seed inside the generate key; this is automatically taken care of by Otter Assign.

Then, in a cell of your notebook, define the seed variable with the autograder value. This value needs to be defined in a separate cell from any of its uses, and the variable name cannot be used for anything other than seeding RNGs. This is because the variable will be redefined in the student’s submission at the top of every cell. We recommend defining it in, for example, your imports cell.

import numpy as np
rng_seed = 42

To use the seed, just use the variable as normal:

rng = np.random.default_rng(rng_seed)
rvs = [rng.random() for _ in range(1000)] # SOLUTION

Or, in R:

set.seed(rng_seed)
runif(1000)

If you use this method of intercell seeding, the solutions notebook will contain the original value of the seed, but the student notebook will contain the student value:

# from the student notebook
import numpy as np
rng_seed = 713

When you do this, Otter Generate will be configured to overwrite the seed variable in each submission, allowing intercell seeding to function as normal.
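For reference, the seed configuration above corresponds to seed settings in the generated otter_config.json along the lines of the following sketch:

{
    "seed": 42,
    "seed_variable": "rng_seed"
}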

Remember that the student seed is different from the autograder seed, so any public tests cannot be deterministic; otherwise, they will fail on the student’s machine. Also note that only one seed is available, so each RNG must use the same seed.
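For example, using the rvs list from above, a public test cell should check only seed-independent properties:

len(rvs) == 1000

while a hidden test cell can depend on the exact values produced under the autograder seed (the expected value below is illustrative, corresponding to the first draw from np.random.default_rng(42)):

# HIDDEN
np.isclose(rvs[0], 0.7739560485559633)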

You can find more information about intercell seeding here.

Autograded Questions

Here is an example question in an Otter Assign-formatted notebook:
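
The layout below is a rough sketch; each labeled chunk is a separate notebook cell, and the question name and contents are illustrative.

(raw cell)
# BEGIN QUESTION
name: q1
points: 1

(Markdown cell)
**Question 1.** Write a function square that returns the square of its argument.

(code cell, solution)
def square(x):
    return x ** 2 # SOLUTION

(raw cell)
# BEGIN TESTS

(code cell, test)
square(2) == 4

(raw cell)
# END TESTS

(raw cell)
# END QUESTION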

Note the use of the delimiting raw cells and the placement of question metadata in the # BEGIN QUESTION cell. The question metadata can contain the following fields (in any order):

name: null        # (required) the name of the question
manual: false     # whether this is a manually-graded question
points: null      # how many points this question is worth; defaults to 1 internally
check_cell: true  # whether to include a check cell after this question (for autograded questions only)
export: false     # whether to force-include this question in the exported PDF

As an example, the question metadata below indicates an autograded question q1 that should be included in the filtered PDF.

# BEGIN QUESTION
name: q1
export: true

Solution Removal

Solution cells contain code formatted in such a way that the assign parser replaces lines or portions of lines with prespecified prompts. Otter uses the same solution replacement rules as jAssign. From the jAssign docs:

  • A line ending in # SOLUTION will be replaced by ... (or NULL # YOUR CODE HERE in R), properly indented. If that line is an assignment statement, then only the expression(s) after the = symbol (or the <- symbol in R) will be replaced.

  • A line ending in # SOLUTION NO PROMPT or # SEED will be removed.

  • A line # BEGIN SOLUTION or # BEGIN SOLUTION NO PROMPT must be paired with a later line # END SOLUTION. All lines in between are replaced with ... (or # YOUR CODE HERE in R) or removed completely in the case of NO PROMPT.

  • A line """ # BEGIN PROMPT must be paired with a later line """ # END PROMPT. The contents of this multiline string (excluding the # BEGIN PROMPT) appears in the student cell. Single or double quotes are allowed. Optionally, a semicolon can be used to suppress output: """; # END PROMPT

def square(x):
    y = x * x # SOLUTION NO PROMPT
    return y # SOLUTION

nine = square(3) # SOLUTION

would be presented to students as

def square(x):
    ...

nine = ...

And

pi = 3.14
if True:
    # BEGIN SOLUTION
    radius = 3
    area = radius * pi * pi
    # END SOLUTION
    print('A circle with radius', radius, 'has area', area)

def circumference(r):
    # BEGIN SOLUTION NO PROMPT
    return 2 * pi * r
    # END SOLUTION
    """ # BEGIN PROMPT
    # Next, define a circumference function.
    pass
    """; # END PROMPT

would be presented to students as

pi = 3.14
if True:
    ...
    print('A circle with radius', radius, 'has area', area)

def circumference(r):
    # Next, define a circumference function.
    pass

For R,

# BEGIN SOLUTION
square = function(x) {
    return(x ^ 2)
}
# END SOLUTION
x2 = square(25) # SOLUTION

would be presented to students as

# YOUR CODE HERE
x2 = NULL # YOUR CODE HERE

Test Cells

Any cells within the # BEGIN TESTS and # END TESTS boundary cells are considered test cells. Each test cell corresponds to a single test case. There are two types of tests: public and hidden tests. Tests are public by default but can be hidden by adding the # HIDDEN comment as the first line of the cell. A hidden test is not distributed to students, but is used for scoring their work.
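For example, a test cell whose first line is the # HIDDEN comment is withheld from students (the square function here is illustrative):

# HIDDEN
square(2.5) == 6.25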

Test cells also support test case-level metadata. If your test requires metadata beyond whether the test is hidden or not, configure the test by including a multiline string at the top of the cell that contains YAML-formatted test metadata. For example,

""" # BEGIN TEST CONFIG
points: 1
success_message: Good job!
""" # END TEST CONFIG
do_something()

The test metadata supports the following keys with the defaults specified below:

hidden: false          # whether the test is hidden
points: null           # the point value of the test
success_message: null  # a message to show to the student when the test case passes
failure_message: null  # a message to show to the student when the test case fails
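
For example, a hidden test with a point value and a custom failure message could be written as (the square function is illustrative):

""" # BEGIN TEST CONFIG
hidden: true
points: 2
failure_message: Make sure your function handles negative inputs.
""" # END TEST CONFIG
square(-2) == 4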

Because points can be specified at the question level and at the test case level, point values get resolved as follows:

  • If one or more test cases specify a point value and no point value is specified for the question, each test case with unspecified point values is assumed to be worth 0 points.

  • If one or more test cases specify a point value and a point value is specified for the question, each test case with unspecified point values is assumed to be equally weighted and together they are worth the question point value less the sum of specified point values. For example, in a 6-point question with 4 test cases where two test cases are each specified to be worth 2 points, each of the other test cases is worth \(\frac{6-(2 + 2)}{2} = 1\) point.

  • If no test cases specify a point value and a point value is specified for the question, each test case is assumed to be equally weighted and is assigned a point value of \(\frac{p}{n}\) where \(p\) is the number of points for the question and \(n\) is the number of test cases.

  • If no test cases specify a point value and no point value is specified for the question, the question is assumed to be worth 1 point and each test case is equally weighted.
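As a concrete sketch of the second rule, a question declared with

# BEGIN QUESTION
name: q2
points: 6

and four test cells, two of which begin with

""" # BEGIN TEST CONFIG
points: 2
""" # END TEST CONFIG

leaves the remaining two test cells each worth \(\frac{6 - (2 + 2)}{2} = 1\) point.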

Note: Currently, the conversion to OK format does not handle multi-line tests if any line but the last one generates output. So, if you want to print twice, make two separate test cells instead of a single cell with:

print(1)
print(2)

If a question has no solution cell provided, the question will either be removed from the output notebook entirely if it has only hidden tests or will be replaced with an unprompted Notebook.check cell that runs those tests. In either case, the test files are written, but this provides a way of defining additional test cases that do not have public versions. Note, however, that the lack of a Notebook.check cell for questions with only hidden tests means that the tests are run at the end of execution, and therefore are not robust to variable name collisions.

Intercell Seeding

The second flavor of intercell seeding involves writing a line that ends with # SEED; when Otter Assign runs, this line will be removed from the student version of the notebook. This allows instructors to write code with deterministic output, with which hidden tests can be generated.

For example, the first line of the cell below would be removed in the student version of the notebook.

np.random.seed(42) # SEED
rvs = [np.random.random() for _ in range(1000)] # SOLUTION

The same caveats apply for this type of seeding as above.

R Example

Here is an example autograded question for R:

Manually Graded Questions

Otter Assign also supports manually-graded questions using a similar specification to the one described above. To indicate a manually-graded question, set manual: true in the question metadata.

A manually-graded question can have an optional prompt block and a required solution block. If the solution has any code cells, they will have their syntax transformed by the solution removal rules listed above.

If there is a prompt for manually-graded questions, then this prompt is included unchanged in the output. If none is present, Otter Assign automatically adds a Markdown cell with the contents _Type your answer here, replacing this text._ if the solution block has any Markdown cells in it.

Here is an example of a manually-graded code question:
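
The layout below is a rough sketch following the block structure described above, with the prompt given directly as a Markdown cell; each labeled chunk is a separate notebook cell, and the question contents are illustrative.

(raw cell)
# BEGIN QUESTION
name: q5
manual: true

(Markdown cell)
**Question 5.** Plot y against x and comment on the trend.

(raw cell)
# BEGIN SOLUTION

(code cell)
plt.plot(x, y) # SOLUTION

(Markdown cell)
The data increase roughly linearly.

(raw cell)
# END SOLUTION

(raw cell)
# END QUESTION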

Manually graded questions are automatically enclosed in <!-- BEGIN QUESTION --> and <!-- END QUESTION --> tags by Otter Assign so that only these questions are exported to the PDF when filtering is turned on (the default). In the autograder notebook, this includes the question cell, prompt cell, and solution cell. In the student notebook, this includes only the question and prompt cells. The <!-- END QUESTION --> tag is automatically inserted at the top of the next cell if it is a Markdown cell or in a new Markdown cell before the next cell if it is not.

Ignoring Cells

For any cells in the master notebook that you don’t want included in either of the output notebooks, include a line at the top of the cell with the ## Ignore ## comment (case insensitive), just like with test cells. Note that this also works for Markdown cells with the same syntax.

## Ignore ##
print("This cell won't appear in the output.")

Student-Facing Plugins

Otter supports student-facing plugin events via the otter.Notebook.run_plugin method. To include a student-facing plugin call in the resulting versions of your master notebook, add a multiline plugin config string to a code cell of your choosing. The plugin config should be YAML-formatted as a multiline comment-delimited string, similar to the solution and prompt blocks above. The comments # BEGIN PLUGIN and # END PLUGIN should be used on the lines with the triple-quotes to delimit the YAML’s boundaries. There is one required configuration: the plugin name, which should be a fully-qualified importable string that evaluates to a plugin that inherits from otter.plugins.AbstractOtterPlugin.

There are two optional configurations: args and kwargs. args should be a list of additional arguments to pass to the plugin. These will be left unquoted as-is, so you can pass variables in the notebook to the plugin just by listing them. kwargs should be a dictionary that maps keyword argument names to values; these will also be added to the call in key=value format.

Here is an example of plugin replacement in Otter Assign:
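
The config below is a sketch assuming a hypothetical plugin mypackage.MyOtterPlugin and a notebook variable df:

""" # BEGIN PLUGIN
name: mypackage.MyOtterPlugin
args:
  - df
kwargs:
  some_option: true
""" # END PLUGIN

In the output notebooks, a config like this would be replaced with a call along the lines of grader.run_plugin("mypackage.MyOtterPlugin", df, some_option=True), assuming grader is the otter.Notebook instance.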

Note that student-facing plugins are not supported with R assignments.

Sample Notebook

You can find a sample Python notebook here.