Controller
Overview
HydroBOT takes hydrographs as input data, processes them through downstream modules, performs aggregation and analyses, and produces outputs. The ‘Controller’ component points to that input data and sends it off to the modules, along with arguments controlling how those modules run. It may also determine how ongoing processing, especially aggregation, occurs.
In typical use, the controller simply points to the input data and initiates processing steps according to user-supplied arguments. Examples are available for both the controller alone and the whole toolkit, as well as a stepthrough that walks through what the controller is doing.
In practice, it often makes the most sense to run the controller in one notebook or script and the aggregation in another. Running the EWR tool is slow relative to the other steps, and aggregations often need to change as projects progress, so re-running aggregations without re-running the modules is a common need. That said, the workflows section illustrates various workflows that run them together.
Whether run alone or as part of a combined workflow, the controller (specifically prep_run_save_ewrs()) auto-documents itself by saving metadata yaml files recording all arguments used. These parameter files not only document a run but also allow replication, since they are fully-specified parameter files accepted by [run_hydrobot_params()].
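For a sense of what such a file contains, a saved parameter file might look something like the sketch below. The field names here are purely illustrative, not HydroBOT's actual schema; the real file records whatever arguments the run used.

```yaml
# Hypothetical auto-saved parameter file (illustrative field names only).
# A file like this both documents the run and can be fed back in to replicate it.
scenario_dir: hydrographs/base      # where the input hydrographs were read from
output_dir: hydrobot_output         # where module outputs were written
module: EWR                         # which module was run
extra_args:
  datesuffix: false
```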
Scenarios need to have unique names. In most cases, those names are therefore extracted from the file paths, which are guaranteed to be unique. This can get a bit messy, but it is the only consistent way to ensure uniqueness; cleanup steps can be incorporated into the analysis stages. See additional detail.
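As a sketch of the idea (base R only, not HydroBOT's internal implementation), unique scenario names can be derived from the necessarily unique file paths, with a cleanup step applied later for nicer labels:

```r
# Illustrative only: derive scenario names from unique file paths,
# then clean them up for presentation.
paths <- c(
  "hydrographs/base/scenario_base.csv",
  "hydrographs/down4/scenario_down4.csv"
)

# Strip directories and extensions to get raw (unique) scenario names
scene_names <- tools::file_path_sans_ext(basename(paths))

# Example cleanup step in analysis: drop a shared prefix for nicer labels
clean_names <- sub("^scenario_", "", scene_names)
```

Here `scene_names` is `c("scenario_base", "scenario_down4")` and `clean_names` is `c("base", "down4")`; the messiness the text refers to is exactly this kind of path-dependent naming, which cleanup steps can tidy without touching the module outputs.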