Rhodium is a Python library for Many-Objective Robust Decision Making (MORDM), part of Project Platypus. Multiple past posts have addressed the use of Rhodium on various applications. This post mirrors this older post by Julie, which demonstrated the MORDM functionality of Rhodium using a different model; I'll be doing the same using the lake problem, a case study we often use for training purposes. In this post I will use Rhodium to replicate the analysis in the paper and discuss the importance of the various steps in the workflow, why they are performed, and what they mean for MORDM. As this post will be quite long, I will emphasize the discussion of results and their significance, and will be using previously written code found in the Rhodium repository (slightly adapted) and in Julie's lake problem repository. In general, this post will not re-invent the wheel in terms of scripts, but will be an overview of the why. For a more detailed description of the model, the reader is directed to the paper linked, Dave's recent post, and the accompanying PowerPoint presentation for this training.
The system presents a hypothetical town that seeks to balance the economic benefits of activities resulting in phosphorus (P) emissions with the ecological benefits of a healthy lake. The lake naturally receives inflows of phosphorus from the surrounding area and recycles phosphorus within its system. The concentration of P in the lake at every timestep is described by the following equation:

X_{t+1} = X_t + a_t + Y_t + X_t^q / (1 + X_t^q) - b X_t

where a_t represents the anthropogenic phosphorus inputs from the town, Y_t ~ LN(mu, sigma) are the natural P inputs to the lake, q is a parameter controlling the recycling rate of P from the sediment, b is a parameter controlling the rate at which P is removed from the system, and t is the time index in years. Looking at the intersections of the P fluxes into and out of the lake, one can see that there are three equilibria for this system: two stable and attractive, and one unstable, which acts as a tipping point for the system.
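To make the three equilibria concrete, here is a small sketch that locates the fixed points of the deterministic lake dynamics (no inflows) by scanning for sign changes of the net P flux. The parameter values b = 0.42 and q = 2 and the helper names are illustrative, not taken from the post's scripts:

```python
# Sketch: locate equilibria of the deterministic lake dynamics,
# i.e., roots of f(X) = X^q / (1 + X^q) - b*X (no inflows).
# Parameter values b=0.42, q=2 are illustrative.
def net_flux(X, b=0.42, q=2.0):
    return X**q / (1.0 + X**q) - b * X

def find_equilibria(b=0.42, q=2.0, x_max=3.0, n=3000):
    roots = [0.0]  # X = 0 is always an equilibrium
    xs = [x_max * i / n for i in range(1, n + 1)]
    for x0, x1 in zip(xs, xs[1:]):
        if net_flux(x0, b, q) * net_flux(x1, b, q) < 0:
            # refine the bracketed sign change by bisection
            lo, hi = x0, x1
            for _ in range(60):
                mid = 0.5 * (lo + hi)
                if net_flux(lo, b, q) * net_flux(mid, b, q) <= 0:
                    hi = mid
                else:
                    lo = mid
            roots.append(0.5 * (lo + hi))
    return roots

eq = find_equilibria()  # three equilibria: 0, ~0.545 (the tipping point), ~1.84
```

The middle root is the unstable tipping point: below it the lake recovers toward the low-P equilibrium, above it the system is attracted to the eutrophic state.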
We're trying to decide on values of a_t that maximize our economic benefits, but these also increase P in the lake. If we increase P too much, we cross a tipping point of no return. Even though this threshold is vital to the design of the best possible management policy, its value is hard to know exactly because:
– It depends on the stochastic natural P inflows
– The recycling of P from the sediment is non-linear
– The system parameters determining this point might be (deeply) uncertain
The management of this hypothetical system is concerned with four objectives:
To search for and identify solutions, we will be using two strategies:
Intertemporal optimization: each candidate solution is a set of T decision variables a_t, representing the anthropogenic discharges at each timestep t. With a 100-year planning horizon, this strategy necessitates 100 decision variables.
Direct policy search (DPS): each candidate solution is a control rule mapping the current P concentration, X_t, to a decision a_t. This strategy necessitates 6 decision variables.
Please refer to the paper for details on the formulations of the objectives and strategies.
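To give a flavor of what a DPS control rule looks like, here is a minimal sketch assuming a cubic radial basis function parameterization: two RBFs, each defined by a center, radius, and weight (6 decision variables total), with the resulting release clipped to allowable bounds. The functional form, bounds, and parameter values below are assumptions for illustration; see the paper for the exact formulation:

```python
# Hypothetical sketch of a DPS control rule: 2 cubic radial basis
# functions (center c, radius r, weight w => 6 decision variables),
# mapping the lake P concentration X_t to a release decision a_t.
def dps_policy(X, centers, radii, weights, a_min=0.01, a_max=0.1):
    a = sum(w * abs((X - c) / r) ** 3
            for c, r, w in zip(centers, radii, weights))
    # clip the release to the allowable range
    return min(max(a, a_min), a_max)

# Example rule: release more at low concentrations, less as P rises
a_low = dps_policy(0.1, centers=[1.0, 0.0], radii=[0.5, 1.0], weights=[0.05, 0.0])
a_high = dps_policy(0.5, centers=[1.0, 0.0], radii=[0.5, 1.0], weights=[0.05, 0.0])
```

Because the rule is a function of the observed state rather than a fixed schedule, it can adapt its releases to whatever P concentration materializes.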
In general, to set up any Rhodium model, you’d need to follow this template:
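A minimal sketch of that template, using a simplified deterministic stand-in for the lake simulation (the full scripts add stochastic natural inflows and all four objectives). The Rhodium wiring is shown in comments since it requires the rhodium package; the function body and parameter values here are illustrative:

```python
# Sketch of the Rhodium setup template. The simulation below is a
# simplified, deterministic stand-in for the full lake model.
def lake_problem(pollution_limit, b=0.42, q=2.0, delta=0.98, crit_P=0.5):
    X, max_P, utility, years_ok = 0.0, 0.0, 0.0, 0
    for t, a in enumerate(pollution_limit):
        X = X + a + X**q / (1.0 + X**q) - b * X
        max_P = max(max_P, X)
        utility += a * delta**t            # discounted economic benefit
        years_ok += 1 if X < crit_P else 0
    reliability = years_ok / len(pollution_limit)
    return max_P, utility, reliability

# Declarative Rhodium wiring (requires the rhodium package):
# model = Model(lake_problem)
# model.parameters = [Parameter("pollution_limit"), Parameter("b"),
#                     Parameter("q"), Parameter("delta")]
# model.responses = [Response("max_P", Response.MINIMIZE),
#                    Response("utility", Response.MAXIMIZE),
#                    Response("reliability", Response.MAXIMIZE)]
# model.levers = [RealLever("pollution_limit", 0.0, 0.1, length=100)]
# output = optimize(model, "NSGAII", 10000)

# Evaluate a constant low-release policy directly
max_P, utility, reliability = lake_problem([0.02] * 100)
```

Note that the model function itself is plain Python; Rhodium only needs the declarations around it to know what to vary and what to record.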
The scripts for the two formulations can be found here and here. In general, Rhodium is written in a declarative manner: after you define what everything is (model, parameters, levers, etc.), you only need to describe the operation to be performed (e.g., "optimize"), without needing to specify the details of how it should be done.
Why this matters: This library is set up in a way that allows rapid and intuitive application of the MORDM framework. This is especially beneficial in situations where alternative problem formulations are tested (as is the case here with the two search strategies).
To explore the two sets of candidate solutions using the tools provided by Rhodium, we need to merge them into a single dataset. We can do so by adding a key to each solution indicating the strategy that identified it:
for i in range(len(output)):
    output[i]['strategy'] = 1
    dps_output[i]['strategy'] = 0

merged = DataSet(output + dps_output)
We can visualize the two sets using J3 (see my past post on it here):
from j3 import J3

J3(merged.as_dataframe(list(model.responses.keys()) + ['strategy']))
Why this matters: The DPS strategy was able to identify more non-dominated solutions than the intertemporal strategy, and many of the DPS solutions outperform the intertemporal solutions in every objective. This is important because we are trying to identify the best possible management policies for this system; had we only used the intertemporal strategy, we could have missed multiple viable solutions that perform better.
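The dominance relationship underlying this comparison can be made concrete with a small sketch: a solution dominates another if it is at least as good in every objective and strictly better in at least one. The check below assumes all objectives are to be maximized, and the toy objective tuples are purely illustrative:

```python
# Sketch: Pareto dominance check, assuming all objectives are maximized.
def dominates(sol_a, sol_b):
    at_least_as_good = all(a >= b for a, b in zip(sol_a, sol_b))
    strictly_better = any(a > b for a, b in zip(sol_a, sol_b))
    return at_least_as_good and strictly_better

# Toy example: a "DPS-like" and an "intertemporal-like" solution
dps_sol = (0.9, 0.8, 0.99)           # (benefits, inertia, reliability)
intertemporal_sol = (0.7, 0.8, 0.95)
```

In this toy case the first solution dominates the second, which is the situation described above: nothing is lost by discarding the dominated solution.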
Now that we have a set of candidate management policies, we would like to see what makes their performance different and how they affect the system when applied. Using J3, I can interactively explore the sets using the visualizations provided, as well as the tables reporting the performance of each solution. In the example figure below, I am highlighting the DPS solution with the highest reliability.
Using the script found here, I can plot two DPS policies: the one producing the highest reliability and the one resulting in the highest profits. What I am trying to do here is investigate more closely how the two policies prescribe action on the system and how they achieve the performance that they do.
Both solutions map low P concentrations, X_t, to higher P releases, a_t, and prescribe decreased releases as the concentration increases. The most reliable policy is more conservative, starting to decrease its emissions earlier. Both policies start increasing discharges again beyond some lake P concentration, with the benefits-maximizing solution doing so at lower values. This appears to happen as P levels approach the tipping point, beyond which it is best to simply maximize economic benefits.
Why this matters:
Understanding how the candidate policies act on a system to achieve their performance gives us further insight into how the system behaves and reacts to our actions. The candidate policies are not aware of where the critical P threshold is, but appear to "discover" it; once it is crossed, they prescribe increased emissions to maximize profits.
Finally, we’d like to investigate how the candidate solutions perform under uncertainty. To do so using Rhodium, we need to define which parameters are exogenous uncertainties, including their distributions and ranges, sample them and re-evaluate either specific solutions or all of them in these sampled worlds. The script is simple and intuitive:
model.uncertainties = [UniformUncertainty("b", 0.1, 0.45),
                       UniformUncertainty("q", 2.0, 4.5),
                       UniformUncertainty("mean", 0.01, 0.05),
                       UniformUncertainty("stdev", 0.001, 0.005),
                       UniformUncertainty("delta", 0.93, 0.99)]

SOWs = sample_lhs(model, 1000)
policy = output.find_max("reliability")
scenario_discovery = evaluate(model, update(SOWs, policy))
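Under the hood, sample_lhs draws a Latin hypercube sample: each parameter's range is divided into N equal-probability strata, one value is drawn per stratum, and the strata are shuffled independently per parameter so the sample covers every range evenly. A minimal sketch, assuming uniform ranges (the helper name and seed are illustrative, and the ranges mirror two of the uncertainties above):

```python
import random

# Sketch of Latin hypercube sampling over uniform parameter ranges.
def sample_lhs_sketch(ranges, n, seed=42):
    rng = random.Random(seed)
    samples = [{} for _ in range(n)]
    for name, (lo, hi) in ranges.items():
        # one value per stratum, strata shuffled per parameter
        strata = list(range(n))
        rng.shuffle(strata)
        for i, s in enumerate(strata):
            u = (s + rng.random()) / n  # uniform draw within stratum s
            samples[i][name] = lo + u * (hi - lo)
    return samples

SOWs = sample_lhs_sketch({"b": (0.1, 0.45), "q": (2.0, 4.5)}, 1000)
```

Compared to plain random sampling, this stratification guarantees that even 1000 states of the world spread across the full width of every uncertain range.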
To perform scenario discovery, Rhodium supports the PRIM and CART algorithms. We first need to classify the output into "success" and "failure":
classification = scenario_discovery.apply(
    "'Reliable' if reliability >= 0.95 and utility >= 0.2 else 'Unreliable'")

p = Prim(scenario_discovery, classification,
         include=model.uncertainties.keys(), coi="Reliable")
box = p.find_box()
fig = box.show_tradeoff()

c = Cart(scenario_discovery, classification,
         include=model.uncertainties.keys(), min_samples_leaf=50)
c.show_tree()
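The apply call evaluates that expression against each row of results; conceptually it reduces to a list comprehension like the following sketch (the toy rows here are illustrative, not actual model output):

```python
# Sketch of the success/failure classification step.
rows = [{"reliability": 0.99, "utility": 0.25},  # meets both criteria
        {"reliability": 0.99, "utility": 0.10},  # fails the utility criterion
        {"reliability": 0.80, "utility": 0.30}]  # fails the reliability criterion

classification = ["Reliable" if r["reliability"] >= 0.95 and r["utility"] >= 0.2
                  else "Unreliable" for r in rows]
```

PRIM and CART then search the uncertainty space for compact boxes (or tree splits) that separate the "Reliable" rows from the "Unreliable" ones.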
Why this matters :
Scenario discovery allows us to identify regions in the parametric space that cause our policies to fail. In this particular case, it appears that a non-linear combination of b and q values leads to failure in meeting the performance criteria. Both parameters shift the input and output fluxes of P, moving the tipping point to lower levels.
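The interaction between b and q can be seen directly in the lake dynamics: the tipping point is the unstable root of the net flux X^q/(1 + X^q) - bX, so changing either parameter moves it. The sketch below (a hypothetical helper, with illustrative parameter values) locates that root by bisection and shows that lowering b, or lowering q within its sampled range, pushes the tipping point to lower P levels:

```python
# Sketch: locate the tipping point (unstable equilibrium) of the
# deterministic lake dynamics by bisection on the net P flux.
# Assumes a single negative-to-positive sign change on (lo, hi).
def tipping_point(b, q, lo=0.05, hi=1.0):
    f = lambda x: x**q / (1.0 + x**q) - b * x
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# e.g., tipping_point(0.42, 2.0) ~ 0.54, tipping_point(0.30, 2.0) ~ 0.33
```

A state of the world with smaller b or q therefore has less room for P accumulation before the irreversible regime shift, which is consistent with the failure region found by PRIM and CART.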