Welcome to Hadar!


Hadar is an adequacy Python library for deterministic and stochastic computation.

You are in the technical documentation.

  • If you want to discover Hadar and the project, please go to https://www.hadar-simulator.org for an overview
  • If you want to start using Hadar, begin with the tutorials
  • If you want to understand the Hadar engine, see Architecture
  • If you want to look at a method’s or object’s behavior, search inside References
  • If you want to help us code Hadar, please read the Dev Guide first.
  • If you want to see the mathematical models used in Hadar, go to Mathematics.

Overview

Welcome to the Hadar Architecture Documentation.

Hadar’s purpose is to be an adequacy library for everyone.

  1. The word everyone is important: Hadar must be easy enough that anyone can use it.
  2. And Hadar must be flexible enough that any business can use it or customize it.

Why these goals?

We designed Hadar in the same spirit as python libraries like numpy or scipy, and even more like scikit-learn. Before scikit-learn, people who wanted to do machine learning had to have a strong mathematical background to develop their own code. Some ready-made code existed, but it was neither easy to use nor flexible.

Scikit-learn unleashed the power of machine learning by abstracting complex algorithms behind a very straightforward API. It was designed as a toolbox covering the full machine learning workflow, where users can assemble scikit-learn components or build their own.

Hadar wants to be the scikit-learn of adequacy. Hadar has to be easy to use and flexible; translated into architecture terms, that means a high abstraction level and independent modules.

Independent modules

Users have the choice: use only Hadar components, assembling them into a full solution to generate, solve and analyze an adequacy study, or build some parts themselves.

To meet this constraint, we split Hadar into four main modules which can be used together or separately:

  • workflow: module used to generate study data. Hadar handles deterministic computation as well as stochastic. For stochastic computation, the user needs to generate many scenarios. Workflow helps by providing a highly customizable pipeline framework to transform and generate data.
  • optimizer: the most complex and mathematical module. The user uses it to describe the adequacy study to solve. No need to understand the mathematics: Hadar takes the given input data and translates it into a linear optimization problem before calling a solver.
  • analyzer: input data given to the optimizer and output data with study results can be heavy to analyze. So that users don’t each have to build their own toolbox, we develop the most used features once for everyone.
  • viewer: analyzer output is numpy matrices or pandas DataFrames, which is great but not enough to analyze results. Viewer uses the analyzer features and API to generate graphics from study data.

As said, these modules can be used together to handle the complete adequacy study lifecycle, or separately.

TODO graph architecture module

High Abstraction API

Each of the above modules is like a tiny independent library. Therefore each module has a high level API. High abstraction is a bit fuzzy to define and benchmark. For us, high abstraction means the user doesn’t need to know the mathematical or technical details to use the library.

Scikit-learn is the best example of a high abstraction level API. For example, here is how to run a complete SVM regression:

from sklearn.svm import SVR
svm = SVR()
svm.fit(X_train, y_train)
y_pred = svm.predict(X_test)

How many people using this feature know that scikit-learn projects the data into a higher-dimensional space to find a linear regression inside it? And that, to accelerate computation, it uses a mathematical device called the kernel trick, possible because the problem respects strict requirements? Perhaps just a few, and that’s all the beauty of a high level API: it hides the background gears.

Hadar tries to keep this level of abstraction. Look at the Get Started example:

import hadar as hd

study = hd.Study(horizon=3)\
    .network()\
        .node('a')\
            .consumption(cost=10 ** 6, quantity=[20, 20, 20], name='load')\
            .production(cost=10, quantity=[30, 20, 10], name='prod')\
        .node('b')\
            .consumption(cost=10 ** 6, quantity=[20, 20, 20], name='load')\
            .production(cost=10, quantity=[10, 20, 30], name='prod')\
        .link(src='a', dest='b', quantity=[10, 10, 10], cost=2)\
        .link(src='b', dest='a', quantity=[10, 10, 10], cost=2)\
    .build()

optim = hd.LPOptimizer()
res = optim.solve(study)

Create a study as you would draw it on paper: place your nodes, attach productions, consumptions and links, and run the optimizer.

The Optimizer, Analyzer and Viewer parts are built around the same API, called in the code the Fluent API Selector. Each part has its own flavour.

Go Next

Now that the goals are fixed, we can go deeper into the documentation of each module. The whole architecture focuses on two points: High Abstraction and Independent Modules. You can also read the best practices guide to understand more of the development choices made in Hadar.

Let’s start with the code explanation.

Workflow

What is a stochastic study?

Workflow is the preprocessing module for Hadar. It’s a toolbox to create pipelines to transform data for optimizer.

When you want to simulate network adequacy, you can perform a deterministic computation. That means you believe there won’t be too much random behavior in the future. If you compute adequacy for the next hour or day, that’s a fair hypothesis. But if you simulate a network for the next week, month or year, it sounds dubious.

Are you sure the wind will blow next week, or the sun will shine? If not, your wind or solar production could change. Can you guarantee that no failure will occur on your network next month or next year?

Of course, we cannot predict the future with such precision. That’s why we use stochastic computation. Stochastic means there is random behavior in the physics we want to simulate. A simulation is quite useless if it yields a single result.

The ideal solution would be a God function which tells you, for each input variation (solar production, lines, consumptions), the adequacy result. Hadar would then just have to analyze the function, its derivatives, min, max, etc. to predict the future. But this God function doesn’t exist; we just have an algorithm which tells us adequacy for one fixed set of input data.

That’s why we use the Monte Carlo method. Monte Carlo runs many scenarios to analyze many different behaviors: a scenario with more consumption in cities, less solar production, less coal production, or one line removed due to a crash. In effect, we recreate the God function by sampling it.

_images/monte-carlo.png

Workflow helps the user generate these scenarios and sample them to create a stochastic study.

The main issue when helping people generate their scenarios is that there are as many generating processes as there are users. Therefore workflow is built upon a Stage and Pipeline architecture.

Stages, Pipelines & Plug

A Stage is an atomic process applied to data. In workflow, data is a pandas DataFrame. The index is time. The first column level is for the scenario, the second is for the data type (it could be anything, like mean, max, sigma, …). The DataFrame is represented below:

      scn 1             scn n            …
t     mean  max  min    mean  max  min
0     10    20   2      15    22   8
1     12    20   2      14    22   8
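
A minimal sketch of building such a timeline with a pandas MultiIndex (column labels and sizes are illustrative):

import numpy as np
import pandas as pd

# First column level: scenario id; second level: data type
columns = pd.MultiIndex.from_product([[1, 2], ['mean', 'max', 'min']])
timeline = pd.DataFrame(np.random.rand(8760, 6), columns=columns)

timeline[1]['mean']  # the 'mean' series of scenario 1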

A stage performs a computation on this DataFrame. As you may guess, stages can be linked together to create a pipeline. Hadar ships its own very generic stages, and each user can build their own stages and pipelines.

For example, say you have many coal plants. Each plant has 10 generators of 100 MW, so a coal plant produces up to 1,000 MW of power. You know that sometimes generators crash or need a shutdown for maintenance. With Hadar you can create a pipeline to generate these fault scenarios.

# In this example, one timestep = one hour
import hadar as hd
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# coal production over 8 weeks with hourly step
coal = pd.DataFrame({'quantity': np.ones(8 * 168) * 1000})

# Copy scenarios ten times
copy = hd.RepeatScenario(n=10)

# Apply a random fault to each scenario: each fault drops 100 MW; there is a 0.1% chance of failure each hour.
# A failure lasts at least a whole day (24 steps) and at most a week (168 steps).
fault = hd.Fault(loss=100, occur_freq=0.001, downtime_min=24, downtime_max=168)

pipe = copy + fault
out = pipe.compute(coal)

out.plot()
plt.show()

Output:

_images/fault.png

Create your own Stage

RepeatScenario, Fault and all the others are built upon the Stage abstract class. A Stage is specified by its Plug (we will see this later) and a _process_timeline(self, timeline: pd.DataFrame) -> pd.DataFrame method to implement. The timeline variable inside the method is the data passed through the pipeline to transform.

For example, say you need to multiply by 2 inside your pipeline. You can create your stage like this:

class Twice(Stage):
    def __init__(self):
        Stage.__init__(self, FreePlug())

    def _process_timeline(self, timelines: pd.DataFrame) -> pd.DataFrame:
        return timelines * 2

Implementing Stage will work every time. Often, though, you want to apply a function independently to each scenario. You can of course handle this mechanism yourself: split the current timeline, apply the method, and rebuild it at the end. Or use FocusStage, which is the same thing already coded. In this case, you need to inherit from FocusStage and implement the _process_scenarios(self, n_scn: int, scenario: pd.DataFrame) -> pd.DataFrame method.

For example, say you have thousands of scenarios and your stage has to generate gaussian series according to a given mean and sigma.

class Gaussian(FocusStage):
    def __init__(self):
        FocusStage.__init__(self, plug=RestrictedPlug(input=['mean', 'sigma'], output=['gaussian']))

    def _process_scenarios(self, n_scn: int, scenario: pd.DataFrame) -> pd.DataFrame:
        scenario['gaussian'] = np.random.randn(scenario.shape[0])
        scenario['gaussian'] *= scenario['sigma']
        scenario['gaussian'] += scenario['mean']

        return scenario.drop(['mean', 'sigma'], axis=1)

What’s a Plug?

You have already seen FreePlug and RestrictedPlug. What are they?

Stages are linked together to build a pipeline. Some stages accept anything as input, like Twice, but others need specific data, like Gaussian. How do we know that stages can be linked together and that the data given at the beginning of the pipeline is correct for the whole pipeline?

The first solution is to say: we don’t care. During execution, if data is missing, an error will be raised, and that’s enough. Indeed… that works, but if the pipeline job is heavy, takes hours, and fails just because of a misspelled column name, it’s ugly.

The Plug object describes linkability constraints for Stages and Pipelines. Like Stages, Plugs can be added together; in this case, constraints are merged. You can use FreePlug to say that a Stage is unconstrained and doesn’t expect any particular column to run. Or use RestrictedPlug(inputs=[], outputs=[]) to specify mandatory input columns and the new columns generated.

Plug arithmetic rules are described below (\(\emptyset\) = FreePlug)

\[\begin{split}\begin{array}{rcl} \emptyset & + & \emptyset & = & \emptyset \\ [a \rightarrow \alpha ] & + & \emptyset & = & [a \rightarrow \alpha ] \\ [a \rightarrow \alpha ] & + & [\alpha \rightarrow A]& = & [a \rightarrow A] \\ [a \rightarrow \alpha, \beta ] & + & [\alpha \rightarrow A]& = & [a \rightarrow A, \beta] \\ \end{array}\end{split}\]
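As a minimal sketch, reusing the Twice and Gaussian stages defined above, plugs let a pipeline check its input before any heavy computation starts:

import pandas as pd

# FreePlug + RestrictedPlug merge: the pipeline now requires 'mean' and 'sigma'
pipe = Twice() + Gaussian()

i = pd.DataFrame({(0, 'mean'): [1.0, 2.0], (0, 'sigma'): [0.1, 0.2]})
pipe.assert_computable(i)  # fails fast if mandatory columns are missing
out = pipe.compute(i)      # output holds only the generated 'gaussian' column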

Shuffler

Users can create as many pipelines as they want. At the end, they could have some pipelines with input data, or directly pre-generated input data. They need to sample this dataset to create a study. For example, a user could have 10 coal generations, 25 solar, 10 consumptions, and need to create a study with 100 scenarios.

Of course they can develop their own sampling algorithm, but they can also use the Shuffler. Indeed, Shuffler does a bit more than just sampling:

  1. It is like a sink where the user puts pipelines or raw data. Shuffler homogenizes them to create scenarios. Behind the code, we use the Timeline and PipelineTimeline classes to homogenize data coming from raw arrays or pipeline outputs.
  2. It schedules pipeline computation. If Shuffler is used with pipelines, it distributes pipeline runs over the computer’s cores. A nice bonus!
  3. It samples data to create study scenarios.
_images/shuffler.png

Below is an example of how to use Shuffler:

import numpy as np
import pandas as pd
from hadar.workflow.pipeline import RepeatScenario, ToShuffler
from hadar.workflow.shuffler import Shuffler

shuffler = Shuffler()
# Add raw data as a numpy array
shuffler.add_data(name='solar', data=np.array([[1, 2, 3], [5, 6, 7]]))

# Add pipeline and its input data
i = pd.DataFrame({(0, 'a'): [3, 4, 5], (1, 'a'): [7, 8, 9]})
pipe = RepeatScenario(2) + ToShuffler('a')
shuffler.add_pipeline(name='load', data=i, pipeline=pipe)

# Shuffle to sample 3 scenarios
res = shuffler.shuffle(3)

# Get results according to the names given
solar = res['solar']
load = res['load']

Optimizer

Optimizer is the heart of Hadar. Behind it, there are:

  1. An input object called Study and an output object called Result. These two objects encapsulate all the data needed to compute adequacy.
  2. Many optimizers. The user can choose which one will solve the study.

Therefore Optimizer is an abstract class built on the Strategy pattern. Users can select an optimizer or create their own by implementing Optimizer.solve(study: Study) -> Result.
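
For instance, here is a hypothetical sketch of a user-defined optimizer following this Strategy pattern; the TimedOptimizer name and its timing logic are purely illustrative, not part of Hadar:

import time
import hadar as hd

class TimedOptimizer:
    """Decorator-style optimizer which logs solve duration (illustrative only)."""

    def __init__(self, inner):
        self.inner = inner

    def solve(self, study: hd.Study):
        start = time.time()
        res = self.inner.solve(study)
        print('solved in %.2f s' % (time.time() - start))
        return res

optim = TimedOptimizer(hd.LPOptimizer())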

Today, two optimizers are available: LPOptimizer and RemoteOptimizer.

_images/ulm-optimizer.png

RemoteOptimizer

Let’s start with the simplest. RemoteOptimizer is a client to a Hadar server. As you may know, Hadar exists as a python library, but there is also a tiny project that packages hadar inside a web server. You can find more details on this server in its repository.

The client implements the Optimizer interface. That way, to deploy computation on a data-center, only one line of code changes.

import hadar as hd
# Normal : optim = hd.LPOptimizer()
optim = hd.RemoteOptimizer(host='example.com')
res = optim.solve(study=study)

LPOptimizer

Before reading this chapter, we kindly advise you to read the Linear Model section.

LPOptimizer translates data into an optimization problem. Hadar’s algorithms focus only on modeling the problem and use or-tools to solve it.

To achieve this modeling goal, LPOptimizer is designed to receive a Study object and convert its data into or-tools Variables. The Variables are then placed inside objective and constraint equations, the equations are solved by or-tools, and finally the Variables are converted into a Result object.

Let’s analyze that in detail.

InputMapper

If you look at the code, you will see three domains: one at hadar.optimizer.input, one at hadar.optimizer.output and another at hadar.optimizer.lp.domain. If you look carefully, they seem the same: Consumption and OutputConsumption on one hand, LPConsumption on the other. The only change is a new attribute in the LP* classes called variable. Variables are the unknowns of the problem; they are what or-tools has to find, i.e. the power used for each production, the capacity used for each link and the loss of load for each consumption.

Therefore, InputMapper’s role is just to create new objects with or-tools Variables initialized, as we can see in this code snippet:

# hadar.optimizer.lp.mapper.InputMapper.get_var
LPLink(dest=l.dest,
       cost=float(l.cost),
       src=name,
       quantity=l.quantity[scn, t],
       variable=self.solver.NumVar(0, float(l.quantity[scn, t]),
          'link on {} to {} at t={} for scn={}'.format(name, l.dest, t, scn)
       )
 )

OutputMapper

At the end, OutputMapper does the reverse. LP* objects hold computed Variables; we need to extract the result found by or-tools into a Result object.

The mapping of LPProduction and LPLink is straightforward. Let’s look at the LPConsumption code:

self.nodes[name].consumptions[i].quantity[scn, t] = \
    vars.consumptions[i].quantity - vars.consumptions[i].variable.solution_value()

The line seems strange due to the complex indexing. First we select the right node name, then the right consumption i, then the right scenario scn and finally the right timestep t. Rewritten without indices, this line means:

\[Cons_{final} = Cons_{given} - Cons_{var}\]

Keep in mind that \(Cons_{var}\) is the loss of load, so we need to subtract it from the initial consumption to get the consumption actually sustained. For example, if 20 MW were asked and the loss-of-load variable is 5 MW, the sustained consumption is 15 MW.

Modeler

Hadar has to build the optimization problem. These algorithms are encapsulated inside several builders.

ObjectiveBuilder receives nodes through its add_node method. Then, for all productions, consumptions and links, it adds \(variable * cost\) to the objective equation.

StorageBuilder builds constraints for each storage element. These constraints enforce strict volume integrity (i.e. volume is the previous volume + input - output).

ConverterBuilder builds ratio constraints between each converter input and its output.

AdequacyBuilder is a bit trickier. For each node, it creates a new adequacy constraint equation (c.f. Linear Model). Coefficients here are 1 or -1 depending on whether the power flows in or out. Have you seen these lines?

self.constraints[(t, link.src)].SetCoefficient(link.variable, -1)  # Export from src
self.importations[(t, link.src, link.dest)] = link.variable  # Import to dest

Hadar has to add the imported power to the dest node’s equation. But maybe this node is not set up yet and its constraint equation doesn’t exist. Therefore Hadar stores all constraint equations and all link variables, and at the end build() is called, which adds the importation terms into all adequacy constraints to finalize the equations.

def build(self):
    """
    Call when all node are added. Apply all import flow for each node.

    :return:
    """
    # Apply import link in adequacy
    for (t, src, dest), var in self.importations.items():
        self.constraints[(t, dest)].SetCoefficient(var, 1)

The solve_batch method solves the study for one scenario. It iterates over nodes and time, calls InputMapper, then constructs the problem with the *Builder classes, and asks or-tools to solve it.

solve_lp applies the outer iteration over scenarios and is the entry point of the linear programming optimizer. After all scenarios are solved, results are mapped to the Result object.

_images/lpoptimizer.png
Or-tools, multiprocessing & pickle nightmare

Scenarios are distributed over cores by the multiprocessing library. solve_batch is the compute method called by multiprocessing, so all input data received by this method and all output data returned must be serializable by pickle (used by multiprocessing). However, the output contains or-tools Variable objects, which are not serializable.

Hadar doesn’t need the complete Variable object; it just wants the solution value found by or-tools. So we help pickle by creating a simpler object, carefully recreating the same solution_value() API to stay compliant with downstream code:

class SerializableVariable(DTO):
    def __init__(self, var: Variable):
        self.val = var.solution_value()

    def solution_value(self):
        return self.val

Then we specify explicitly how to serialize the object by implementing the __reduce__ method:

# hadar.optimizer.lp.domain.LPConsumption
def __reduce__(self):
    """
    Help pickle to serialize object, specially variable object
    :return: (constructor, values...)
    """
    return self.__class__, (self.quantity, SerializableVariable(self.variable), self.cost, self.name)

It should work, but in fact it doesn’t… For some reason, when multiprocessing wants to serialize the returned data, the or-tools Variables are empty and multiprocessing fails. Whatever; we just handle serialization ourselves:

# hadar.optimizer.lp.solver._solve_batch
return pickle.dumps(variables)
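
As a sketch of what this buys us, a pickle round trip now preserves the solution value; lp_consumption below stands for a hypothetical LPConsumption instance coming out of a solve:

import pickle

# __reduce__ swaps the live or-tools Variable for a SerializableVariable,
# so the solution value survives the round trip without a solver attached.
data = pickle.dumps(lp_consumption)  # lp_consumption: hypothetical LPConsumption
restored = pickle.loads(data)
restored.variable.solution_value()   # plain float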

Study

Study is an API object: it encapsulates all the data needed to compute adequacy. It’s the glue between workflow (or any other preprocessing) and optimizer. Study has a hierarchical structure of 4 levels:
  1. study level, with its set of networks and converters (Converter)
  2. network level (InputNetwork), with its set of nodes
  3. node level (InputNode), with its set of consumption, production, storage and link elements
  4. element level (Consumption, Production, Storage, Link). Depending on the element type, some attributes are numpy 2D matrices with shape (nb_scn, horizon)

The most important attribute is probably quantity, which represents the quantity of power used in the network. For a link, it’s a transfer capacity; for a production, a generation capacity; for a consumption, a forced load to sustain.

Fluent API Selector

User can construct Study step by step thanks to a Fluent API Selector

import hadar as hd

study = hd.Study(horizon=3)\
    .network()\
        .node('a')\
            .consumption(cost=10 ** 6, quantity=[20, 20, 20], name='load')\
            .production(cost=10, quantity=[30, 20, 10], name='prod')\
        .node('b')\
            .consumption(cost=10 ** 6, quantity=[20, 20, 20], name='load')\
            .production(cost=10, quantity=[10, 20, 30], name='prod')\
        .link(src='a', dest='b', quantity=[10, 10, 10], cost=2)\
        .link(src='b', dest='a', quantity=[10, 10, 10], cost=2)\
    .build()

optim = hd.LPOptimizer()
res = optim.solve(study)

In the optimizer’s case, the Fluent API Selector is implemented by the NetworkFluentAPISelector and NodeFluentAPISelector classes. As you can guess from the example above, the optimizer’s rules for the API Selector are:

  • The API flow begins with network() and ends with build()
  • You can only go downstream, step by step (i.e. network(), then node(), then consumption())
  • But you can go upstream as you want (i.e. jump directly from consumption() to network() or converter())

To help the user, the quantity and cost fields are flexible (see the sketch after this list):

  • lists are converted to numpy arrays
  • if the user gives a scalar, hadar extends it to create a (scenario, horizon) sized matrix
  • if the user gives a (horizon, ) matrix or list, hadar copies it nb_scn times to make a (scenario, horizon) sized matrix
  • if the user gives a (scenario, 1) matrix or list, hadar copies the timestep horizon times to make a (scenario, horizon) sized matrix
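
A minimal sketch of this broadcasting (values are illustrative): with nb_scn=2 and horizon=3, the three consumptions below end up with exactly the same (2, 3) quantity matrix.

import hadar as hd
import numpy as np

study = hd.Study(horizon=3, nb_scn=2)\
    .network()\
        .node('a')\
            .consumption(cost=10 ** 6, quantity=20, name='scalar')\
            .consumption(cost=10 ** 6, quantity=[20, 20, 20], name='horizon')\
            .consumption(cost=10 ** 6, quantity=20 * np.ones((2, 3)), name='full')\
    .build()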

Study also includes a check mechanism to make sure that nodes exist, consumption names are unique, etc.

Result

Result looks like Study: it has the same hierarchical structure and the same elements, just with different naming to respect Domain Driven Design. Indeed, Result is used as the computation output, so we can’t reuse the same objects. Result is the glue between optimizer and analyzer (or any other postprocessing).

Result shouldn’t be created by the user; the user only reads it. So Result has no fluent API to help with construction.

Analyzer

For high abstraction and to stay technology-agnostic, Hadar uses objects as glue around the optimizer. Objects are nice, but too cumbersome to manipulate for data analysis. Analyzer contains tools to help analyze studies and results.

Today, there is only ResultAnalyzer, with two feature levels:

  • high level: the user directly asks for aggregates such as global cost and global remaining capacity.
  • low level: the user builds a query and gets raw data inside a pandas DataFrame.

Before describing these features, let’s see how the data is transformed.

Flatten Data

As said above, objects are nice for encapsulating data and representing it in an agnostic form. Objects can be serialized into JSON or something else to be used by other software, maybe in another language. But keeping objects to analyze data is awful.

Python has a very efficient tool for data analysis: pandas. The challenge, therefore, is to transform objects into pandas DataFrames. The solution is to flatten the data into a table.
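
As a sketch, flattening means producing one row per (network, node, element, scenario, timestep) combination; the values below mirror the consumption table that follows:

import pandas as pd

rows = [
    {'cost': 10, 'asked': 5, 'given': 5, 'node': 'fr', 'name': 'load',
     'scn': 0, 't': 0, 'network': 'default'},
    {'cost': 10, 'asked': 7, 'given': 7, 'node': 'fr', 'name': 'load',
     'scn': 0, 't': 1, 'network': 'default'},
]
flat = pd.DataFrame(rows)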

Consumption

Take consumption as an example. The data in Study is cost and asked quantity; in Result it’s cost (the same) and given quantity. This tuple (cost, asked, given) exists for each node, each consumption attached to that node, each scenario and each timestep. To flatten the data, we need to fill this table:

cost  asked  given  node  name  scn  t  network
10    5      5      fr    load  0    0  default
10    7      7      fr    load  0    1  default
10    7      5      fr    load  1    0  default
10    6      6      fr    load  1    1  default

Building this table is the purpose of _build_consumption(study: Study, result: Result) -> pd.DataFrame.

Production

Productions follow the same pattern. However, they don’t have asked and given quantities but available and used ones. The table therefore looks like:

cost  avail  used  node  name  scn  t  network
10    100    21    fr    coal  0    0  default
10    100    36    fr    coal  0    1  default
10    100    12    fr    coal  1    0  default
10    100    81    fr    coal  1    1  default

It’s done by the _build_production(study: Study, result: Result) -> pd.DataFrame method.

Storage

Storages follow the same pattern. The table looks like:

max_capacity  capacity  max_flow_in  flow_in  max_flow_out  flow_out  cost  init_capacity  eff  node  name  scn  t  network
12000         678       400          214      400           0         10    0              .99  fr    cell  0    0  default
12000         892       400          53       400           0         10    0              .99  fr    cell  0    1  default
12000         945       400          0        400           87        10    0              .99  fr    cell  1    0  default
12000         853       400          0        400           0         10    0              .99  fr    cell  1    1  default

It’s done by the _build_storage(study: Study, result: Result) -> pd.DataFrame method.

Converter

Converters follow the same pattern, just split into two tables. One for the source elements:

max   ratio  flow  node  name  scn  t  network
100   .4     52    fr    conv  0    0  default
100   .4     87    fr    conv  0    1  default
100   .4     23    fr    conv  1    0  default
100   .4     58    fr    conv  1    1  default

It’s done by the _build_src_converter(study: Study, result: Result) -> pd.DataFrame method.

And another for the destination elements; the tables are nearly identical. The source has a special attribute called ratio, and the destination a special attribute called cost:

max   cost  flow  node  name  scn  t  network
100   20    52    fr    conv  0    0  default
100   20    87    fr    conv  0    1  default
100   20    23    fr    conv  1    0  default
100   20    58    fr    conv  1    1  default

It’s done by the _build_dest_converter(study: Study, result: Result) -> pd.DataFrame method.

Low level analysis power with a FluentAPISelector

When you observe the flat data, there are two kinds of columns: content, like cost, given, asked; and indices, described by node, name, scn, t.

The low level analysis API provided by ResultAnalyzer lets the user:

  1. Organize the index levels, for example time, then scenario, then name, then node.
  2. Filter the indices, for example just timesteps 10 to 150, just the ‘fr’ node, etc.

The user can say: I want the ‘fr’ node productions for the first scenario, from timestep 50 to 60. In this case ResultAnalyzer will return:

          used  cost  avail
t   name
50  oil     21     …     …
    coal    36     …     …
60  oil     12     …     …
    coal    81     …     …

If a leading index like node or scenario has only one element, it is removed.

This result is produced by this line of code:

agg = hd.ResultAnalyzer(study, result)
df = agg.network().node('fr').scn(0).time(slice(50, 60)).production()

For the analyzer, the Fluent API respects these rules (an ordering sketch follows the list):

  • The API flow begins with network()
  • The API flow must contain exactly one each of node(), time() and scn()
  • The API flow must contain exactly one of link(), production() or consumption()
  • Except for network(), the API has no fixed order; the order the user chooses defines the data hierarchy.
  • Given these rules, an API flow is always 5 elements long.
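
A sketch of the free ordering: with the agg analyzer built above, the same filters given in a different order return the same data with a different index hierarchy:

# hierarchy: node, then scenario, then time window
df1 = agg.network().node('fr').scn(0).time(slice(50, 60)).production()

# same filters, time first: time becomes the top index level
df2 = agg.network().time(slice(50, 60)).node('fr').scn(0).production()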

Behind this mechanism, there are Index objects, as you can see directly in the code:

...
self.consumption = lambda x=None: self._append(ConsIndex(x))
...
self.time = lambda x=None: self._append(TimeIndex(x))
...

Each kind of index has to inherit from the Index class. An Index object encapsulates the column metadata to use and the range of filtered elements to keep (accessible by overriding the __getitem__ method). Hadar then has child classes with the right parameters: ConsIndex, ProdIndex, NodeIndex, ScnIndex, TimeIndex, LinkIndex, DestIndex. For example, you can find the NodeIndex implementation below:

class NodeIndex(Index[str]):
    """Index implementation to filter nodes"""
    def __init__(self):
        Index.__init__(self, column='node')
_images/ulm-index.png

Index instantiation is completely hidden from the user. Then, hadar will:

  1. check that the mandatory indices are given, with the _assert_index method.
  2. pivot the table to recreate the indexing according to the filters and sort asked, with the _pivot method.
  3. remove one-size top-level indices, with the _remove_useless_index_level method.

As you can see, the low level analysis provides an efficient way to extract data from an adequacy study result. However, the data returned remains rather raw and is not ready for business purposes.

High Level Analysis

Unlike the low level, the high level focuses on providing ready-to-use data; its features have to be designed one by one for each business purpose. Today we have 2 features, with a usage sketch after the list:

  • get_cost(self, node: str) -> np.ndarray: according to the node given, returns a (scenario, horizon) shaped matrix with the summed cost.
  • get_balance(self, node: str) -> np.ndarray: according to the node given, returns a (scenario, horizon) shaped matrix with the exchange balance (i.e. sum of exports minus sum of imports).
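
A short usage sketch, reusing the study and res objects from the optimizer example (node ‘a’ comes from that example):

import numpy as np
import hadar as hd

agg = hd.ResultAnalyzer(study, res)
cost = agg.get_cost('a')        # (nb_scn, horizon) matrix
balance = agg.get_balance('a')  # exports minus imports, same shape
np.mean(cost, axis=0)           # average cost per timestep over scenarios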

Viewer

Even with the highest level analyzer features, data remains simple matrices or tables. Viewer is the last link of the Hadar framework; it creates plots to bring out the most valuable data for human analysis.

Viewer uses the Analyzer API to build plots. It’s like an extra layer converting numeric results into visual results.

Viewer is split into two domains. The first part implements the FluentAPISelector, uses ResultAnalyzer to compute results and performs the last computations before displaying graphics. This behaviour is coded inside the *FluentAPISelector classes.

These classes are used directly by the user when asking for graphics:

plot = ...
plot.network().node('fr').consumption('load').gaussian(t=4)
plot.network().map(t=0, scn=0)
plot.network().node('de').stack(scn=7)

For Viewer, Fluent API has these rules:

  • API begins by network.
  • User can only go downstream step by step into data. He must specify element choice at each step.
  • When he reaches wanted scope (network, node, production, etc), he can call graphics available for the current scope.

The second part of Viewer is only about plotting. Hadar can handle many different plotting libraries and technologies: a new plotting backend just has to implement ABCPlotting and ABCElementPlotting. Today one HTML implementation exists, built on the plotly library, inside HTMLPlotting and HTMLElementPlotting.

Data sent to the plotting classes is complete, pre-computed and ready to display.

Linear Model

The main optimizer is LPOptimizer. It creates a linear programming problem representing the network adequacy. We will build the mathematical problem step by step:

  1. Basic adequacy equations

  2. Add lack of adequacy terms (loss of load and spillage)

    As you will see, \(\Gamma_x\) represents a quantity in the network, \(\overline{\Gamma_x}\) is its maximum, \(\underline{\Gamma_x}\) its minimum, and \(\overline{\underline{\Gamma_x}}\) is both maximum and minimum, i.e. a forced quantity. Upper case greek letters denote quantities, and the matching lower case greek letters (e.g. \(\gamma_x\)) denote the costs associated with those quantities.

Basic adequacy

Let’s begin with the basic adequacy behavior. We have a graph \(G(N, L)\) with a set \(N\) of nodes and a set \(L\) of unidirectional edges.

Variables

  • \(n \in N\) a node of the graph
  • \(T \in \mathbb{Z}_+\) time horizon

Edge variables

  • \(l \in L\) a unidirectional edge of the graph
  • \(\overline{\Gamma_l} \in \mathbb{R}^T_+\) maximum power transfer capacity of \(l\)
  • \(\Gamma_l \in \mathbb{R}^T_+\) power transferred through \(l\)
  • \(\gamma_l \in \mathbb{R}^T_+\) proportional cost when \(\Gamma_l\) is used
  • \(L^n_\uparrow \subset L\) set of edges directed toward node \(n\) (i.e. imports for \(n\))
  • \(L^n_\downarrow \subset L\) set of edges directed away from node \(n\) (i.e. exports for \(n\))

Productions variables

  • \(P^n\) set of productions attached to node \(n\)
  • \(p \in P^n\) a production inside set of productions attached to node \(n\)
  • \(\overline{\Gamma_p} \in \mathbb{R}^T_+\) maximum power capacity available for \(p\) production.
  • \(\Gamma_p \in \mathbb{R}^T_+\) power capacity of \(p\) used during adequacy
  • \(\gamma_p \in \mathbb{R}^T_+\) proportional cost when \(\Gamma_p\) is used

Consumptions variables

  • \(C^n\) set of consumptions attached to node \(n\)
  • \(c \in C^n\) a consumption inside set of consumptions attached to node \(n\)
  • \(\underline{\overline{\Gamma_c}} \in \mathbb{R}^T_+\) forced consumptions of \(c\) to sustain.

Objective

\[\begin{split}\begin{array}{rcl} objective & = & \min{\Omega_{transmission} + \Omega_{production}} \\ \Omega_{transmission} &=& \sum^{L}_{l}{\Gamma_l*{\gamma_l}} \\ \Omega_{production} & = & \sum^N_n \sum^{P^n}_{p}{\Gamma_p * {\gamma_p}} \end{array}\end{split}\]

Constraint

The first constraint comes from Kirchhoff’s law and describes the balance between productions and consumptions:

\[\begin{array}{rcl} \Pi_{kirchhoff} &:& \forall n &,& \underbrace{\sum^{C^n}_{c}{\underline{\overline{\Gamma_c}}} + \sum^{L^n_{\downarrow}}_{l}{ \Gamma_l }}_{Consuming\ Flow} = \underbrace{\sum^{P^n}_{p}{ \Gamma_p } + \sum^{L^n_{\uparrow}}_{l}{ \Gamma_l }}_{Producing\ Flow} \end{array}\]

Then productions and edges need to be bounded:

\[\begin{split}\begin{array}{rcl} \Pi_{Edge\ bound} &:& \forall l \in L &,& 0 \le \Gamma_{l} \le \overline{\Gamma_l} \\ \Pi_{Prod\ bound} &:& \left\{ \begin{array}{cl} \forall n \in N \\ \forall p \in P^n \end{array} \right. &,& 0 \le \Gamma_p \le \overline{\Gamma_p} \end{array}\end{split}\]
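
To make this concrete, here is a minimal or-tools sketch of the basic adequacy problem for a single node, a single production and a single timestep; all numbers are illustrative and this is not Hadar’s actual code:

from ortools.linear_solver import pywraplp

solver = pywraplp.Solver.CreateSolver('GLOP')

load = 20  # forced consumption (fixed quantity)

# Gamma_p: production used, bounded by its capacity (Pi_prod_bound)
prod = solver.NumVar(0, 30, 'prod')

# Kirchhoff balance on the node: consuming flow = producing flow
solver.Add(load == prod)

# Objective: minimize gamma_p * Gamma_p
solver.Minimize(10 * prod)
solver.Solve()
prod.solution_value()  # -> 20.0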

Lack of adequacy

Variables

Sometimes there is a lack of adequacy because there is not enough production; this is called loss of load.

Just as \(\Gamma_x\) means a quantity present in the network, \(\Lambda_x\) represents a lack in the network (consumption or production) preventing adequacy. As for \(\Gamma_x\), the lower case greek letter \(\lambda_x\) is the cost associated with this lack.
  • \(\Lambda_c \in \mathbb{R}^T_+\) loss of load for consumption \(c\)
  • \(\lambda_c \in \mathbb{R}^T_+\) proportional cost when \(\Lambda_c\) is used

Objective

The objective gains a new term:

\[\begin{split}\begin{array}{rcl} objective & = & \min{\Omega_{...} + \Omega_{lol}}\\ \Omega_{lol} & = & \sum^N_n \sum^{C^n}_{c}{\Lambda_c * {\lambda_c}} \end{array}\end{split}\]

Constraints

Kirchhoff’s law needs an update too. Loss of load is represented as a phantom import of energy to reach adequacy:

\[\begin{array}{rcl} \Pi_{kirchhoff} &:& \forall n \in N &,& [Consuming\ Flow] = [Producing\ Flow] + \sum^{C^n}_{c}{ \Lambda_c } \end{array}\]

Loss of load must be bounded:

\[\begin{split}\begin{array}{rcl} \Pi_{Lol\ bound} &:& \left\{ \begin{array}{cl} \forall n \in N \\ \forall c \in C^n \end{array} \right. &,& 0 \le \Lambda_c \le \overline{\underline{\Gamma_c}} \end{array}\end{split}\]

Storage

Variables

Storage is an element in Hadar that stores quantity on a node. We have:

  • \(S^n\) : set of storage attached to node \(n\)
  • \(s \in S^n\) a storage element inside a set of storage attached to node \(n\)
  • \(\Gamma_s\) current capacity inside storage \(s\)
  • \(\overline{ \Gamma_s }\) max capacity for storage \(s\)
  • \(\Gamma_s^0\) initial capacity inside storage \(s\)
  • \(\gamma_s\) linear cost of capacity storage \(s\) for one time step
  • \(\Gamma_s^\downarrow\) input flow to storage \(s\)
  • \(\overline{ \Gamma_s^\downarrow }\) max input flow to storage \(s\)
  • \(\Gamma_s^\uparrow\) output flow from storage \(s\)
  • \(\overline{ \Gamma_s^\uparrow }\) max output flow from storage \(s\)
  • \(\eta_s\) storage efficiency for \(s\)

Objective

\[\begin{split}\begin{array}{rcl} objective & = & \min{\Omega_{...} + \Omega_{storage}} \\ \Omega_{storage} & = & \sum^N_n \sum^{S^n}_{s}{\Gamma_s * {\gamma_s}} \end{array}\end{split}\]

Constraints

Kirchhoff’s law needs an update too. Beware of the naming: an input flow for a storage is an output flow for the node, so it goes into the consuming flow. Likewise, an output flow for a storage is an input flow for the node and goes into the producing flow:

\[\begin{array}{rcl} \Pi_{kirchhoff} &:& \forall n \in N &,& [Consuming\ Flow] + \sum^{S^n}_{s}{\Gamma_s^\downarrow} = [Producing\ Flow] + \sum^{S^n}_{s}{\Gamma_s^\uparrow} \end{array}\]

And all these quantities are bounded:

\[\begin{split}\begin{array}{rcl} \Pi_{Store\ bound} &:& \left\{\begin{array}{cl} \forall n \in N \\ \forall s \in S^n \end{array} \right. &,& \begin{array}{rcl} 0 &\le& \Gamma_s &\le& \overline{\Gamma_s} \\ 0 &\le& \Gamma_s^\downarrow &\le& \overline{\Gamma_s^\downarrow} \\ 0 &\le& \Gamma_s^\uparrow &\le& \overline{\Gamma_s^\uparrow} \end{array} \end{array}\end{split}\]

Storage also brings a new constraint. This constraint applies over time to ensure capacity integrity: for example, if the volume at \(t-1\) is 100, 10 units flow in with \(\eta_s = 0.99\) and 5 flow out, the volume at \(t\) is \(100 + 9.9 - 5 = 104.9\).

\[\begin{split}\begin{array}{rcl} \Pi_{storage} &:& \left\{\begin{array}{cl} \forall n \in N \\ \forall s \in S^n \\ \forall t \in T \end{array} \right. &,& \Gamma_s[t] = \begin{cases} \Gamma_s^0 & t = 0 \\ \Gamma_s[t-1] & t > 0 \end{cases} + \Gamma_s^\downarrow[t] \cdot \eta_s - \Gamma_s^\uparrow[t] \end{array}\end{split}\]

Multi-Energies

Hadar handles multiple energies. In the code, one energy lives inside one network, so multi-energy means multi-network. Mathematically, they are all the same; that’s why we don’t talk about a multigraph. There is always one graph \(G\), nodes remain the same, with the same equations for every kind of energy.

The only difference is how we link nodes together. If nodes belong to the same network, we use the links (or edges) seen before. When nodes belong to different energies, we need to use a converter. Everything above remains true; we just add a new set \(V\) of converters to the graph, now \(G(N, L, V)\).

A converter can take energy from many nodes in different networks. Each converter input has a ratio between output quantity and input quantity. A converter has only one output, to only one node.

_images/converter.png

Variables

  • \(V\) set of converters
  • \(v \in V\) a converter in the set of converters
  • \(V^n_\uparrow \subset V\) set of converters to node \(n\)
  • \(V^n_\downarrow \subset V\) set of converters from node \(n\)
  • \(\Gamma_v^\uparrow\) flow from converter \(v\).
  • \(\overline{\Gamma_v^\uparrow}\) max flow from converter \(v\)
  • \(\gamma_v\) linear cost when \(\Gamma_v^\uparrow\) is used
  • \(\Gamma_v^\downarrow\) flow(s) to converter. They can have many flows for \(v \in V\), but only one for \(v \in V^n_\downarrow\)
  • \(\overline{\Gamma_v^\downarrow}\) max flow to converter
  • \(\alpha^n_v\) ratio conversion for converter \(v\) from node \(n\)

Objective

\[\begin{split}\begin{array}{rcl} objective & = & \min{\Omega_{...} + \Omega_{converter}} \\ \Omega_{converter} & = & \sum^V_v {\Gamma_v^\uparrow * \gamma_v} \end{array}\end{split}\]

Constraints

Of course Kirchhoff’s law needs a little update. As for storage, beware of the naming! A converter input is a consuming flow for the node; a converter output is a producing flow for the node:

\[\begin{array}{rcl} \Pi_{kirchhoff} &:& \forall n \in N &,& [Consuming\ Flow] + \sum^{V^n_\downarrow}_{v}{\Gamma_v^\downarrow} = [Producing\ Flow] + \sum^{V^n_\uparrow}_{v}{\Gamma_v^\uparrow} \end{array}\]

And all these quantities are bounded:

\[\begin{split}\begin{array}{rcl} \Pi_{Conv\ bound} &:& \left\{\begin{array}{cl} \forall n \in N \\ \forall v \in V^n \end{array} \right. &,& \begin{array}{rcl} 0 &\le& \Gamma_v^\downarrow &\le& \overline{\Gamma_v^\downarrow} \\ 0 &\le& \Gamma_v^\uparrow &\le& \overline{\Gamma_v^\uparrow} \end{array} \end{array}\end{split}\]

Now, we need to fix the conversion ratios with a new constraint:

\[\begin{split}\begin{array}{rcl} \Pi_{converter} &:& \left\{\begin{array}{cl} \forall n \in N \\ \forall v \in V^n_\downarrow \end{array} \right. &,& \begin{array}{rcl} \Gamma_v^\downarrow * \alpha^n_v &=& \Gamma_v^\uparrow \end{array} \end{array}\end{split}\]

Repository Organization

The Hadar repository is split into several parts:

  • hadar/ source code
  • tests/ unit and integration tests performed with unittest
  • examples/ a set of notebooks used as end-to-end tests when executed during CI, and as tutorials when exported to html
  • docs/ sphinx documentation hosted by readthedocs at https://docs.hadar-simulator.org . The main website is hosted by Github Pages and its source code can be found in this repository
  • .github/ github configuration to use Github Actions for CI

Ticketing

We use all of github’s features to organize development. We follow an Agile methodology and try to recreate Jira behavior in github. We therefore map Jira features to Github as follows:

Jira                Github
User Story / Bug    Issue
Version = Sprint    Project
Task                Check list in issue
Epic                Milestone

Devops

We follow the git flow pattern. Main development happens on the develop branch. We accept feature/** branches, but they are not mandatory.

CI pipelines are built on top of git flow; actions are summed up in the table below:

action     develop                            release/**          master
TU + IT    3.6, 3.7, 3.8 / linux, mac, win    linux-3.7           linux-3.7
E2E                                           from source code    from test.pypi.org
Sonar      yes                                yes                 yes
package                                       to test.pypi.org    to pypi.org

How to Contribute

First off, thank you for considering contributing to Hadar. We believe technology can change the world, but only a great community and open source can improve the world.

Following these guidelines helps to communicate that you respect the time of the developers managing and developing this open source project. In return, they should reciprocate that respect in addressing your issue, assessing changes, and helping you finalize your pull requests.

We try to describe most of Hadar’s behavior and organization to avoid any grey areas. Additionally, you can read the Dev Guide section or Architecture to learn hadar’s purposes and processes.

What kind of contribution?

You can participate in Hadar in many ways:

  • just use it and spread it !
  • write plugin and extension for hadar
  • Improve docs, code, examples
  • Add new features

The issue tracker is only for features, bugs or improvements, not for support. If you have questions, please go to TODO . Any support issue will be closed.

Feature / Improvement

Small changes can be sent directly as a pull request, such as:

  • Spelling / grammar fixes
  • Typo correction, white space and formatting changes
  • Comment clean up
  • Adding logging messages or debugging output

For everything else, you first need to create an issue. If the issue receives good feedback, you can then fork the project, work on your side and send a Pull Request.

Bug

If you find a security bug, please DON’T create an issue. Contact us at admin@hadar-simulator.org

First, be sure it’s a bug and not a misuse! Issues are not for technical support. To speed up bug fixing (and rule out misuse), you need to explain the bug clearly, with the simplest step-by-step guide to reproduce it. Give us all the details, like OS, Hadar version and so on.

Please provide answers to these questions:

- What version of Hadar and python are you using?
- What operating system and processor architecture are you using?
- What did you do?
- What did you expect to see?
- What did you see instead?

Best Practices

We try to write the clearest and most maintainable software we can. Your Pull Request has to follow some good practices:

  • respect PEP 8 style guide
  • use meaningful variable, method and class names
  • respect the SOLID, KISS, DRY and YAGNI principles
  • make code easily testable (use dependency injection)
  • test code (at least 80% unit test coverage)
  • add a docstring to each class and method

TL;DR: code like Uncle Bob!

hadar.workflow package

Submodules

hadar.workflow.pipeline module

class hadar.workflow.pipeline.RestrictedPlug(inputs: List[str] = None, outputs: List[str] = None)

Bases: hadar.workflow.pipeline.Plug

Implementation where a stage expects the presence of specific columns.

linkable_to(next) → bool

Determine if the next stage is linkable with the current one. In this implementation, the plug is linkable only if the inputs of the next stage are present in the outputs of the current stage.

Parameters:next – other stage to link
Returns:True if current outputs contain the mandatory columns for next inputs, else False
class hadar.workflow.pipeline.FreePlug

Bases: hadar.workflow.pipeline.Plug

Plug implementation for stages that can use any kind of DataFrame, whatever columns are present inside.

linkable_to(other: hadar.workflow.pipeline.Plug) → bool

Determine if the next stage is linkable with the current one. In this implementation, the plug is always linkable.

Parameters:other – other stage to link
Returns:always True
class hadar.workflow.pipeline.Stage(plug: hadar.workflow.pipeline.Plug)

Bases: abc.ABC

Abstract class which represents a unit of compute. Stages can be added together to create a workflow pipeline.

static build_multi_index(scenarios: Union[List[int], numpy.ndarray], names: List[str])

Create column multi index.

Parameters:
  • scenarios – list of scenarios serial
  • names – names of the data types present inside each scenario
Returns:

multi-index like [(scn, type), …]

static get_names(timeline: pandas.core.frame.DataFrame) → List[str]
static get_scenarios(timeline: pandas.core.frame.DataFrame) → numpy.ndarray
static standardize_column(timeline: pandas.core.frame.DataFrame) → pandas.core.frame.DataFrame

A timeline must have the first column level for scenario and the second for data type. Adds the 0th scenario index if not present.

Parameters:timeline – timeline with or without scenario index
Returns:timeline with scenario index
class hadar.workflow.pipeline.FocusStage(plug)

Bases: hadar.workflow.pipeline.Stage, abc.ABC

Stage which applies the same behaviour to each scenario independently.

class hadar.workflow.pipeline.Drop(names: Union[List[str], str])

Bases: hadar.workflow.pipeline.Stage

Drop columns by name.

class hadar.workflow.pipeline.Rename(**kwargs)

Bases: hadar.workflow.pipeline.Stage

Rename column names.

class hadar.workflow.pipeline.Fault(loss: float, occur_freq: float, downtime_min: int, downtime_max, seed: int = None)

Bases: hadar.workflow.pipeline.FocusStage

Generate random faults for each scenario.

class hadar.workflow.pipeline.RepeatScenario(n)

Bases: hadar.workflow.pipeline.Stage

Repeat n-time current scenarios.

class hadar.workflow.pipeline.ToShuffler(result_name: str)

Bases: hadar.workflow.pipeline.Rename

Connect a pipeline to the shuffler.

class hadar.workflow.pipeline.Pipeline(stages: List[T])

Bases: object

Compute many stages sequentially.

assert_computable(timeline: pandas.core.frame.DataFrame)

Verify timeline is computable by pipeline.

Parameters:timeline – timeline to check
Returns:True if computable, False otherwise
assert_to_shuffler()
class hadar.workflow.pipeline.Clip(lower: float = None, upper: float = None)

Bases: hadar.workflow.pipeline.Stage

Cut data according to upper and lower boundaries. Same as np.clip function.

hadar.workflow.shuffler module

class hadar.workflow.shuffler.Shuffler(sampler=<built-in method randint of numpy.random.mtrand.RandomState object>)

Bases: object

Receives all data sources, raw matrices or pipelines. Schedules pipeline generation and shuffles all timelines to create scenarios.

add_data(name: str, data: numpy.ndarray)

Add raw data as a numpy array. If you generate data by pipeline, use add_pipeline instead; it will parallelize computation and manage swapping.

Parameters:
  • name – timeline name
  • data – numpy array with shape as (scenario, horizon)
Returns:

self

add_pipeline(name: str, data: pandas.core.frame.DataFrame, pipeline: hadar.workflow.pipeline.Pipeline)

Add data by pipeline and input data for pipeline.

Parameters:
  • name – timeline name
  • data – data to use as pipeline input
  • pipeline – pipeline to generate data
Returns:

self

shuffle(nb_scn)

Start pipeline generation and shuffle result to create scenario sampling.

Parameters:nb_scn – number of scenarios to sample
Returns:
class hadar.workflow.shuffler.Timeline(data: numpy.ndarray = None, sampler=<built-in method randint of numpy.random.mtrand.RandomState object>)

Bases: object

Manage data used to generate timeline. Perform sampling too.

compute() → numpy.ndarray

Compute method called before sampling. For Timeline, this method just returns the data given in the constructor.

Returns:return data given in constructor
sample(nb) → numpy.ndarray

Perform sampling. Data must be computed beforehand.

Parameters:nb – number of sampling
Returns:scenario matrix shape like (nb, horizon)

Module contents

hadar.optimizer package

Subpackages

hadar.optimizer.lp package

Submodules
hadar.optimizer.lp.domain module
class hadar.optimizer.lp.domain.LPConsumption(quantity: int, variable: Union[ortools.linear_solver.pywraplp.Variable, hadar.optimizer.lp.domain.SerializableVariable], cost: float = 0, name: str = '')

Bases: hadar.optimizer.input.DTO

Consumption element for linear programming.

class hadar.optimizer.lp.domain.LPConverter(name: str, src_ratios: Dict[Tuple[str, str], float], var_flow_src: Dict[Tuple[str, str], Union[ortools.linear_solver.pywraplp.Variable, hadar.optimizer.lp.domain.SerializableVariable]], dest_network: str, dest_node: str, var_flow_dest: Union[ortools.linear_solver.pywraplp.Variable, hadar.optimizer.lp.domain.SerializableVariable], cost: float, max: float)

Bases: hadar.optimizer.input.DTO

Converter element for linear programming

class hadar.optimizer.lp.domain.LPLink(dest: str, cost: float, src: str, quantity: int, variable: Union[ortools.linear_solver.pywraplp.Variable, hadar.optimizer.lp.domain.SerializableVariable])

Bases: hadar.optimizer.input.DTO

Link element for linear programming

class hadar.optimizer.lp.domain.LPNetwork(nodes: Dict[str, hadar.optimizer.lp.domain.LPNode] = None)

Bases: hadar.optimizer.input.DTO

Network element for linear programming

class hadar.optimizer.lp.domain.LPNode(consumptions: List[hadar.optimizer.lp.domain.LPConsumption], productions: List[hadar.optimizer.lp.domain.LPProduction], storages: List[hadar.optimizer.lp.domain.LPStorage], links: List[hadar.optimizer.lp.domain.LPLink])

Bases: hadar.optimizer.input.DTO

Node element for linear programming

class hadar.optimizer.lp.domain.LPProduction(quantity: int, variable: Union[ortools.linear_solver.pywraplp.Variable, hadar.optimizer.lp.domain.SerializableVariable], cost: float = 0, name: str = 'in')

Bases: hadar.optimizer.input.DTO

Production element for linear programming.

class hadar.optimizer.lp.domain.LPStorage(name, capacity: int, var_capacity: Union[ortools.linear_solver.pywraplp.Variable, hadar.optimizer.lp.domain.SerializableVariable], flow_in: float, var_flow_in: Union[ortools.linear_solver.pywraplp.Variable, hadar.optimizer.lp.domain.SerializableVariable], flow_out: float, var_flow_out: Union[ortools.linear_solver.pywraplp.Variable, hadar.optimizer.lp.domain.SerializableVariable], cost: float = 0, init_capacity: int = 0, eff: float = 0.99)

Bases: hadar.optimizer.input.DTO

Storage element

class hadar.optimizer.lp.domain.LPTimeStep(networks: Dict[str, hadar.optimizer.lp.domain.LPNetwork], converters: Dict[str, hadar.optimizer.lp.domain.LPConverter])

Bases: hadar.optimizer.input.DTO

static create_like_study(study: hadar.optimizer.input.Study)
class hadar.optimizer.lp.domain.SerializableVariable(var: ortools.linear_solver.pywraplp.Variable)

Bases: hadar.optimizer.input.DTO

solution_value()
hadar.optimizer.lp.mapper module
class hadar.optimizer.lp.mapper.InputMapper(solver: ortools.linear_solver.pywraplp.Solver, study: hadar.optimizer.input.Study)

Bases: object

Input mapper from global domain to linear programming specific domain

get_conv_var(name: str, t: int, scn: int) → hadar.optimizer.lp.domain.LPConverter

Map Converter to LPConverter.

Parameters:
  • name – converter name
  • t – time step
  • scn – scenario index
Returns:

LPConverter

get_node_var(network: str, node: str, t: int, scn: int) → hadar.optimizer.lp.domain.LPNode

Map InputNode to LPNode.

Parameters:
  • network – network name
  • node – node name
  • t – time step
  • scn – scenario index
Returns:

LPNode according to node name at t in study

class hadar.optimizer.lp.mapper.OutputMapper(study: hadar.optimizer.input.Study)

Bases: object

Output mapper from specific linear programming domain to global domain.

get_result() → hadar.optimizer.output.Result

Get result.

Returns:final result after map all nodes
set_converter_var(name: str, t: int, scn: int, vars: hadar.optimizer.lp.domain.LPConverter)
set_node_var(network: str, node: str, t: int, scn: int, vars: hadar.optimizer.lp.domain.LPNode)

Map linear programming node to global node (set inside intern attribute).

Parameters:
  • network – network name
  • node – node name
  • t – timestamp index
  • scn – scenario index
  • vars – linear programming node with ortools variables inside
Returns:

None (use get_result)

hadar.optimizer.lp.optimizer module
class hadar.optimizer.lp.optimizer.AdequacyBuilder(solver: ortools.linear_solver.pywraplp.Solver)

Bases: object

Build adequacy flow constraint.

add_converter(conv: hadar.optimizer.lp.domain.LPConverter, t: int)

Add a converter element to the equation. Sources behave like consumptions, the destination like a production.

Parameters:
  • conv – converter element to add
  • t – time index to use
Returns:

add_node(name_network: str, name_node: str, node: hadar.optimizer.lp.domain.LPNode, t: int)

Add flow constraint for a specific node.

Parameters:
  • name_network – network name. Used to differentiate each equation
  • name_node – node name. Used to differentiate each equation
  • node – node to map constraint
Returns:

build()

Call when all node are added. Apply all import flow for each node.

Returns:
class hadar.optimizer.lp.optimizer.ConverterMixBuilder(solver: ortools.linear_solver.pywraplp.Solver)

Bases: object

Build equations to determine the ratio mix between converter sources.

add_converter(conv: hadar.optimizer.lp.domain.LPConverter)
build()
class hadar.optimizer.lp.optimizer.ObjectiveBuilder(solver: ortools.linear_solver.pywraplp.Solver)

Bases: object

Build objective cost function.

add_converter(conv: hadar.optimizer.lp.domain.LPConverter)

Add converter. Apply cost on output of converter.

Parameters:conv – converter to cost
Returns:
add_node(node: hadar.optimizer.lp.domain.LPNode)

Add cost in objective for each node element.

Parameters:node – node to add
Returns:
build()
class hadar.optimizer.lp.optimizer.StorageBuilder(solver: ortools.linear_solver.pywraplp.Solver)

Bases: object

Build storage constraints

add_node(name_network: str, name_node: str, node: hadar.optimizer.lp.domain.LPNode, t: int) → ortools.linear_solver.pywraplp.Constraint
build()
hadar.optimizer.lp.optimizer.solve_lp(study: hadar.optimizer.input.Study, out_mapper=None) → hadar.optimizer.output.Result

Solve adequacy flow problem with a linear optimizer.

Parameters:
  • study – study to compute
  • out_mapper – use only for test purpose to inject mock. Keep None as default.
Returns:

Result object with optimal solution

Module contents

hadar.optimizer.remote package

Submodules
hadar.optimizer.remote.optimizer module
exception hadar.optimizer.remote.optimizer.ServerError(mes: str)

Bases: Exception

hadar.optimizer.remote.optimizer.check_code(code)
hadar.optimizer.remote.optimizer.solve_remote(study: hadar.optimizer.input.Study, url: str, token: str = 'none') → hadar.optimizer.output.Result

Send study to remote server.

Parameters:
  • study – study to resolve
  • url – server url
  • token – authorized token (default server config doesn’t use token)
Returns:

result received from server

Module contents

Submodules

hadar.optimizer.input module

class hadar.optimizer.input.Consumption(quantity: Union[List[T], numpy.ndarray, float], cost: Union[List[T], numpy.ndarray, float], name: str = '')

Bases: hadar.optimizer.input.JSON

Consumption element.

static from_json(dict)

class hadar.optimizer.input.Link

Bases: hadar.optimizer.input.JSON

Link element

static from_json(dict)
class hadar.optimizer.input.Production(quantity: Union[List[T], numpy.ndarray, float], cost: Union[List[T], numpy.ndarray, float], name: str = 'in')

Bases: hadar.optimizer.input.JSON

Production element

static from_json(dict)
class hadar.optimizer.input.Storage(name, capacity: int, flow_in: float, flow_out: float, cost: float = 0, init_capacity: int = 0, eff: float = 0.99)

Bases: hadar.optimizer.input.JSON

Storage element

static from_json(dict)
class hadar.optimizer.input.Converter(name: str, src_ratios: Dict[Tuple[str, str], float], dest_network: str, dest_node: str, cost: float, max: float)

Bases: hadar.optimizer.input.JSON

Converter element

static from_json(dict: dict)
to_json() → dict
class hadar.optimizer.input.InputNetwork(nodes: Dict[str, hadar.optimizer.input.InputNode] = None)

Bases: hadar.optimizer.input.JSON

Network element

static from_json(dict)
class hadar.optimizer.input.InputNode(consumptions: List[hadar.optimizer.input.Consumption], productions: List[hadar.optimizer.input.Production], storages: List[hadar.optimizer.input.Storage], links: List[hadar.optimizer.input.Link])

Bases: hadar.optimizer.input.JSON

Node element

static from_json(dict)
class hadar.optimizer.input.Study(horizon: int, nb_scn: int = 1, version: str = None)

Bases: hadar.optimizer.input.JSON

Main object to facilitate building a study

add_link(network: str, src: str, dest: str, cost, quantity)

Add a link inside network.

Parameters:
  • network – network where nodes belong
  • src – source node name
  • dest – destination node name
  • cost – cost of use
  • quantity – transfer capacity
Returns:

add_network(network: str)
add_node(network: str, node: str)
static from_json(dict)
network(name='default')

Entry point to create a study with the fluent API.

Returns:
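As a sketch of the fluent API in action (the network, node names and values are illustrative; hd is assumed to be the top-level hadar package):

    import hadar as hd

    study = hd.Study(horizon=3) \
        .network() \
            .node('a') \
                .consumption(name='load', cost=10**6, quantity=[20, 20, 20]) \
                .production(name='prod', cost=10, quantity=[30, 30, 30]) \
            .node('b') \
                .production(name='prod', cost=5, quantity=[10, 10, 10]) \
            .link(src='b', dest='a', cost=2, quantity=[10, 10, 10]) \
        .build()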
class hadar.optimizer.input.NetworkFluentAPISelector(study, selector)

Bases: object

Network level of Fluent API Selector.

build()

Build study.

Returns:return study
converter(name: str, to_network: str, to_node: str, max: float, cost: float = 0)

Add a converter element.

Parameters:
  • name – converter name
  • to_network – converter output network
  • to_node – converter output node on network
  • max – maximum quantity from converter
  • cost – cost for each quantity produced by the converter
Returns:

link(src: str, dest: str, cost, quantity)

Add a link on network.

Parameters:
  • src – node source
  • dest – node destination
  • cost – unit cost transfer
  • quantity – available capacity
Returns:

NetworkAPISelector with new link.

network(name='default')

Go to network level.

Parameters:name – network name, ‘default’ as default
Returns:NetworkAPISelector with selector set to ‘default’
node(name)

Go to node level.

Parameters:name – node to select when changing level
Returns:NodeFluentAPISelector initialized
class hadar.optimizer.input.NodeFluentAPISelector(study, selector)

Bases: object

Node level of Fluent API Selector

build()

Build study.

Returns:study
consumption(name: str, cost: int, quantity: Union[List[T], numpy.ndarray, float])

Add consumption on node.

Parameters:
  • name – consumption name
  • cost – cost of unsuitability
  • quantity – consumption to sustain
Returns:

NodeFluentAPISelector with new consumption

converter(name: str, to_network: str, to_node: str, max: float, cost: float = 0)

Add a converter element.

Parameters:
  • name – converter name
  • to_network – converter output network
  • to_node – converter output node on network
  • max – maximum quantity from converter
  • cost – cost for each quantity produced by the converter
Returns:

link(src: str, dest: str, cost, quantity)

Add a link on network.

Parameters:
  • src – node source
  • dest – node destination
  • cost – unit cost transfer
  • quantity – available capacity
Returns:

NetworkAPISelector with new link.

network(name='default')

Go to network level.

Parameters:name – network name, ‘default’ as default
Returns:NetworkAPISelector with selector set to ‘default’
node(name)

Go to a different node level.

Parameters:name – new node level
Returns:NodeFluentAPISelector
production(name: str, cost: int, quantity: Union[List[T], numpy.ndarray, float])

Add production on node.

Parameters:
  • name – production name
  • cost – unit cost of use
  • quantity – available capacities
Returns:

NodeFluentAPISelector with new production

storage(name, capacity: int, flow_in: float, flow_out: float, cost: float = 0, init_capacity: int = 0, eff: float = 0.99)

Create storage.

Parameters:
  • name – storage name
  • capacity – maximum storage capacity (i.e. how much quantity the storage can hold)
  • flow_in – max flow into the storage during one time-step
  • flow_out – max flow out of the storage during one time-step
  • cost – unit cost of storage at each time-step. default 0
  • init_capacity – initial capacity level. default 0
  • eff – storage efficiency (applied on input flow stored). default 0.99
to_converter(name: str, ratio: float = 1)

Add an output to a converter.

Parameters:
  • name – converter name
  • ratio – ratio for output
Returns:
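A sketch chaining storage and converter declarations through the fluent API (names, ratios and capacities are illustrative):

    import hadar as hd

    study = hd.Study(horizon=2) \
        .network('gas') \
            .node('plant') \
                .production(name='gas-prod', cost=5, quantity=30) \
                .to_converter(name='conv', ratio=0.5) \
        .network('elec') \
            .node('a') \
                .consumption(name='load', cost=10**6, quantity=10) \
                .storage(name='cell', capacity=10, flow_in=5, flow_out=5) \
        .converter(name='conv', to_network='elec', to_node='a', max=20, cost=10) \
        .build()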

hadar.optimizer.optimizer module

class hadar.optimizer.optimizer.LPOptimizer

Bases: hadar.optimizer.optimizer.Optimizer

Basic optimizer that works with linear programming.

solve(study: hadar.optimizer.input.Study) → hadar.optimizer.output.Result

Solve adequacy study.

Parameters:study – study to resolve
Returns:study’s result
class hadar.optimizer.optimizer.RemoteOptimizer(url: str, token: str = '')

Bases: hadar.optimizer.optimizer.Optimizer

Use a remote optimizer to compute in the cloud.

solve(study: hadar.optimizer.input.Study) → hadar.optimizer.output.Result

Solve adequacy study.

Parameters:study – study to resolve
Returns:study’s result
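Both optimizers expose the same solve entry point; a minimal sketch (the remote URL is a placeholder):

    import hadar as hd

    # local linear-programming solve
    res = hd.LPOptimizer().solve(study)

    # or delegate the same study to a remote server (placeholder URL)
    res = hd.RemoteOptimizer(url='http://localhost:5000').solve(study)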

hadar.optimizer.output module

class hadar.optimizer.output.OutputProduction(quantity: Union[numpy.ndarray, list], cost: Union[numpy.ndarray, list], name: str = 'in')

Bases: hadar.optimizer.input.JSON

Production element

static from_json(dict)
class hadar.optimizer.output.OutputNode(consumptions: List[hadar.optimizer.output.OutputConsumption], productions: List[hadar.optimizer.output.OutputProduction], storages: List[hadar.optimizer.output.OutputStorage], links: List[hadar.optimizer.output.OutputLink])

Bases: hadar.optimizer.input.JSON

Node element

static build_like_input(input: hadar.optimizer.input.InputNode, fill: numpy.ndarray)

Use an input node to create an output node. The same elements are kept, but their quantities are filled from the given array.

Parameters:
  • input – InputNode to copy
  • fill – array to use to fill data
Returns:

OutputNode like InputNode with all quantity at zero

static from_json(dict)
class hadar.optimizer.output.OutputStorage(name: str, capacity: Union[numpy.ndarray, list], flow_in: Union[numpy.ndarray, list], flow_out: Union[numpy.ndarray, list])

Bases: hadar.optimizer.input.JSON

Storage element

static from_json(dict)

class hadar.optimizer.output.OutputLink

Bases: hadar.optimizer.input.JSON

Link element

static from_json(dict)
class hadar.optimizer.output.OutputConsumption(quantity: Union[numpy.ndarray, list], cost: Union[numpy.ndarray, list], name: str = '')

Bases: hadar.optimizer.input.JSON

Consumption element

static from_json(dict)
class hadar.optimizer.output.OutputNetwork(nodes: Dict[str, hadar.optimizer.output.OutputNode])

Bases: hadar.optimizer.input.JSON

Network element

static from_json(dict)
class hadar.optimizer.output.OutputConverter(name: str, flow_src: Dict[Tuple[str, str], Union[numpy.ndarray, List[T]]], flow_dest: Union[numpy.ndarray, List[T]])

Bases: hadar.optimizer.input.JSON

Converter element

static from_json(dict: dict)
to_json() → dict
class hadar.optimizer.output.Result(networks: Dict[str, hadar.optimizer.output.OutputNetwork], converters: Dict[str, hadar.optimizer.output.OutputConverter])

Bases: hadar.optimizer.input.JSON

Result of study

static from_json(dict)

Module contents

hadar.analyzer package

Submodules

hadar.analyzer.result module

class hadar.analyzer.result.ResultAnalyzer(study: hadar.optimizer.input.Study, result: hadar.optimizer.output.Result)

Bases: object

Single object to encapsulate all postprocessing aggregation.

static check_index(indexes: List[hadar.analyzer.result.Index], type: Type[CT_co])

Check indexes cohesion.

Parameters:
  • indexes – list of indexes
  • type – Index type to check inside list
Returns:True if at least one index of this type is in the list, False otherwise

filter(indexes: List[hadar.analyzer.result.Index]) → pandas.core.frame.DataFrame

Aggregate according to index level and filter.

get_balance(node: str, network: str = 'default') → numpy.ndarray

Compute balance over time on asked node.

Parameters:
  • node – node asked
  • network – network asked. Default is ‘default’
Returns:

timeline array with balance of exchange values

get_cost(node: str, network: str = 'default') → numpy.ndarray

Compute adequacy cost on a node.

Parameters:
  • node – node name
  • network – network name, ‘default’ as default
Returns:

matrix (scn, time)

get_elements_inside(node: str, network: str = 'default')

Get the number of each kind of element on a node.

Parameters:
  • network – network name
  • node – node name
Returns:

(nb of consumptions, nb of productions, nb of storages, nb of links (export), nb of converters (export), nb of converters (import))

get_rac(network='default') → numpy.ndarray

Compute Remaining Available Capacities (RAC) on network.

Parameters:network – network to select for computation. Default is ‘default’.
Returns:matrix (scn, time)
horizon

Shortcut to get study horizon.

Returns:study horizon
nb_scn

Shortcut to get study number of scenarios.

Returns:study number of scenarios
network(name='default')

Entry point for fluent API.

Parameters:name – network name. ‘default’ as default
Returns:Fluent API Selector

nodes(network: str = 'default') → List[str]

Shortcut to get list of node names

Parameters:network – network selected
Returns:nodes name
class hadar.analyzer.result.NetworkFluentAPISelector(indexes: List[hadar.analyzer.result.Index], analyzer: hadar.analyzer.result.ResultAnalyzer)

Bases: object

Fluent Api Selector to analyze network element.

User can chain network, node, consumption, production, link, time and scn to create a filter and organize the hierarchy. Joins can be done in any order, except:
  • the join must begin with network
  • each join is unique: only one element of node, time, scn is expected for each query
  • production, consumption and link are mutually exclusive: only one of them is expected for each query

FULL_DESCRIPTION = 5
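A sketch of typical analyzer use, assuming study and result come from a previous solve; the exact fluent chain below is illustrative and follows the join rules above:

    import hadar as hd

    agg = hd.ResultAnalyzer(study=study, result=result)

    rac = agg.get_rac()                  # matrix (scn, time)
    balance = agg.get_balance(node='a')  # exchange balance on node 'a'

    # fluent filtering: fix scenario and node, browse consumption over time
    df = agg.network().scn(0).node('a').consumption('load').time()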

Module contents

hadar.viewer package

Submodules

hadar.viewer.abc module

class hadar.viewer.abc.ABCElementPlotting

Bases: abc.ABC

Abstract interface to implement in order to plot graphics

candles(open: numpy.ndarray, close: numpy.ndarray, title: str)

Plot candle stick with open and close data.

Parameters:
  • open – candle open data
  • close – candle close data
  • title – title to plot
Returns:

gaussian(rac: numpy.ndarray, qt: numpy.ndarray, title: str)

Plot gaussian.

Parameters:
  • rac – Remain Available Capacities matrix (to plot green or red point)
  • qt – value vector
  • title – title to plot
Returns:

map_exchange(nodes, lines, limit, title, zoom)

Plot map with exchanges as arrow.

Parameters:
  • nodes – node to set on map
  • lines – arrows to set on map
  • limit – colorscale limit to use
  • title – title to plot
  • zoom – zoom to set on map
Returns:

matrix(data: numpy.ndarray, title)

Plot matrix (heatmap)

Parameters:
  • data – 2D matrix to plot
  • title – title to plot
Returns:

monotone(y: numpy.ndarray, title: str)

Plot monotone.

Parameters:
  • y – value vector
  • title – title to plot
Returns:

stack(areas: List[Tuple[str, numpy.ndarray]], lines: List[Tuple[str, numpy.ndarray]], title: str)

Plot stack.

Parameters:
  • areas – list of timelines to stack with area
  • lines – list of timelines to stack with line
  • title – title to plot
Returns:

timeline(df: pandas.core.frame.DataFrame, title: str)

Plot timeline with all scenarios.

Parameters:
  • df – dataframe with scenario on columns and time on index
  • title – title to plot
Returns:
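These methods form the contract a plotting backend must honour. As a hypothetical sketch, a custom backend would subclass ABCElementPlotting and implement them (a single method is shown; a real backend implements the whole interface):

    import numpy as np
    from hadar.viewer.abc import ABCElementPlotting

    class TextPlotting(ABCElementPlotting):
        """Hypothetical text-only backend: prints instead of drawing."""
        def monotone(self, y: np.ndarray, title: str):
            # sort values in decreasing order, as a monotone plot expects
            print(title, np.sort(y)[::-1])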

class hadar.viewer.abc.ABCPlotting(agg: hadar.analyzer.result.ResultAnalyzer, unit_symbol: str = '', time_start=None, time_end=None, node_coord: Dict[str, List[float]] = None)

Bases: abc.ABC

Abstract class to plot optimizer results.

network(network: str = 'default')

Entry point to use fluent API.

Parameters:network – select network to analyze. Default is ‘default’
Returns:NetworkFluentAPISelector
class hadar.viewer.abc.ConsumptionFluentAPISelector(plotting: hadar.viewer.abc.ABCElementPlotting, agg: hadar.analyzer.result.ResultAnalyzer, network: str, name: str, node: str, kind: str)

Bases: hadar.viewer.abc.FluentAPISelector

Consumption level of fluent api.

gaussian(t: int = None, scn: int = None)

Plot gaussian graphics

Parameters:
  • t – focus on t index
  • scn – focus on scn index if t not given
Returns:

monotone(t: int = None, scn: int = None)

Plot monotone graphics.

Parameters:
  • t – focus on t index
  • scn – focus on scn index if t not given
Returns:

timeline()

Plot timeline graphics.

Returns:

class hadar.viewer.abc.DestConverterFluentAPISelector(plotting: hadar.viewer.abc.ABCElementPlotting, agg: hadar.analyzer.result.ResultAnalyzer, network: str, node: str, name: str)

Bases: hadar.viewer.abc.FluentAPISelector

Destination converter level of fluent api

gaussian(t: int = None, scn: int = None)

Plot gaussian graphics

Parameters:
  • t – focus on t index
  • scn – focus on scn index if t not given
Returns:

monotone(t: int = None, scn: int = None)

Plot monotone graphics.

Parameters:
  • t – focus on t index
  • scn – focus on scn index if t not given
Returns:

timeline()

Plot timeline graphics.

Returns:

class hadar.viewer.abc.FluentAPISelector(plotting: hadar.viewer.abc.ABCElementPlotting, agg: hadar.analyzer.result.ResultAnalyzer)

Bases: abc.ABC

static not_both(t: int, scn: int)
class hadar.viewer.abc.LinkFluentAPISelector(plotting: hadar.viewer.abc.ABCElementPlotting, agg: hadar.analyzer.result.ResultAnalyzer, network: str, src: str, dest: str, kind: str)

Bases: hadar.viewer.abc.FluentAPISelector

Link level of fluent api

gaussian(t: int = None, scn: int = None)

Plot gaussian graphics

Parameters:
  • t – focus on t index
  • scn – focus on scn index if t not given
Returns:

monotone(t: int = None, scn: int = None)

Plot monotone graphics.

Parameters:
  • t – focus on t index
  • scn – focus on scn index if t not given
Returns:

timeline()

Plot timeline graphics.

Returns:

class hadar.viewer.abc.NetworkFluentAPISelector(plotting: hadar.viewer.abc.ABCElementPlotting, agg: hadar.analyzer.result.ResultAnalyzer, network: str)

Bases: hadar.viewer.abc.FluentAPISelector

Network level of fluent API

map(t: int, zoom: int, scn: int = 0, limit: int = None)

Plot map exchange graphics

Parameters:
  • t – t index to focus
  • zoom – zoom to set
  • scn – scn index to focus
  • limit – color scale limit to use
Returns:

node(node: str)

Go to node level of fluent API.

Parameters:node – node name
Returns:NodeFluentAPISelector

rac_matrix()

Plot RAC matrix graphics.

Returns:
class hadar.viewer.abc.NodeFluentAPISelector(plotting: hadar.viewer.abc.ABCElementPlotting, agg: hadar.analyzer.result.ResultAnalyzer, network: str, node: str)

Bases: hadar.viewer.abc.FluentAPISelector

Node level of fluent api

consumption(name: str, kind: str = 'given') → hadar.viewer.abc.ConsumptionFluentAPISelector

Go to consumption level of fluent API

Parameters:
  • name – select consumption name
  • kind – kind of data ‘asked’ or ‘given’
Returns:

from_converter(name: str)

Go to converter importation level of fluent API.

Parameters:name – converter name
Returns:

link(dest: str, kind: str = 'used')

Go to link level of fluent API.

Parameters:
  • dest – select destination node name
  • kind – kind of data available (‘avail’) or ‘used’
Returns:

production(name: str, kind: str = 'used') → hadar.viewer.abc.ProductionFluentAPISelector

Go to production level of fluent API

Parameters:
  • name – select production name
  • kind – kind of data available (‘avail’) or ‘used’
Returns:

stack(scn: int = 0, prod_kind: str = 'used', cons_kind: str = 'asked')

Plot with production stacked with area and consumptions stacked by dashed lines.

Parameters:
  • scn – scenario index to plot.
  • prod_kind – select which prod to stack : available (‘avail’) or ‘used’
  • cons_kind – select which cons to stack : ‘asked’ or ‘given’
Returns:

plotly figure or jupyter widget to plot

storage(name: str) → hadar.viewer.abc.StorageFluentAPISelector

Go to storage level of fluent API.

Parameters:name – select storage name
Returns:
to_converter(name: str)

Go to converter exportation level of fluent API.

Parameters:name – converter name
Returns:

class hadar.viewer.abc.ProductionFluentAPISelector(plotting: hadar.viewer.abc.ABCElementPlotting, agg: hadar.analyzer.result.ResultAnalyzer, network: str, name: str, node: str, kind: str)

Bases: hadar.viewer.abc.FluentAPISelector

Production level of fluent api

gaussian(t: int = None, scn: int = None)

Plot gaussian graphics

Parameters:
  • t – focus on t index
  • scn – focus on scn index if t not given
Returns:

monotone(t: int = None, scn: int = None)

Plot monotone graphics.

Parameters:
  • t – focus on t index
  • scn – focus on scn index if t not given
Returns:

timeline()

Plot timeline graphics.

Returns:

class hadar.viewer.abc.SrcConverterFluentAPISelector(plotting: hadar.viewer.abc.ABCElementPlotting, agg: hadar.analyzer.result.ResultAnalyzer, network: str, node: str, name: str)

Bases: hadar.viewer.abc.FluentAPISelector

Source converter level of fluent api

gaussian(t: int = None, scn: int = None)

Plot gaussian graphics

Parameters:
  • t – focus on t index
  • scn – focus on scn index if t not given
Returns:

monotone(t: int = None, scn: int = None)

Plot monotone graphics.

Parameters:
  • t – focus on t index
  • scn – focus on scn index if t not given
Returns:

timeline()

Plot timeline graphics.

Returns:

class hadar.viewer.abc.StorageFluentAPISelector(plotting: hadar.viewer.abc.ABCElementPlotting, agg: hadar.analyzer.result.ResultAnalyzer, network: str, node: str, name: str)

Bases: hadar.viewer.abc.FluentAPISelector

Storage level of fluent API

candles(scn: int = 0)
monotone(t: int = None, scn: int = None)

Plot monotone graphics.

Parameters:
  • t – focus on t index
  • scn – focus on scn index if t not given
Returns:

hadar.viewer.html module

class hadar.viewer.html.HTMLPlotting(agg: hadar.analyzer.result.ResultAnalyzer, unit_symbol: str = '', time_start=None, time_end=None, node_coord: Dict[str, List[float]] = None)

Bases: hadar.viewer.abc.ABCPlotting

Plotting implementation for interactive HTML graphics (uses plotly).
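A sketch of end-to-end plotting, assuming agg is a ResultAnalyzer from the previous section (node names and coordinates are illustrative):

    import hadar as hd

    plot = hd.HTMLPlotting(agg=agg, unit_symbol='MW',
                           node_coord={'a': [2.33, 48.86], 'b': [4.38, 50.83]})

    plot.network().node('a').stack(scn=0)  # production/consumption stack
    plot.network().rac_matrix()            # RAC heatmap over (scn, time)
    plot.network().map(t=0, zoom=3)        # exchange map (needs node_coord)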

Module contents