H1st Architectural Overview

H1st comprises a Workbench and an API framework.

The H1st Workbench is a cloud-based UI for data scientists to develop and deploy H1st-AI workflows.

The H1st API framework is embedded within the Workbench.

H1st Objects


An H1st Graph is essentially a flowchart, modeling a real-world workflow. It is a directed graph describing an execution flow (and not a data flow). There may be conditionals and loops in an H1st Graph.

H1st Graphs describe both high-level business workflows and low-level model-to-model execution flows.
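To make the idea concrete, here is a minimal sketch of a directed execution graph with a conditional branch. The class and method names below are hypothetical stand-ins for illustration, not the actual H1st API.

```python
# Illustrative sketch only: a minimal directed execution graph with a
# conditional branch, mirroring the H1st Graph concept. Names are made up.

class Node:
    def __init__(self, name, fn):
        self.name, self.fn = name, fn
        self.yes, self.no = None, None   # outgoing edges for branching

def execute(node, data):
    """Walk the graph, following the edge chosen by each node's output."""
    while node is not None:
        data, branch = node.fn(data)
        node = node.yes if branch else node.no
    return data

# A tiny workflow: read a sensor value, then branch on a threshold.
read = Node("read", lambda d: (d, d["value"] > 50))
alert = Node("alert", lambda d: ({**d, "action": "alert"}, False))
log = Node("log", lambda d: ({**d, "action": "log"}, False))
read.yes, read.no = alert, log

result = execute(read, {"value": 72})
# result["action"] == "alert"
```

Note that the graph describes *execution* order, not data flow: each node decides which node runs next.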


Nodes are the elements that connect to form a Graph. An H1st Node has inputs and outputs, and may contain a NodeContainable within it.


NodeContainables are anything that may be contained within a Node; most often, that is a Model. A Graph is itself a NodeContainable, so H1st Graphs are hierarchical: a Graph may contain a sub-Graph, and so on.
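The hierarchy can be sketched as follows. These are illustrative classes, not the real H1st ones: the point is only that a Graph and a Model share a common containable base, so a Graph can sit inside another Graph's Node.

```python
# Hypothetical sketch of the containment hierarchy. A Graph can contain a
# sub-Graph because both Graph and Model derive from NodeContainable.

class NodeContainable:
    def execute(self, data):
        raise NotImplementedError

class Model(NodeContainable):
    def execute(self, data):
        return {**data, "score": data["x"] * 2}   # stand-in behavior

class Graph(NodeContainable):
    def __init__(self, steps):
        self.steps = steps               # ordered NodeContainables
    def execute(self, data):
        for step in self.steps:          # run each contained element in turn
            data = step.execute(data)
        return data

inner = Graph([Model()])                 # a sub-Graph containing a Model
outer = Graph([inner])                   # a Graph containing the sub-Graph
outer.execute({"x": 3})                  # {'x': 3, 'score': 6}
```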


H1st Models are the workhorses of the entire framework. At its base, a Model is the simplest unit that “models” a behavior: it takes some inputs, processes them, and emits some outputs.


A ProcessModel mainly exists to distinguish between predictive and non-predictive models.


A DataAnnotatingModel takes data as input and annotates or labels it. Such models are mainly used to prepare training data for PredictiveModels.


A DataGeneratingModel takes some parameters as inputs, and outputs data used as training data for PredictiveModels.


A DataAugmentingModel is like a DataGeneratingModel, but used in the context of expanding or adding to some existing training data.
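As a concrete (and entirely illustrative) example of data augmentation, the toy class below expands an existing labeled dataset by adding jittered copies of each example. The class name echoes the text, but the internals are made up, not H1st's implementation.

```python
# Toy DataAugmentingModel: expands existing training data by appending
# noise-jittered copies of each example, preserving the labels.
import random

class DataAugmentingModel:
    def __init__(self, copies=2, noise=0.1, seed=0):
        self.copies, self.noise = copies, noise
        self.rng = random.Random(seed)   # seeded for reproducibility

    def process(self, dataset):
        """dataset: list of (features, label) pairs; returns an expanded list."""
        augmented = list(dataset)
        for features, label in dataset:
            for _ in range(self.copies):
                jittered = [x + self.rng.uniform(-self.noise, self.noise)
                            for x in features]
                augmented.append((jittered, label))   # label is preserved
        return augmented

data = [([1.0, 2.0], "a"), ([3.0, 4.0], "b")]
bigger = DataAugmentingModel(copies=2).process(data)
len(bigger)   # 6: the 2 originals plus 2 jittered copies of each
```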


A PredictiveModel is a Model that interpolates or extrapolates; more generally, it outputs inferences based on the parameters it holds. Those parameters may be set externally, or learned automatically from data.


An MLModel is a PredictiveModel that obtains its parameters via machine learning. It has a standardized API including methods such as load_data(), explore(), train(), predict(), and evaluate().
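The lifecycle implied by that API can be sketched as below. The method names come from the text; the internals are a deliberately trivial stand-in (a learned threshold classifier), not H1st's implementation.

```python
# Hedged sketch of the MLModel lifecycle: load_data -> train -> predict ->
# evaluate. The "learning" here is a toy threshold between class means.

class ThresholdMLModel:
    def load_data(self):
        # In practice this would read from storage; hard-coded here.
        return {"X": [1.0, 2.0, 8.0, 9.0], "y": [0, 0, 1, 1]}

    def train(self, data):
        # "Learn" a threshold as the midpoint between the two class means.
        pos = [x for x, y in zip(data["X"], data["y"]) if y == 1]
        neg = [x for x, y in zip(data["X"], data["y"]) if y == 0]
        self.threshold = (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

    def predict(self, X):
        return [1 if x >= self.threshold else 0 for x in X]

    def evaluate(self, data):
        preds = self.predict(data["X"])
        return sum(p == y for p, y in zip(preds, data["y"])) / len(preds)

m = ThresholdMLModel()
data = m.load_data()
m.train(data)            # threshold = (8.5 + 1.5) / 2 = 5.0
m.evaluate(data)         # 1.0 on this toy training set
```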


A RuleBasedModel is a PredictiveModel that follows Boolean logic at its core.
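For illustration only, a rule-based model's predictions come from hand-written Boolean conditions rather than learned parameters. The rules and field names below are invented:

```python
# Illustrative RuleBasedModel: the decision is pure Boolean logic over the
# input reading; nothing is learned from data.

class VibrationRuleModel:
    def predict(self, reading):
        # Flag a fault when either hand-written rule fires.
        overheating = reading["temp_c"] > 90
        excessive_vibration = (reading["rpm"] > 3000
                               and reading["amplitude"] > 0.5)
        return {"fault": overheating or excessive_vibration}

VibrationRuleModel().predict({"temp_c": 95, "rpm": 1200, "amplitude": 0.1})
# {'fault': True}
```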


A FuzzyLogicModel is a PredictiveModel that uses fuzzy logic for its outputs.
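A minimal sketch of the fuzzy-logic idea, with made-up membership functions and rules (not the H1st implementation): a crisp input is mapped to degrees of membership in fuzzy sets, and the output is defuzzified by a weighted average of the rule outputs.

```python
# Illustrative fuzzy-logic sketch: map a crisp temperature to "cold"/"hot"
# memberships, then defuzzify a fan-speed output by weighted average.

def cold(t):   # membership in "cold" falls from 1 at 0 degC to 0 at 30 degC
    return max(0.0, min(1.0, (30 - t) / 30))

def hot(t):    # membership in "hot" rises from 0 at 20 degC to 1 at 40 degC
    return max(0.0, min(1.0, (t - 20) / 20))

def fan_speed(t):
    # Rules: if cold -> speed 10; if hot -> speed 90. Blend by membership.
    mu_cold, mu_hot = cold(t), hot(t)
    return (mu_cold * 10 + mu_hot * 90) / (mu_cold + mu_hot)

fan_speed(35)   # entirely 'hot' here, so the speed is 90.0
```

Between 20 and 30 degrees, both memberships are non-zero, so the output interpolates smoothly between the two rules rather than switching abruptly as a Boolean rule would.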


A KalmanFilterModel is a PredictiveModel that uses Kalman filters for its inferences.
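As an illustration of the idea (a standard one-dimensional Kalman filter tracking a constant value, not the H1st API), each noisy measurement is blended with the running estimate according to the Kalman gain:

```python
# Sketch of a Kalman-filter model making inferences from noisy measurements.
# One-dimensional, constant-state case; names are illustrative.

class KalmanFilterModel:
    def __init__(self, x0=0.0, p0=1.0, process_var=1e-4, meas_var=0.1):
        self.x, self.p = x0, p0                  # state estimate and variance
        self.q, self.r = process_var, meas_var

    def predict(self, measurement):
        # Time update: state assumed constant, uncertainty grows by q.
        self.p += self.q
        # Measurement update: blend prediction and measurement by the gain.
        k = self.p / (self.p + self.r)
        self.x += k * (measurement - self.x)
        self.p *= (1 - k)
        return self.x

kf = KalmanFilterModel()
for z in [5.1, 4.9, 5.2, 5.0, 4.8]:
    estimate = kf.predict(z)
# estimate converges toward the true value near 5.0
```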

H1st Object Capabilities


Trustable is one of the most important capabilities of H1st objects, most notably of H1st Models. It comprises each of the capabilities described below.


To be trusted, an object (Model) must be Auditable. It means that the object self-records an audit trail of its provenance and decisions, so that authorized personnel can query the object with questions like, “When was this model trained? Who did it? What was the data it was trained with? What biases exist in that training data? What is the history of decisions made by this model and what were the corresponding input data?”
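A hypothetical sketch of what an Auditable capability might look like as a mixin that self-records provenance and decision history. This mirrors the idea in the text; it is not the real H1st interface.

```python
# Illustrative Auditable mixin: every significant event (training,
# prediction) is appended to a queryable audit trail with a timestamp.
from datetime import datetime, timezone

class AuditableMixin:
    def __init__(self):
        self.audit_trail = []

    def record(self, event, **details):
        self.audit_trail.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "event": event,
            **details,
        })

class AuditedModel(AuditableMixin):
    def train(self, data, trained_by):
        self.record("train", trained_by=trained_by, n_samples=len(data))

    def predict(self, x):
        decision = x > 0                       # stand-in decision logic
        self.record("predict", input=x, output=decision)
        return decision

m = AuditedModel()
m.train([1, 2, 3], trained_by="alice")
m.predict(5)
[e["event"] for e in m.audit_trail]   # ['train', 'predict']
```

With such a trail, questions like "when was this model trained, by whom, and on how much data?" become simple queries over recorded events.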


To be trusted, an object (Model) must at minimum be Describable. This means that the object is self-describing when queried, answering questions such as, “What is the intent of this model, and how is it designed to achieve that intent?”


To be trusted, an object (Model) must also be Explainable. It means that authorized personnel can query the object with questions like, “What are the top 3 most important input features for this model? Why did this model make that particular decision in that way?”
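An illustrative sketch of the Explainable query above: a model that can report its most important input features on demand. The feature names and weights below are hard-coded stand-ins for whatever a real model would compute.

```python
# Illustrative Explainable model: answers "what are the top-k most
# important input features?" by ranking per-feature weights.

class ExplainableModel:
    def __init__(self):
        # Per-feature weights a linear model might have learned (made up).
        self.weights = {"temperature": 0.7, "pressure": -0.2,
                        "humidity": 0.05, "vibration": 0.9}

    def explain(self, top_k=3):
        """Rank features by the magnitude of their weight."""
        ranked = sorted(self.weights, key=lambda f: abs(self.weights[f]),
                        reverse=True)
        return ranked[:top_k]

ExplainableModel().explain()   # ['vibration', 'temperature', 'pressure']
```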


To be trusted, an object (Model) must further be Debiasable. It is an inescapable rule that all Models are biased, because all data samples are biased. The only questions are whether the axes of those biases lie along protected characteristics, and what the impact of those biases is. If an undesirable impact is likely to be felt, a Debiasable Model must allow its output to be corrected, so as to remove or mitigate that impact. Furthermore, a Debiasable Model may allow its output to be adjusted so as to model the world as it should be, rather than the world as it is in the data.


It’s important to differentiate an Explanation made to a Consumer from one made to a Regulator: each Constituency has different interests and must be addressed differently.



Each question asked of a Model belongs to a particular Aspect of that Model.