The H1st Model is one of the core concepts of H1st, and it is central to the way H1st works. A Model presents a uniform interface to its users, whether the underlying implementation is boolean logic, fuzzy logic derived from human intuition, a Scikit-learn random forest, or a TensorFlow neural network. This makes it easy to use and combine Models in Graphs or Ensembles.
The easiest way to understand an H1st Model is to implement one. The Model class provides all the interfaces needed to manage the life cycle of a model.
Below is an example of an H1st Model that uses an underlying Scikit-learn model for digits classification.
To create an H1st Model, start by creating a new class that subclasses h1.Model. Then populate its methods: get_data() to fetch the data, prep() to preprocess it, and of course train(), evaluate() and predict().
Using the model is the same sequence of method calls, whatever the underlying implementation. Pay close attention to the parameters of the methods, and note that the train-val data splitting is done in prep(), and that most data parameters are Python dictionaries whose keys and values (such as train_x and test_x) the data scientist is free to design.
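The implementation and usage described above can be sketched as follows. This is a minimal standalone illustration, not H1st's actual code: the hypothetical DigitsModel class would subclass h1.Model in a real project (a plain class stands in here so the sketch runs on its own), and the dictionary keys are one possible choice by the data scientist.

```python
from sklearn import datasets, metrics
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split


class DigitsModel:  # in a real H1st project: class DigitsModel(h1.Model)
    """Sketch of an H1st-style Model wrapping a Scikit-learn classifier."""

    def __init__(self):
        self.base_model = RandomForestClassifier(random_state=42)

    def get_data(self):
        # Fetch the raw dataset.
        digits = datasets.load_digits()
        return {"x": digits.data, "y": digits.target}

    def prep(self, data):
        # The train-val split lives here; the dictionary keys
        # (train_x, test_x, ...) are the data scientist's choice.
        x_train, x_test, y_train, y_test = train_test_split(
            data["x"], data["y"], test_size=0.2, random_state=42)
        return {"train_x": x_train, "train_y": y_train,
                "test_x": x_test, "test_y": y_test}

    def train(self, prepared_data):
        self.base_model.fit(prepared_data["train_x"], prepared_data["train_y"])

    def evaluate(self, prepared_data):
        pred = self.base_model.predict(prepared_data["test_x"])
        return {"accuracy": metrics.accuracy_score(prepared_data["test_y"], pred)}

    def predict(self, input_data):
        return {"pred": self.base_model.predict(input_data["x"])}


# The same workflow steps apply regardless of the underlying model type:
model = DigitsModel()
data = model.get_data()
prepared_data = model.prep(data)
model.train(prepared_data)
print(model.evaluate(prepared_data))  # accuracy is typically well above 0.9
```

Note that callers only ever see get_data/prep/train/evaluate/predict and plain dictionaries; the Scikit-learn specifics stay hidden inside the class.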
The beauty of this API is that we can keep the same workflow steps for all kinds of models, whether they are boolean/fuzzy-logic models or ML models!
H1st AI supports easy, out-of-the-box persisting and loading of sklearn and tf.keras models to a model repository (other model types can be added). This makes it much simpler to include Models in larger workflows such as H1st Graphs or Ensembles, and it helps data science teams be much more productive.
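Conceptually, persisting amounts to serializing the trained underlying model into a versioned location inside the repository. The following rough, standalone sketch illustrates the idea with pickle and a plain directory; the persist/load helper names and layout here are hypothetical stand-ins, not H1st's actual repository implementation.

```python
import os
import pickle
import tempfile


def persist(model, repo_path, name, version="v1"):
    """Serialize a model object into a folder-based repository (sketch)."""
    model_dir = os.path.join(repo_path, name)
    os.makedirs(model_dir, exist_ok=True)
    with open(os.path.join(model_dir, f"{version}.pkl"), "wb") as f:
        pickle.dump(model, f)


def load(repo_path, name, version="v1"):
    """Load a previously persisted model object back (sketch)."""
    with open(os.path.join(repo_path, name, f"{version}.pkl"), "rb") as f:
        return pickle.load(f)


# Round-trip a toy object through a temporary "repository" folder.
with tempfile.TemporaryDirectory() as repo:
    persist({"weights": [1, 2, 3]}, repo, "digits_model")
    restored = load(repo, "digits_model")
    print(restored)  # prints {'weights': [1, 2, 3]}
```

A real repository adds model-type-aware serialization (e.g. tf.keras save formats instead of pickle) and S3 as an alternative backend, but the persist-by-name-and-version shape is the same.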
A model repository is simply a folder on local disk or on S3. We call h1.init(), specifying MODEL_REPO_PATH. Alternatively, the path can be picked up automatically from the project’s config.py.
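A setup sketch, assuming the h1 alias used throughout this text (the exact import path may differ between h1st versions):

```python
import h1st as h1  # import path is an assumption; adjust to your h1st version

h1.init(MODEL_REPO_PATH="./models")  # a local folder, or an S3 location
```

With the repository configured, persisting and loading a trained Model becomes a one-line call at the end of the workflow.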