Human-First AI is the solution to real-world problems we ran into over hundreds of person-years of experience building and deploying industrial and enterprise AI systems at Arimo-Panasonic.
Like many pioneers in the discipline of Data Science, our team has a strong grounding in the theory of machine learning (one of us was even a professor), but applying ML to most of the real world outside of digitally native companies quickly ran into a host of problems. These problems fall into three key patterns:
- Can’t Start: in reality, today, most industrial applications and environments have insufficient data to leverage machine learning. We call this the Cold-Start problem: despite any organizational desire to take advantage of machine learning in the enterprise, it is not possible to get going without significant time and financial investment in data collection and retention.
- Can’t Profit: tools and techniques for the practice of Data Science are still in the relative Stone Age compared to, say, Software Engineering. For example, while useful, Python notebooks are unstructured, messy, hard to reuse, and harder still to share or collaborate on. This makes the data scientist’s life miserably unproductive compared to that of a software engineer. We call this the ROI problem: effectively, our data scientists are too expensive relative to the potential return of any problem they may be asked to work on.
- Can’t Deploy: people (operators, consumers, regulators, and customers) do not “trust” machine learning, or are at least nervous about it. This is because ML models exhibit a strange, new behavior in computing systems. People are far more used to software-engineered code that is imperative: humans explicitly create rules and procedures for machines to execute. Such code can be examined and, in a broad sense, understood or “explained”. ML models, on the other hand, have no explicit code, only numeric parameters that are “learned” from data examples. In this sense, ML models are “black boxes”. And yet they help make decisions that can have enormous human impact, potentially with built-in destructive biases, and without the straightforward explainability we attribute to software-engineered systems. As a result, even after we build ML models, we cannot deploy them. We call this the Trustworthy-AI problem.
Human-First AI (H1st) grew out of a collection of tools, techniques, systems, and processes that we built over the years to overcome the challenges above. We have drawn on our years of experience in industrial domain expertise, software engineering, and machine learning to build a robust, principled framework for data science.