How to Build an AI Strategy - part 3
This is the third article in a series dedicated to providing a blueprint for building an AI Strategy for the enterprise. In the previous two articles, we defined what an Actionable Strategy represents and began to explore the methodology that could be applied to building that strategy. In the preface article to this series, we also introduced a simple yet updated ethical framework for AI. In this article, we're going to take a look at what Artificial Intelligence represents in the context of the current Industry Innovation Horizon (which we defined in the previous article, but which for our purposes here represents the next 3 to 5 years). We'll start with a brief conceptual definition of Artificial Intelligence, then move to a pragmatic review of how AI has been and is being productized for possible adoption.
AI Defined (circa 2020)
Several years back, I made an attempt to define the overall genre of AI capability, but within the context of potentially re-branding it. In other words, I attempted a core definition, highlighted where it was problematic and then recommended a new way to think about and refer to it. At the end of that examination, I recommended using the term Artificial Thought, though perhaps the term "Augmented Intelligence" might make more sense for branding. My rationale then was that these terms better approximated the set of productized capabilities that has emerged over the past 5 to 10 years. While I still believe that, it looks as though "AI" as Artificial rather than Augmented will remain the overarching term used to define this industry space for the foreseeable future, so let's revisit the top-level definition for AI before introducing product / capability categories within that space.
Top-level definition of Artificial Intelligence
Trying to define this raises the question - can the definition for an entire field of related technologies be separated from the most common or popular types of products associated with the field? In some cases the answer is more obvious than in others - for AI, while we can provide an over-arching definition, it's perhaps not as useful as we'd like it to be (which was one of the reasons I had gone through the exercise described above, which resulted in the prospective industry term Augmented Intelligence). The reason I believe the definitions are not yet as useful as they could be (referred to here as 'they' rather than 'it' because there isn't one standard definition) is that the current definitions tend to outpace actual capability. Here are three recent examples of the top-level definition for AI:
The theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.
Google Dictionary
In computer science, artificial intelligence, sometimes called machine intelligence, is intelligence demonstrated by machines, in contrast to the natural intelligence displayed by humans.
Wikipedia
Artificial intelligence (AI) refers to the simulation of human intelligence in machines that are programmed to think like humans and mimic their actions. The term may also be applied to any machine that exhibits traits associated with a human mind such as learning and problem-solving.
Investopedia
This is a rather small sampling of definitions, but as you've probably noticed, none of them seems particularly helpful to someone trying to determine what any given AI capability is or to decide which of those ought to be adopted within their organization. The first definition is perhaps the most pragmatic, if somewhat vague, and tends to mirror current product categories. The second definition is incredibly vague, and the third is fairly misleading given the current state of the technology in question. To some extent, each of these definitions could also apply to almost all previous computer-related technology (if we discount the whole mimicking-human-intelligence theme, which in truth AI can't really do yet anyway). For example, Speech Recognition technology has been around a long time - it was simply done differently before; AI is making it more efficient but didn't create it. So, is there a way we can come up with a meaningful yet pragmatic top-level definition that encompasses the entire subject? Here's an attempt…
Artificial Intelligence is a unified yet complex technology domain sharing a non-standard set of high-level goals for approximating or augmenting human cognitive capability. While the technologies have evolved significantly in the six decades since the term was coined, the perceived goals have remained relatively consistent. Many of those goals are still well out of reach, but others are now well-represented across a variety of AI products on the market. The current set of technologies tends to follow a fairly non-integrative approach, each tackling tactical problem sets across industry domains through common AI capabilities. The near-term innovation horizon is unlikely to produce integrated AI solutions or human-like intelligence - however, the available tactical capabilities can still be "logically integrated" within and across an enterprise context.
This definition doesn't drive into what the goals are, partly because, beyond the Turing test, there is no rock-solid set of definitions or expectations for what AI should be able to accomplish other than to have the solution do some "thinking" (though not in the sense of self-aware human thought) without explicit instructions for how to do that thinking. What the definition above tries to do is: a) provide the top-level philosophical definition not for AI of all time but for AI here and now; b) provide some background or context about the history of AI; c) highlight that there aren't many standards yet in this field - especially in regard to expectations; and d) emphasize that today's AI is "Tactical AI," while implying that this will change. However, we've emphasized that Tactical AI can still be viewed and managed within an enterprise context even if it is narrowly focused in the near term. In other words, the definition opens up the possibility that the real challenge for AI and AI Strategy may be in the coordination of many discrete AI capabilities across and within enterprise-level missions (and architecture). Thus, this top-level definition combines both conceptual and practical considerations that apply to our immediate Innovation Horizon.
Once we dive directly into what constitutes AI capability categories, things don't get any easier. This level of confusion surrounding top-level definitions and expectations is always problematic with emerging technologies, but in the case of AI it's far more complex than usual. Over the past few years, the AI market has shifted its focus quite a bit, and providing top-level categories is in some cases not much more helpful than referring simply to AI. The best example of this is the relationship of Machine Learning (ML) to AI. The ML capability category can be split up from a product perspective (platforms vs. applications), but it can also be split by sub-categories based upon the techniques and algorithms employed (supervised, unsupervised and reinforcement learning - each of which has its own subcategories). ML can also be divided by specific computational approaches (sometimes equivalent to the subcategories within it) and, of course, is also being directed into industry-domain-specific solutions as well. Moreover, Machine Learning is spilling over into other AI capabilities, producing somewhat hybrid AI solutions (still mostly tactical in nature). So, how does someone respond if leadership asks "should we be adopting Machine Learning, and if so, where?" Is ML the category, or are the products branded as ML or ML-based the focus? And given the recent pervasiveness of ML in deployed products on the market, we might even have to ask whether AI and ML have in fact become synonymous (at least for now, from a practical standpoint).
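The supervised vs. unsupervised split mentioned above can be made concrete with a toy sketch. Everything below is invented for illustration (a hand-rolled nearest-neighbor classifier and a tiny one-dimensional two-means clustering routine); it is not a recommendation of any product or library, just a minimal way to see the difference between learning from labeled examples and finding structure without labels:

```python
# Supervised learning: labels are given; the model maps features to a label.
def nearest_neighbor_classify(train, point):
    """Predict the label of the closest labeled example (1-D features)."""
    closest = min(train, key=lambda ex: abs(ex[0] - point))
    return closest[1]

# Unsupervised learning: no labels; the model finds structure on its own.
def two_means_cluster(points, iters=10):
    """Split unlabeled 1-D points into two groups (a tiny k-means, k=2)."""
    c0, c1 = min(points), max(points)  # initial centroids
    for _ in range(iters):
        g0 = [p for p in points if abs(p - c0) <= abs(p - c1)]
        g1 = [p for p in points if abs(p - c0) > abs(p - c1)]
        c0 = sum(g0) / len(g0)
        c1 = sum(g1) / len(g1)
    return sorted(g0), sorted(g1)

labeled = [(1.0, "low"), (1.2, "low"), (8.9, "high"), (9.4, "high")]
print(nearest_neighbor_classify(labeled, 1.1))   # -> low
print(two_means_cluster([1.0, 1.2, 8.9, 9.4]))   # -> ([1.0, 1.2], [8.9, 9.4])
```

Note that the same four data points drive both functions - only the presence or absence of labels changes which technique applies, which is exactly why "should we adopt ML?" is an under-specified question without knowing what data and problem the organization actually has.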
The illustration above is a conceptual (process) view of how capability alignment occurs in strategic development. In an actual AI strategy, there could be a handful or dozens of capabilities that must be aligned or rationalized. Understanding the definitions of what's being assessed and rationalized makes the difference between strategic success and failure.
AI and the Hype Cycle
This brings up an interesting point in relation to hype cycles and how any / all IT capability ought to be assessed and / or acquired. Typically in a Hype Cycle, IT leadership responds to or is motivated by various external references (word of mouth, trade shows, magazines, etc.). In those situations, the focus is on the trend buzzword, and that buzzword can refer to either the underlying technology or the product class and is typically ill-defined from the perspective of potential adopters. For example, in the early 2000s the buzzword "E-learning" referred to the top-level trend and product class, as opposed to specific underlying technologies or standards, but that wasn't entirely clear to organizations that were interested in it. In other words, E-learning became synonymous with Learning Management Systems (LMSs) as a product class, and while there were attempts to associate it with specific standards (for example, SCORM), ultimately the field remained somewhat flexible in regard to the standards (and in some cases the lack of them) for delivering product capability.
Moodle, a popular open-source LMS platform, now supports four different standards, some of which are radically different in their intent and approach from the others. Other LMSs have similar standards support but are agnostic in regard to underlying technology (Java, scripting, Cloud, etc.). And in the case of E-learning, an excessive focus on the LMS product category became highly problematic for the field, as it severely limited the perception of what could or couldn't be done with Learning Technology. E-learning has been trying to undo this problem for nearly 20 years. At first, focusing on a product category may have seemed a pragmatic way of adopting Learning Technology, but without the larger context of how that would fit within or enhance other related activities and processes, the industry fell flat.
In the case described above, the buzzword trend "E-learning" and the related IT domain (roughly analogous to the AI topic construct) translated mainly to a single product class (LMS-related), deployed using a non-specific set of underlying technologies and a diverse set of standards, yet failed to address the majority of real-world Learning Use Cases (or only tangentially satisfied them). One of the problems at first was that the E-learning industry became too "prescriptive" in regard to which standards ought to be used; that was eventually corrected, but the main problem was that the overall reaction (both by the folks trying to build the E-learning market and by the organizations wishing to adopt learning technology) became overly tactical. In other words, for many people the goal became deploying the LMS, rather than using an LMS to help evolve training opportunities. Similar situations occurred with other trends such as SOA and Big Data. There is a very delicate balance between merely deploying capability within a reasonable window and deploying meaningful capability in a reasonable timeframe - capability that both solves immediate problems and paves the way toward solving others.
If that seems confusing, well - it's fairly tame compared to what we're dealing with in AI. Artificial Intelligence involves multiple product categories (not one primary focus) developed on a non-specific set of underlying technologies - but with a big difference. In AI we're not dealing with standards per se (although the field will likely settle into specific standards someday), but rather with a large and diverse set of applied computational theory. There is a dynamic interaction between that theory and the exploration of how it might be applied in product contexts - in other words, AI is much more disruptive and much less focused than, say, E-learning is or was. It is not only redefining itself rapidly, it is also beginning to redefine nearly every other technology field or trend as well.
For example, we could easily posit near-term scenarios where AI can and likely will influence the development of E-learning solutions. AI may be used to generate dynamic assessments and to create learning paths on the fly based upon those assessments and the specific characteristics of any given learner (this may or may not be deployed within an LMS). That's just one simple example, and we can multiply it hundreds of times right now across most of the technology domains we can think of. What's perhaps more remarkable about Artificial Intelligence is how much disruption is already occurring primarily from tactical perspectives. Once AI becomes more integrative or holistic in nature, the pace of disruption could expand exponentially. Many IT trends refer to themselves as "revolutionary"; few of them really were, but AI does deserve that distinction.
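The adaptive-learning scenario just described can be sketched in a few lines. This is a deliberately simplified, hypothetical illustration - the module names, score thresholds, and functions below are all invented for this example, and a real system would use a learned model rather than fixed rules - but it shows the basic shape of "assess, then route the learner":

```python
# Hypothetical module catalog, keyed by learning path (invented names).
MODULES = {
    "remedial": ["Basics Refresher", "Guided Practice"],
    "standard": ["Core Concepts", "Applied Exercises"],
    "advanced": ["Advanced Topics", "Capstone Project"],
}

def score_assessment(answers, key):
    """Fraction of assessment items the learner answered correctly."""
    correct = sum(1 for a, k in zip(answers, key) if a == k)
    return correct / len(key)

def next_learning_path(score):
    """Map an assessment score to a learning path (illustrative thresholds)."""
    if score < 0.5:
        return MODULES["remedial"]
    elif score < 0.8:
        return MODULES["standard"]
    return MODULES["advanced"]

score = score_assessment(["b", "c", "a", "d"], ["b", "c", "a", "a"])  # 0.75
print(next_learning_path(score))  # -> ['Core Concepts', 'Applied Exercises']
```

Even this crude rule-based router could sit inside or outside an LMS, which echoes the point above: the capability (adaptive path selection) matters more than the product category it happens to ship in.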
In the next article in this series, we'll walk through a simple evaluation and selection process to arrive at a prototypical AI capability target for a generic AI Strategy.
Copyright 2020, Stephen Lahanas
"The Intelligent Enterprise" article series