Empirical vs. Analytical Decisions

I, along with many other people, have come across discussions of complex algorithms used to determine credit scores, employee satisfaction and retention, and so on, that come under scrutiny when their performance is evaluated in hindsight against "real-world" applications. Why does algorithmic engineering have the capacity to fail, given that the algorithm is designed on historical information from the very domain in which it will operate? The crux of the issue is an underestimated distinction: solutions (i.e., algorithms) are applied in "empirical" and "analytical" domains, respectively, and the two contexts are not interchangeable.

We should begin our conversation by illustrating the difference between empirical and analytical solutions. Starting from the analytical perspective, an analytic solution produces reliable results when the domain in which it operates can be classified as "closed form." What does the phrase "closed form" mean within the framework of algorithmic design? When we speak of "closed form," we are speaking of closed-form solutions with respect to the domain toward which the given algorithm is directed.

Now, "closed form" is simply a convoluted label for an algorithm whose coefficients, tied to the independent variables of its operating domain, are "easy" (i.e., non-probabilistic) to solve for. In these terms, "algorithmic design" amounts to constructing a differential equation that provides a solution to the domain of operation, where the cardinality of the solution the algorithm facilitates is minimal. Given this, what is "cardinality" in the sense of algorithmic solutions? To be brief, a minimal cardinality in the output of a given algorithmic structure is explicitly sought because the lower the cardinality, the lower the complexity of the output the algorithm provides (i.e., the algorithmic solution).
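To make the closed-form idea concrete, here is a small sketch of my own (not from the original article): the growth equation dy/dt = k·y has the exact closed-form solution y(t) = y0·e^(kt), so a "closed-form domain" lets us evaluate the answer with one deterministic formula, while a stepwise numerical estimate can only approximate it. The function names and parameters are illustrative assumptions.

```python
import math

def closed_form(y0: float, k: float, t: float) -> float:
    """Exact solution of dy/dt = k*y: one formula, no simulation, no chance."""
    return y0 * math.exp(k * t)

def euler_estimate(y0: float, k: float, t: float, steps: int = 100_000) -> float:
    """Stepwise alternative: walk through time in small increments."""
    dt = t / steps
    y = y0
    for _ in range(steps):
        y += k * y * dt  # advance by the local slope
    return y

exact = closed_form(100.0, 0.05, 10.0)   # 100 * e^0.5
approx = euler_estimate(100.0, 0.05, 10.0)
print(exact)
print(abs(exact - approx) < 0.01)  # the stepwise estimate converges to the formula
```

The point of the sketch: when a closed-form solution exists, the formula answers every future time t directly; the moment the domain's behavior becomes probabilistic, no such formula is available and we are forced into estimation.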

Distilling the prior illustration further: what does it mean when a minimal cardinality cannot be achieved for the algorithm's output in the domain of interest? It means there is no "straightforward" solution. Stated in a different light, no algorithm can guarantee predictions of future behavior within its operating domain, because the discrete (i.e., "independent") categories belonging to that domain exhibit probabilistic behavior — behavior subject to information that is realized only at a future state of the domain in which those categories reside.

Extending this statement, both to clarify and to reinforce it, we should pivot momentarily to the work of Horst Rittel and Melvin Webber on "wicked problems." In short, a wicked problem concerns a domain that evolves according to at least one independent variable; usually, the independent variables of the domain in which the tailored algorithm operates include time or cash reserves. Put another way, if a problem is categorized as "wicked," then no solution the algorithm facilitates is a "bounded" solution; such problems develop according to the set of independent variables of the domain in which the algorithm operates.

This is why algorithms operating within specific domains must be continuously augmented: the domain in which the algorithm generates a solution changes radically relative to its prior state, where each such state is "time-stamped" against the domain's independent variables.
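The need for continuous augmentation can be sketched in a few lines. This is a minimal illustration of my own, assuming a deliberately trivial "model" (the historical average) and synthetic data: when the domain drifts to a new regime, the model fit to historical data becomes badly wrong, and only refitting on recent data restores it.

```python
import random

random.seed(0)

def fit_mean(history):
    """A trivially simple model: predict the historical average."""
    return sum(history) / len(history)

# Historical regime: observations centred near 50.
historical = [random.gauss(50, 2) for _ in range(1000)]
model = fit_mean(historical)

# The domain drifts: new observations centre near 80.
drifted = [random.gauss(80, 2) for _ in range(1000)]
drifted_mean = sum(drifted) / len(drifted)

error_before = abs(model - drifted_mean)

# Continuous augmentation: refit on data from the changed domain.
model = fit_mean(drifted)
error_after = abs(model - drifted_mean)

print(error_before > 10)  # the old model is badly wrong after the drift
print(error_after < 1)    # refitting restores accuracy
```

Any real credit-scoring or retention model is vastly more complex than an average, but the failure mode is the same: the fit is "time-stamped" to the domain's prior state.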

In conclusion, algorithmic designs applied to "real-world" applications are subject to probabilistic independent variables within the domain into which the algorithm is "injected." It should therefore be understood that algorithms should only assist in a given decision, not serve as the de facto standard for making decisions on behalf of human engineering. This is not to say that humans make better decisions than machines; rather, neither machine nor human has the capacity to provide "closed-form" solutions to domains that retain probabilistic independent variables (i.e., wicked problems).

Finally, to clarify: probabilistic domains are constrained to empirical decision making, while the prototyping stage of algorithmic development (i.e., algorithm R&D) is subject to analytical decision making, where the independent variables of the prototyping process are not subject to future data; the data the algorithms use there is strictly historical.
