AI & Our World: What is Artificial Intelligence?
Let's talk about a question a lot of people ask themselves: What is AI? The thing about artificial intelligence is that most people don't really know what it is. Much of what people think they know about AI is built on assumptions drawn from Hollywood cliches and politically motivated narratives, and those misconceptions often impede AI adoption. The real story is far less about an inevitable war between cyborgs and humanity and much more about advanced solutions, grounded in the age of digital transformation, that will propel us forward into the next era of technological evolution.
Origins of AI: How did it all start?
The general concept behind "artificial intelligence" dates back to antiquity, with traces found in Greek mythology's Talos, a gigantic bronze automaton with human-like intelligence that guarded the island of Crete against ill-intentioned outsiders and invaders. The idea persisted through early thinkers such as Descartes, who posited that the bodies of animals are nothing more than complex machines, and on into modern fiction like Mary Shelley's "Frankenstein."
The origin of modern artificial intelligence is generally traced to the 1950s, alongside the dawn of the computer age. In 1950, Alan Turing published a seminal paper titled "Computing Machinery and Intelligence," in which he explored the question of whether machines can think. This led to the earliest major proposal in the philosophy of artificial intelligence, the Turing Test. Today, the Turing Test serves as an investigative technique to decide whether a computer or piece of software is capable of "thinking" like a human being. Of course, "thought" is a subjective notion, so Turing's proposal set aside the word "think" and focused instead on whether a machine's observable performance could match human cognitive competency.
In 1956, not long after Turing's contributions, the term "artificial intelligence" was coined by John McCarthy as the subject of the Dartmouth Conference, for which he had prepared the "Proposal for the Dartmouth Summer Research Project on Artificial Intelligence." This was a first-of-its-kind conference devoted to artificial intelligence and is widely considered the moment "AI" was officially born and defined as "the science and engineering of making intelligent machines." It was also at this conference that the sentiment "artificial intelligence is achievable" took hold, igniting the spark that kicked off the following decades of AI research.
AI Classifications - What does it all mean?
There are scores of ways to categorize the branches and features of artificial intelligence. However, boiling that vast pool down to a few of the most commonly used terms paints a fairly complete picture for our purposes.
Machine Learning:
It's important to note that some do not consider machine learning to be AI at all, but rather purely a field of computer science. However, the term is commonly used in conjunction with artificial intelligence, so we can (more or less) consider machine learning a branch of both. Machine learning teaches systems, using structured and/or labeled data, how to absorb information and perform a specific task without being explicitly programmed for it. It is a method of data analysis built around constructing and refining models that let programs "learn" through experience and repetition. Examples of machine learning in use include image and speech recognition, financial services such as spend tracking, spam and malware email filtering, customer service chatbots, and many more.
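To make the idea concrete, here is a minimal sketch of supervised machine learning in Python, assuming the scikit-learn library is available; the messages and labels are made-up toy data, not a real spam corpus:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny, made-up labeled dataset: 1 = spam, 0 = legitimate mail.
messages = [
    "WIN a FREE prize now, click here",
    "Cheap loans, limited time offer",
    "Meeting moved to 3pm, see you there",
    "Can you review the attached report?",
]
labels = [1, 1, 0, 0]

# The model "learns" word patterns from the labeled examples...
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(messages, labels)

# ...and then classifies messages it has never seen before.
print(model.predict(["Click here for a free prize"]))       # likely [1] (spam)
print(model.predict(["Lunch tomorrow after the review?"]))  # likely [0]
```

The point is that no one writes explicit rules for what counts as spam; the model infers them from the labeled examples.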
Deep Learning:
Deep learning is a division of machine learning that employs many "layers" of neural networks, loosely modeled on the human brain, with the ability to "learn" from vast quantities of data even when that data is unstructured, unlabeled, or incomplete, often without direct human supervision. Each layer of the network carries out computations and passes its results forward, and training repeats these predictions over and over to progressively boost the precision of the results and recommendations over time. Examples of deep learning in use include digital assistants, financial fraud detection, self-driving vehicles, and many more.
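As an illustration, here is a minimal sketch of a small multi-layer ("deep") network, assuming TensorFlow/Keras is installed; the data is randomly generated toy data with a simple made-up pattern:

```python
import numpy as np
from tensorflow import keras

# Toy data: 200 samples with 8 numeric features and a made-up binary label.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8)).astype("float32")
y = (X[:, 0] + X[:, 1] > 0).astype("float32")

# A small stack of layers; "deeper" models simply add more of them.
model = keras.Sequential([
    keras.Input(shape=(8,)),
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Each pass over the data ("epoch") nudges the weights toward better predictions.
model.fit(X, y, epochs=10, verbose=0)

print(model.predict(X[:3], verbose=0))  # predicted probabilities for 3 samples
```

Real deep learning systems use far larger networks and datasets, but the layered structure is the same.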
Neural Networks:
Neural networks are structures of artificial neurons that can adjust to variable data inputs. They are composed of sequences of algorithms that seek to identify core connections within a set of data, in a procedure that loosely simulates how a human brain identifies and recognizes patterns. "Neural networks take input data, train themselves to recognize patterns found in the data, and then predict the output for a new set of similar data. Therefore, a neural network can be thought of as the functional unit of deep learning, which mimics the behavior of the human brain to solve complex data-driven problems," stated Pratik Shukla and Roberto Iriondo for Towards AI, a Medium publication.
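The mechanics at the level of a single artificial neuron can be shown in a few lines. This is a bare-bones sketch using only NumPy, with a toy logical-AND pattern standing in for real data:

```python
import numpy as np

# Toy data: inputs and labels for a logical AND pattern (illustrative only).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A single artificial neuron: weights and a bias, starting at random values.
rng = np.random.default_rng(0)
w = rng.normal(size=2)
b = 0.0

# Repeated passes over the data nudge the weights toward lower error.
for _ in range(5000):
    pred = sigmoid(X @ w + b)           # forward pass: weighted sum + activation
    error = pred - y                    # how far off each prediction is
    w -= 0.1 * X.T @ error / len(y)     # gradient step for the weights
    b -= 0.1 * error.mean()             # gradient step for the bias

print(np.round(sigmoid(X @ w + b), 2))  # predictions approach [0, 0, 0, 1]
```

A full neural network is essentially many of these neurons arranged in layers and trained the same way.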
Symbolic AI:
Symbolic artificial intelligence is the practice of directly encoding human knowledge and expertise into a solution, system, or machine by employing human-intelligible symbols that characterize real-world concepts and rationale, then constructing "rules" that operate on those symbols. This AI technique promotes a human-readable, rule-based approach that produces transparent recommendations more easily understood by the people using the solution. One example of a more symbolic AI approach is a conversational chatbot that uses Natural Language Processing (NLP). "Natural Language Processing or NLP is a field of Artificial Intelligence that gives the machines the ability to read, understand and derive meaning from human languages," stated Diego Lopez Yse for Towards Data Science, a Medium publication.
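A stripped-down sketch of the rule-based idea might look like the following; the facts and rules here are entirely hypothetical and exist only to show how human-readable symbols can drive a transparent recommendation:

```python
# Human-readable facts about the current situation (hypothetical example).
facts = {"pressure": "high", "valve_status": "open", "temperature": "normal"}

# If-then rules written in terms of those same symbols.
rules = [
    ({"pressure": "high", "valve_status": "open"},
     "Close the relief valve and re-check pressure."),
    ({"temperature": "high"},
     "Reduce load and inspect the cooling system."),
]

def recommend(facts, rules):
    for conditions, action in rules:
        if all(facts.get(k) == v for k, v in conditions.items()):
            # The matched conditions double as a human-readable explanation.
            return action, conditions
    return "No rule fired; escalate to a human expert.", {}

action, because = recommend(facts, rules)
print(action, "| because:", because)
```

Because the rules are written in plain terms, anyone reviewing the output can see exactly which conditions triggered the recommendation.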
Cognitive AI:
Cognitive AI is arguably the most advanced form of artificial intelligence to date. It is a hybrid of conventional numeric AI (machine learning, neural networks, and deep learning) used in conjunction with symbolic AI to enable a system to produce transparent recommendations. Cognitive AI is an intelligent system that comprehends large quantities of variable data while applying situational awareness and codified expert human knowledge and best practices to identify problems and recommend solutions to real-world challenges. "This unique hybrid AI combines the best of numerical/statistical approaches with the best of symbolic/logical techniques to become greater than the sum of its parts," stated VentureBeat and Beyond Limits in a 2019 VBLab article, "Beyond Conventional AI: More Intelligent, More Explainable AI."
In contrast to the "black box" issue with conventional AI approaches, Cognitive AI systems are Explainable AI solutions that can disclose the reasoning behind their recommendations. They can show human users detailed information about the evidence, contingencies, confidence levels, and ambiguities behind their decision-making process through intelligible audit trails. The key to a successful Cognitive AI tool is to build a primary set of models and propose hypothetical extensions, resulting in systems that combine encoded human expert knowledge with historical and other external data. Such systems can model hypothetical paths that predict problematic scenarios and then recommend remediation plans, even when data inputs are unstructured, unlabeled, or missing.
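To illustrate the hybrid idea only (this is not a description of any vendor's actual system), here is a toy Python sketch in which a numeric score and symbolic expert rules are combined and every step is recorded in an audit trail; all names, thresholds, and readings are made up:

```python
def numeric_risk_score(readings):
    # Stand-in for a trained ML model: here, just a simple weighted average.
    weights = {"vibration": 0.6, "temperature": 0.4}
    return sum(weights[k] * v for k, v in readings.items())

def symbolic_review(score, readings):
    # Expert-written rules review the numeric score and log their reasoning.
    trail = [f"numeric model score = {score:.2f}"]
    if score > 0.7 and readings["vibration"] > 0.8:
        trail.append("rule fired: high score AND high vibration -> inspect bearing")
        return "Schedule bearing inspection", trail
    trail.append("no expert rule fired -> continue monitoring")
    return "Continue monitoring", trail

readings = {"vibration": 0.9, "temperature": 0.6}
decision, audit_trail = symbolic_review(numeric_risk_score(readings), readings)

print(decision)
for step in audit_trail:
    print(" -", step)  # the audit trail explains *why* the decision was made
```

The audit trail is the point: instead of an opaque score, the user sees both the numeric evidence and the expert rule that turned it into a recommendation.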
Advanced AI Applications - How can they help solve real challenges in the real world?
More advanced enterprise AI software and cognitive solutions have been proving their value and making a permanent mark on the world, from industrial AI to power, natural resources, and renewable initiatives. "Enterprises are increasingly deploying AI systems to monitor IoT devices in far-flung environments where humans are not always present, and internet connectivity is spotty at best; think highway cams, drones that survey farmlands, or an oil rig infrastructure in the middle of the ocean," said Beyond Limits' CEO AJ Abdallat in an insideBIGDATA article, "AI Hype: Why the Reality Often Falls Short of Expectations." "One-quarter of organizations with established IoT strategies are also investing in AI."
Enterprise-level AI has been demonstrating its power to transform businesses for the better by helping leaders extract more value from their data and production processes, streamlining operations at every level of the organization. Industries and businesses are realizing more than just a return on AI investments; they are experiencing actual revenue from artificial intelligence investments (RAI).
In a recent Forbes Tech Council article, AJ Abdallat explained how "AI is generating major revenue in major industries." In the article, Abdallat referenced a 2019 report by Morgan Stanley to illustrate that fact and spotlighted the following examples:
"-Machine learning is analyzing wind farms to make power predictions 36 hours in advance, enabling providers to make supply commitments to power grids a full day before delivery and increase the value of wind energy output by 20%.
-In Australia, mining companies are using autonomous trucks and drilling technology to cut mining costs, improve worker safety and boost productivity by 20%.
-If U.S. utility companies used AI-powered asset management software, costs could be cut by $23 billion annually, reducing outage frequency, overall footprints, installation times and copper cabling usage.
-A European automaker built a "fully digitized" factory and significantly reduced manufacturing time while boosting productivity by 10%."
The Future of AI: Where can it take us?
Outside of purely business-centric purposes, powerful AI solutions have the potential to help solve some of Earth's most complex challenges. For example, the healthcare industry has been discovering AI solutions fueled by historical medical data, lab work, literature, and expert human knowledge. Recently, powerful AI has been designed to aid doctors, nurses, and other leading industry professionals by reducing risk and improving patient outcomes at the point of care.
Lately, forecasting models such as Beyond Limits' Coronavirus Dynamic Predictive Model have been designed to help provide some relief in humanity's fight against the unexpected arrival of the COVID-19 pandemic. "This moment in time has uncovered just how crucial AI solutions are for the future of healthcare. Rapid changes have made it difficult to manage the pandemic's spread and determine what the industry will look like after coming through the other side," said AJ Abdallat in a recent Forbes Tech Council article. "Regardless of complications, it's still the responsibility of leadership teams to use every tool at their disposal to manage the pandemic and be better prepared for the future. It matters that legitimate attempts are made by all, for the good of all, to pursue pioneering solutions in the face of this global challenge."
Another example of AI being used for good is the exploration of applications designed to aid in the fight against climate change, generating hope for a more sustainable, renewable future in which humanity's carbon footprint looks a lot less discouraging. "The suggested use-cases are varied, ranging from using AI and satellite imagery to better monitor deforestation, to developing new materials that can replace steel and cement (the production of which accounts for nine percent of global greenhouse gas emissions)," wrote James Vincent in a 2019 article for The Verge on AI and climate change.
A 2019 article written by Simon Greenman for Towards Data Science, a Medium publication, also discusses the capacity for AI to "improve manufacturing efficiency by digitising, connecting and analysing end to end manufacturing processes. For example many global manufacturers are using predictive AI modelling to make turbine combustion more efficient, reduce errors and energy wastage on the production line, and improve production efficiency with advanced robotics."
The potential use cases for artificial intelligence solutions are seemingly endless. While AI may seem nebulous from an outsider's perspective, it is ever-present throughout our surroundings and day-to-day lives, whether or not we are actively aware of it. What used to be perceived as merely an intriguing plot point for sci-fi narratives is now an inevitable, necessary, and welcomed reality. If you're still asking yourself, "What is AI?" it may not be far-fetched to boil it down to this statement: artificial intelligence is humanity's focal point in our next stage of digital transformation and technological evolution.