Can generative AI master emotional intelligence?
Welcome to AI Decoded, Fast Company's weekly LinkedIn newsletter that breaks down the most important news in the world of AI. I'm Mark Sullivan, a senior writer at Fast Company, covering emerging tech, AI, and tech policy.
This week, I'm taking a look at the next frontiers in the development of large language models, as well as the story of how Microsoft secured its future in the AI arms race by forming a partnership with OpenAI. Plus: A federal court rejects the idea of copyrighting AI-generated art.
If a friend or colleague shared this newsletter with you, you can sign up to receive it every week here. And if you have comments on this issue and/or ideas for future ones, drop me a line at sullivan@fastcompany.com, and follow me on Twitter @thesullivan.
Generative AI's next frontiers include emotional intelligence and advanced inference
Even though the tech world is still working through early kinks with generative AI, such as training large language models (LLMs) to understand when a user prompt calls for a factual answer and when it calls for a creative one, researchers are already thinking about the next frontiers of communication skills they hope to program into LLMs.
Click here for tickets to see leading AI executives and experts in conversation next month at Fast Company's Innovation Festival.
Compared to humans, LLMs are still lacking in complex cognitive and communicative skills. We humans have intuitions that take into account factors beyond the plain facts of a problem or situation. We can read between the lines of the verbal or written messages we receive. We can imply things without explicitly saying them, and understand when others are doing so. Researchers are working on ways to imbue LLMs with such capabilities. They also hope to give AIs a far better understanding of the emotional layer that influences how we humans communicate and interpret messages.
AI companies are also thinking about how to make chatbots more "agentic," that is, better at autonomously taking a set of actions to achieve a larger goal. (For example, a bot might arrange all aspects of a trip or carry out a complex stock-trading strategy.) But this raises obvious safety questions: What if a chatbot goes off to work toward a goal without a clear understanding of both the letter and the intent of the human's commands? Can chatbots be trained to recognize when they don't have a proper understanding of the human's real intentions? And, most importantly, shouldn't AI companies give chatbots the skills to fully understand the finer points of human commands before training them to act autonomously?
How Microsoft and Satya Nadella are winning big tech's AI war
In his latest feature story, Fast Company global tech editor Harry McCracken sheds light on Microsoft's early courtship of OpenAI, with insights from Microsoft CEO Satya Nadella and others who were directly involved in the deal. As McCracken shows, the two companies' relationship wasn't always smooth sailing:
Seeing a prospective customer for its Azure cloud platform, Microsoft had given the fledgling company [OpenAI] some credits for complimentary computing time. As those freebies dwindled, OpenAI began shifting its workload to Google Cloud, seemingly winding down its relationship with Microsoft before it really got underway.
By 2017, OpenAI was working on its version of the transformer models that had been developed at Google. OpenAI dramatically increased the size of the model, as well as the corpus of training data and the compute power used. If there's a "secret sauce" in OpenAI's work, it's the way the company's scientists foresaw and planned for the stunning performance increases the dramatic scale-up delivered:
After running into OpenAI CEO Sam Altman at a conference and briefly discussing the possibility of official collaboration, he [Nadella] asked Microsoft CTO Kevin Scott to visit the company and assess GPT with a dispassionate eye. "I went there definitely a little bit skeptical," Scott recalls. "And they had such excellent clarity of vision about where they thought things were headed, and some experimental data to show that it wasn't just ungrounded hypothesizing about the future: that something was really happening."
As McCracken writes, Scott immediately saw how the GPT model could enhance Microsoft products. This led Microsoft to invest an initial $1 billion in OpenAI in July 2019, which bought it "preferred partner" status for commercializing OpenAI's models. OpenAI got a cash infusion and access to ample computing power on Microsoft's Azure servers:
After that, Microsoft's GitHub began experimenting with GPT-3 to generate computer code based on plain-language prompts, and the results were astonishingly good (though not perfect). This led to GitHub's release of the "Copilot" coding assistant, now used by more than a million developers to take some of the grunt work out of coding. This was an important event because it offered proof to Microsoft that OpenAI's models could be productized. Microsoft would later adopt the "copilot" concept to brand its new GPT-driven features:
In the late summer of 2022, Microsoft executives had yet another "holy shit" experience when OpenAI engineers showed them a rough draft of its most capable LLM to date. Code-named Davinci 3 (and later called GPT-4), it generated text that was far more fluid and factual than that of its predecessors.
This was just a few months before generative AI's "big bang": the public release of ChatGPT in late November 2022. In January of this year, Microsoft locked in its priority access to the OpenAI models by acquiring an estimated 49% of the startup for a reported $10 billion. By then, numerous Microsoft teams were sprinting to build GPT-4-powered integrations for everything from Bing search to Microsoft 365.
A federal court deals another blow to AI-generated art
The U.S. legal system is beginning to hash out a jurisprudence around AI-generated art, and it looks like bad news for artists who eschew paintbrushes for Stable Diffusion.
Back in 2018, when AI artist Stephen Thaler sought to copyright an image generated by an AI tool he himself created, the Copyright Office refused, stating that the work "lacked human authorship." Now a federal court has upheld the Copyright Office's denial, saying that human authorship "is an essential part of a valid copyright claim."
Last year, AI artists appeared to have gained a victory when Adobe AI evangelist Kris Kashtanova was granted copyright protection for a comic book that contained images generated by the AI tool Midjourney. Kashtanova said the application was meant to set a copyright precedent for AI works. But the Copyright Office later revoked the copyright protection from the AI-generated images, leaving only the human-created words and layout protected.
The Copyright Office's two rejections, backed up by a federal court decision, could form the outlines of a legal doctrine that will be both cited as precedent and challenged in the many generative-art ownership cases that are sure to come.