{"id":19383,"date":"2024-08-13T11:25:17","date_gmt":"2024-08-13T11:25:17","guid":{"rendered":"https:\/\/uxmag.com\/?p=19383"},"modified":"2024-12-26T11:46:44","modified_gmt":"2024-12-26T11:46:44","slug":"when-words-cannot-describe-designing-for-ai-beyond-conversational-interfaces","status":"publish","type":"post","link":"https:\/\/uxmag.com\/articles\/when-words-cannot-describe-designing-for-ai-beyond-conversational-interfaces","title":{"rendered":"When Words Cannot Describe: Designing For AI Beyond Conversational Interfaces"},"content":{"rendered":"\n<p><em>As Artificial Intelligence evolves the computing paradigm, designers have an opportunity to craft more intuitive user interfaces. Text-based Large Language Models unlock most of the new capabilities, leading many to suggest a shift from graphical interfaces to conversational ones like a chatbot is necessary. However, plenty of evidence suggests conversation is a poor interface for many interaction patterns. Maximillian Piras examines how the latest AI capabilities can reshape the future of human-computer interaction beyond conversation alone.<\/em><\/p>\n\n\n\n<p>Few technological innovations can completely change the way we interact with computers. Lucky for us, it seems we\u2019ve won front-row seats to the unfolding of the next paradigm shift.<\/p>\n\n\n\n<p>These shifts tend to unlock a new abstraction layer to hide the working details of a subsystem. Generalizing details allows our complex systems to appear simpler &amp; more intuitive. This streamlines coding programs for computers as well as designing the interfaces to interact with them.<\/p>\n\n\n\n<p>The&nbsp;<strong>Command Line Interface<\/strong>, for instance, created an abstraction layer to enable interaction through a stored program. 
This hid the subsystem details once exposed in earlier computers that were only programmable by inputting 1s &amp; 0s through switches.<\/p>\n\n\n\n<p><strong>Graphical User Interfaces (GUIs)<\/strong>&nbsp;further abstracted this notion by allowing us to manipulate computers through visual metaphors. These abstractions made computers accessible to a mainstream audience of non-technical users.<\/p>\n\n\n\n<p>Despite these advances, we still haven\u2019t found a&nbsp;<em>perfectly<\/em>&nbsp;intuitive interface \u2014 the troves of support articles across the web make that evident. Yet recent advances in AI have convinced many technologists that the next evolutionary cycle of computing is upon us.<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" width=\"800\" height=\"600\" src=\"https:\/\/uxmag.com\/wp-content\/uploads\/2024\/07\/1-smashing-abstraction-intro.gif\" alt=\"\" class=\"wp-image-19384\"\/><figcaption class=\"wp-element-caption\">Layers of interface abstraction, bottom to top: Command Line Interfaces, Graphical User Interfaces, &amp; AI-powered Conversational Interfaces. (Image source:&nbsp;<a href=\"https:\/\/www.maximillian.nyc\/\" target=\"_blank\" rel=\"noreferrer noopener\">Maximillian Piras<\/a>) (<a href=\"https:\/\/files.smashing.media\/articles\/designing-ai-beyond-conversational-interfaces\/1-smashing-abstraction-intro.gif\" target=\"_blank\" rel=\"noreferrer noopener\">Large preview<\/a>)<\/figcaption><\/figure>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-the-next-layer-of-interface-abstraction\">The Next Layer Of Interface Abstraction<\/h2>\n\n\n\n<p>A branch of machine learning called&nbsp;<strong>generative AI<\/strong>&nbsp;drives the bulk of recent innovation. 
It leverages pattern recognition in datasets to establish probabilistic distributions that enable novel constructions of text, media, &amp; code.&nbsp;<a href=\"https:\/\/www.gatesnotes.com\/The-Age-of-AI-Has-Begun\" target=\"_blank\" rel=\"noreferrer noopener\">Bill Gates believes<\/a>&nbsp;it\u2019s \u201cthe most important advance in technology since the graphical user interface\u201d because it can make controlling computers even easier. A newfound ability to interpret unstructured data, such as natural language, unlocks new inputs &amp; outputs to enable&nbsp;<a href=\"https:\/\/twitter.com\/AviSchiffmann\/status\/1708439854005321954\" target=\"_blank\" rel=\"noreferrer noopener\">novel<\/a>&nbsp;<a href=\"https:\/\/www.youtube.com\/watch?v=9lNIwOOMVHk\" target=\"_blank\" rel=\"noreferrer noopener\">form<\/a>&nbsp;<a href=\"https:\/\/www.youtube.com\/watch?v=Yla0f5JZg78\" target=\"_blank\" rel=\"noreferrer noopener\">factors<\/a>.<\/p>\n\n\n\n<p>Now our universe of information can be instantly invoked through an interface as intuitive as talking to another human. These are the computers we\u2019ve dreamed of in science fiction, akin to systems like Data from Star Trek. Perhaps computers up to this point were only prototypes &amp; we\u2019re now getting to the actual product launch. 
Imagine it this way:&nbsp;<strong>if building the internet was laying down the tracks, AIs could be the trains transporting all of our information at breakneck speed<\/strong>&nbsp;&amp; we\u2019re about to see what happens when they barrel into town.<\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p><\/p>\n<cite><em>\u201cSoon the pre-AI period will seem as distant as the days when using a computer meant typing at a C:&gt; prompt rather than tapping on a screen.\u201d<\/em><br><br><em>\u2014 Bill Gates in \u201c<\/em><a href=\"https:\/\/www.gatesnotes.com\/The-Age-of-AI-Has-Begun\" target=\"_blank\" rel=\"noreferrer noopener\">The Age of AI Has Begun<\/a><em>\u201d<\/em><\/cite><\/blockquote>\n\n\n\n<p>If everything is about to change, so must the mental models of software designers. Just as&nbsp;<a href=\"https:\/\/www.lukew.com\/ff\/entry.asp?933\" target=\"_blank\" rel=\"noreferrer noopener\">Luke Wroblewski<\/a>&nbsp;once popularized mobile-first design, the next zeitgeist is likely AI-first. Only through understanding AI\u2019s constraints &amp; capabilities can we craft delight. Its influence on the discourse of interface evolution has already begun.<\/p>\n\n\n\n<p>Large Language Models (LLMs), for instance, are a type of AI utilized in many new applications &amp; their text-based nature leads many to believe a conversational interface, such as a chatbot, is a fitting form for the future. The notion that AI is something you talk to&nbsp;<a href=\"https:\/\/www.wired.com\/2013\/03\/conversational-user-interface\/\" target=\"_blank\" rel=\"noreferrer noopener\">has been permeating the industry for years<\/a>. 
Robb Wilson, the co-owner of UX Magazine, calls conversation \u201cthe infinitely scalable interface\u201d in his book&nbsp;<em>The Age of Invisible Machines<\/em>&nbsp;(2022).&nbsp;<a href=\"https:\/\/www.designerfund.com\/blog\/how-figma-midjourney-and-databricks-harness-ai-in-design\/\" target=\"_blank\" rel=\"noreferrer noopener\">Noah Levin, Figma\u2019s VP of Product Design, contends<\/a>&nbsp;that \u201cit\u2019s a very intuitive thing to learn how to talk to something.\u201d Even a herald of GUIs such as&nbsp;<a href=\"https:\/\/www.gatesnotes.com\/The-Age-of-AI-Has-Begun\" target=\"_blank\" rel=\"noreferrer noopener\">Bill Gates posits<\/a>&nbsp;that \u201cour main way of controlling a computer will no longer be pointing and clicking.\u201d<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" width=\"960\" height=\"540\" src=\"https:\/\/uxmag.com\/wp-content\/uploads\/2024\/07\/2-smashing-abstraction-mscopilot.gif\" alt=\"\" class=\"wp-image-19385\"\/><figcaption class=\"wp-element-caption\">Microsoft Copilot is a new conversational AI feature being integrated across their office suite (Image source:&nbsp;<a href=\"https:\/\/www.microsoft.com\/en-us\/microsoft-365\/blog\/2023\/03\/16\/introducing-microsoft-365-copilot-a-whole-new-way-to-work\/\" target=\"_blank\" rel=\"noreferrer noopener\">Microsoft<\/a>) (<a href=\"https:\/\/files.smashing.media\/articles\/designing-ai-beyond-conversational-interfaces\/2-smashing-abstraction-mscopilot.gif\" target=\"_blank\" rel=\"noreferrer noopener\">Large preview<\/a>)<\/figcaption><\/figure>\n\n\n\n<p>The hope is that conversational computers will flatten learning curves.&nbsp;<a href=\"https:\/\/www.rabbit.tech\/keynote\" target=\"_blank\" rel=\"noreferrer noopener\">Jesse Lyu, the founder of Rabbit, asserts<\/a>&nbsp;that a natural language approach will be \u201cso intuitive that you don\u2019t even need to learn how to use it.\u201d<\/p>\n\n\n\n<p>After all, it\u2019s not as if Data from 
Star Trek came with an instruction manual or onboarding tutorial. From this perspective, the&nbsp;<a href=\"https:\/\/www.shopify.com\/partners\/blog\/conversational-interfaces\" target=\"_blank\" rel=\"noreferrer noopener\">evolutionary tale<\/a>&nbsp;of conversational interfaces superseding GUIs seems logical &amp; echoes the earlier shift away from command lines. But others have&nbsp;<a href=\"https:\/\/wattenberger.com\/thoughts\/boo-chatbots\" target=\"_blank\" rel=\"noreferrer noopener\">opposing opinions<\/a>, with some, such as&nbsp;<a href=\"https:\/\/maggieappleton.com\/lm-sketchbook\" target=\"_blank\" rel=\"noreferrer noopener\">Maggie Appleton<\/a>, going so far as to call conversational interfaces like chatbots \u201cthe lazy solution.\u201d<\/p>\n\n\n\n<p>This might seem like a schism at first, but it\u2019s more a symptom of a simplistic framing of interface evolution. Command lines are far from extinct; technical users still prefer them for their greater flexibility &amp; efficiency. For use cases like software development or automation scripting, the added abstraction layer in graphical no-code tools can act as a barrier rather than a bridge.<\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p><\/p>\n<cite><a href=\"https:\/\/twitter.com\/share?text=%0aGUIs%20were%20revolutionary%20but%20not%20a%20panacea.%20Yet%20there%20is%20ample%20research%20to%20suggest%20conversational%20interfaces%20won%e2%80%99t%20be%20one,%20either.%20For%20certain%20interactions,%20they%20can%20decrease%20usability,%20increase%20cost,%20&amp;%20introduce%20security%20risk%20relative%20to%20GUIs.%0a%0a&amp;url=https:\/\/smashingmagazine.com%2f2024%2f02%2fdesigning-ai-beyond-conversational-interfaces%2f\" target=\"_blank\" rel=\"noreferrer noopener\">GUIs were revolutionary but not a panacea. Yet there is ample research to suggest conversational interfaces won\u2019t be one, either. 
For certain interactions, they can decrease usability, increase cost, &amp; introduce security risk relative to GUIs.<\/a><\/cite><\/blockquote>\n\n\n\n<p><strong>So, what is the right interface for artificially intelligent applications?<\/strong>&nbsp;This article aims to inform that design decision by contrasting the capabilities &amp; constraints of conversation as an interface.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-connecting-the-pixels\">Connecting The Pixels<\/h2>\n\n\n\n<p>We\u2019ll begin with some historical context, as the key to knowing the future often starts with looking at the past. Conversational interfaces feel new, but we\u2019ve been able to chat with computers for decades.<\/p>\n\n\n\n<p>Joseph Weizenbaum invented the first chatbot,&nbsp;<a href=\"https:\/\/en.wikipedia.org\/wiki\/ELIZA\" target=\"_blank\" rel=\"noreferrer noopener\">ELIZA<\/a>, during an MIT experiment in 1966. This laid the foundation for the following generations of language models to come, from voice assistants like Alexa to those annoying phone tree menus. Yet the majority of chatbots were seldom put to use beyond&nbsp;<a href=\"https:\/\/www.theverge.com\/2021\/12\/23\/22851451\/amazon-alexa-by-the-way-use-case-functionality-plateaued\" target=\"_blank\" rel=\"noreferrer noopener\">basic tasks like setting timers<\/a>.<\/p>\n\n\n\n<p>It seemed most consumers weren\u2019t that excited to converse with computers after all. But something changed last year. 
Somehow we went from&nbsp;<a href=\"https:\/\/www.cnet.com\/tech\/computing\/why-were-all-obsessed-with-the-mind-blowing-chatgpt-ai-chatbot\/\" target=\"_blank\" rel=\"noreferrer noopener\">CNET reporting<\/a>&nbsp;that \u201c72% of people found chatbots to be a waste of time\u201d to ChatGPT gaining&nbsp;<a href=\"https:\/\/techcrunch.com\/2023\/11\/06\/openais-chatgpt-now-has-100-million-weekly-active-users\/\" target=\"_blank\" rel=\"noreferrer noopener\">100 million weekly active users<\/a>.<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" width=\"800\" height=\"481\" src=\"https:\/\/uxmag.com\/wp-content\/uploads\/2024\/07\/3-abstraction-eliza.png\" alt=\"\" class=\"wp-image-19386\" srcset=\"https:\/\/uxmag.com\/wp-content\/uploads\/2024\/07\/3-abstraction-eliza.png 800w, https:\/\/uxmag.com\/wp-content\/uploads\/2024\/07\/3-abstraction-eliza-300x180.png 300w, https:\/\/uxmag.com\/wp-content\/uploads\/2024\/07\/3-abstraction-eliza-768x462.png 768w\" sizes=\"(max-width: 800px) 100vw, 800px\" \/><figcaption class=\"wp-element-caption\">A conversation with the first chatbot, ELIZA, invented in 1966. (Image source:&nbsp;<a href=\"https:\/\/en.wikipedia.org\/wiki\/ELIZA\" target=\"_blank\" rel=\"noreferrer noopener\">Wikipedia<\/a>) (<a href=\"https:\/\/files.smashing.media\/articles\/designing-ai-beyond-conversational-interfaces\/3-abstraction-eliza.png\" target=\"_blank\" rel=\"noreferrer noopener\">Large preview<\/a>)<\/figcaption><\/figure>\n\n\n\n<p>What took chatbots from arid to astonishing? Most assign credit to&nbsp;<a href=\"https:\/\/cdn.openai.com\/research-covers\/language-unsupervised\/language_understanding_paper.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">OpenAI\u2019s 2018 invention<\/a>&nbsp;(PDF) of the&nbsp;<strong>Generative Pre-trained Transformer (GPT)<\/strong>. These are a new type of LLM with significant improvements in natural language understanding. 
Yet, at the core of a GPT is the earlier&nbsp;<a href=\"https:\/\/arxiv.org\/pdf\/1706.03762.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">innovation of the transformer architecture introduced in 2017<\/a>&nbsp;(PDF). This architecture enabled the parallel processing required to capture long-term context around natural language inputs. Diving deeper, this architecture is only possible thanks to the&nbsp;<a href=\"https:\/\/arxiv.org\/pdf\/1409.0473.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">attention mechanism introduced in 2014<\/a>&nbsp;(PDF). This enabled the selective weighting of an input\u2019s different parts.<\/p>\n\n\n\n<p>Through this assemblage of complementary innovations, conversational interfaces now seem to be capable of competing with GUIs on a wider range of tasks. It took a surprisingly similar path to unlock GUIs as a viable alternative to command lines. Of course, it required hardware like a mouse to capture user signals beyond keystrokes &amp; screens of adequate resolution. However, researchers found the missing software ingredient years later with the invention of bitmaps.<\/p>\n\n\n\n<p>Bitmaps allowed for complex pixel patterns that earlier vector displays struggled with. Ivan Sutherland\u2019s Sketchpad, for instance, was the inaugural GUI but couldn\u2019t support concepts like overlapping windows. IEEE Spectrum\u2019s&nbsp;<a href=\"https:\/\/spectrum.ieee.org\/graphical-user-interface\" target=\"_blank\" rel=\"noreferrer noopener\">Of Mice and Menus<\/a>&nbsp;(1989) details the progress that led to the bitmap\u2019s invention by Alan Kay\u2019s group at Xerox PARC. 
This new technology enabled the revolutionary&nbsp;<a href=\"https:\/\/en.wikipedia.org\/wiki\/WIMP_(computing)\" target=\"_blank\" rel=\"noreferrer noopener\">WIMP (windows, icons, menus, and pointers)<\/a>&nbsp;paradigm that helped onboard an entire generation to personal computers through intuitive visual metaphors.<\/p>\n\n\n\n<p>Computing no longer required a preconceived set of steps at the outset. It may seem trivial in hindsight, but the presenters were already alluding to an artificially intelligent system during&nbsp;<a href=\"https:\/\/www.youtube.com\/watch?v=6orsmFndx_o\" target=\"_blank\" rel=\"noreferrer noopener\">Sketchpad\u2019s MIT demo<\/a>&nbsp;in 1963. This was an&nbsp;<strong>inflection point transforming an elaborate calculating machine into an exploratory tool<\/strong>. Designers could now craft interfaces for experiences where a need to discover eclipsed the need for flexibility &amp; efficiency offered by command lines.<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" width=\"800\" height=\"444\" src=\"https:\/\/uxmag.com\/wp-content\/uploads\/2024\/07\/5-abstraction-susankare.png\" alt=\"\" class=\"wp-image-19388\" srcset=\"https:\/\/uxmag.com\/wp-content\/uploads\/2024\/07\/5-abstraction-susankare.png 800w, https:\/\/uxmag.com\/wp-content\/uploads\/2024\/07\/5-abstraction-susankare-300x167.png 300w, https:\/\/uxmag.com\/wp-content\/uploads\/2024\/07\/5-abstraction-susankare-768x426.png 768w\" sizes=\"(max-width: 800px) 100vw, 800px\" \/><figcaption class=\"wp-element-caption\">Susan Kare\u2019s early sketch for the pointer icon in Apple\u2019s GUI. 
(Image source: <a href=\"https:\/\/kareprints.com\/\" target=\"_blank\" rel=\"noreferrer noopener\">Susan Kare<\/a>) (<a href=\"https:\/\/files.smashing.media\/articles\/designing-ai-beyond-conversational-interfaces\/5-abstraction-susankare.png\" target=\"_blank\" rel=\"noreferrer noopener\">Large preview<\/a>)<\/figcaption><\/figure>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-parallel-paradigms\">Parallel Paradigms<\/h2>\n\n\n\n<p>Novel adjustments to existing technology made each new interface viable for mainstream usage \u2014 the cherry on top of a sundae, if you will. In both cases, the foundational systems were already available, but a different data processing decision made the output meaningful enough to attract a mainstream audience beyond technologists.<\/p>\n\n\n\n<p>With bitmaps, GUIs can organize pixels into a grid sequence to create complex skeuomorphic structures. With GPTs, conversational interfaces can organize unstructured datasets to create responses with human-like (or greater) intelligence.<\/p>\n\n\n\n<p>The prototypical interfaces of both paradigms were invented in the 1960s, then saw a massive delta in their development timelines \u2014 a case study unto itself. Now we find ourselves at another&nbsp;<strong>inflection point: in addition to calculating machines &amp; exploratory tools, computers can act as life-like entities<\/strong>.<\/p>\n\n\n\n<p>But which of our needs call for conversational interfaces over graphical ones? We see a theoretical solution to our need for companionship in the movie&nbsp;<em>Her<\/em>, where the protagonist falls in love with his digital assistant. But what is the benefit to those of us who are content with our organic relationships? We can look forward to validating the assumption that&nbsp;<strong>conversation is a more intuitive interface<\/strong>. 
It seems plausible because a few core components of the WIMP paradigm have well-documented usability issues.<\/p>\n\n\n\n<p><a href=\"https:\/\/www.nngroup.com\/articles\/icon-usability\/\" target=\"_blank\" rel=\"noreferrer noopener\">Nielsen Norman Group<\/a>&nbsp;reports that cultural differences make universal recognition of icons rare \u2014 menus trend towards an unusable mess with the inevitable addition of complexity over time. Conversational interfaces&nbsp;<em>appear<\/em>&nbsp;more usable because you can just tell the system when you\u2019re confused! But as we\u2019ll see in the next sections, they have their fair share of usability issues as well.<\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p><\/p>\n<cite><a href=\"https:\/\/twitter.com\/share?text=%0aBy%20replacing%20menus%20with%20input%20fields,%20we%20must%20wonder%20if%20we%e2%80%99re%20trading%20one%20set%20of%20usability%20problems%20for%20another.%0a&amp;url=https:\/\/smashingmagazine.com%2f2024%2f02%2fdesigning-ai-beyond-conversational-interfaces%2f\" target=\"_blank\" rel=\"noreferrer noopener\">By replacing menus with input fields, we must wonder if we\u2019re trading one set of usability problems for another.<\/a><\/cite><\/blockquote>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-the-cost-of-conversation\">The Cost of Conversation<\/h2>\n\n\n\n<p>Why are conversational interfaces so popular in science fiction movies? In a&nbsp;<a href=\"https:\/\/rhizome.org\/editorial\/2014\/feb\/03\/ill-send-os-world-her-product-spec\/\" target=\"_blank\" rel=\"noreferrer noopener\">Rhizome essay<\/a>, Martine Syms theorizes that they make \u201cfor more cinematic interaction and a leaner production.\u201d This same cost\/benefit applies to app development as well. Text completion delivered via written or spoken word is the core capability of an LLM. 
This makes conversation the simplest package for this capability from a design &amp; engineering perspective.<\/p>\n\n\n\n<p>Linus Lee, a prominent AI Research Engineer,&nbsp;<a href=\"https:\/\/thesephist.com\/posts\/latent\/\" target=\"_blank\" rel=\"noreferrer noopener\">characterizes<\/a>&nbsp;it as \u201cexposing the algorithm\u2019s raw interface.\u201d Since the interaction pattern &amp; components are already largely defined, there isn\u2019t much more to invent \u2014 everything can get thrown into a chat window.<\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p><\/p>\n<cite><em>\u201cIf you\u2019re an engineer or designer tasked with harnessing the power of these models into a software interface, the easiest and most natural way to \u201cwrap\u201d this capability into a UI would be a conversational interface\u201d<\/em><br><br><em>\u2014 Linus Lee in&nbsp;<\/em><a href=\"https:\/\/thesephist.com\/posts\/latent\/\" target=\"_blank\" rel=\"noreferrer noopener\">Imagining Better Interfaces to Language Models<\/a><\/cite><\/blockquote>\n\n\n\n<p>This is further validated by&nbsp;<a href=\"https:\/\/www.theatlantic.com\/technology\/archive\/2023\/11\/sam-altman-open-ai-chatgpt-chaos\/676050\/?gift=bQgJMMVzeo8RHHcE1_KM0WVuODpKll0A708pOI0Ple4&amp;utm_source=copy-link&amp;utm_medium=social&amp;utm_campaign=share\" target=\"_blank\" rel=\"noreferrer noopener\">The Atlantic\u2019s reporting on ChatGPT\u2019s launch<\/a>&nbsp;as a \u201clow-key research preview.\u201d OpenAI\u2019s hesitance to frame it as a product suggests a lack of confidence in the user experience. 
The internal expectation was so low that employees\u2019 highest guess on first-week adoption was 100,000 users (90% shy of the actual number).<\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p><\/p>\n<cite><a href=\"https:\/\/twitter.com\/share?text=%0aConversational%20interfaces%20are%20cheap%20to%20build,%20so%20they%e2%80%99re%20a%20logical%20starting%20point,%20but%20you%20get%20what%20you%20pay%20for.%20If%20the%20interface%20doesn%e2%80%99t%20fit%20the%20use%20case,%20downstream%20UX%20debt%20can%20outweigh%20any%20upfront%20savings.%0a&amp;url=https:\/\/smashingmagazine.com%2f2024%2f02%2fdesigning-ai-beyond-conversational-interfaces%2f\" target=\"_blank\" rel=\"noreferrer noopener\">Conversational interfaces are cheap to build, so they\u2019re a logical starting point, but you get what you pay for. If the interface doesn\u2019t fit the use case, downstream UX debt can outweigh any upfront savings.<\/a><\/cite><\/blockquote>\n\n\n\n<figure class=\"wp-block-image size-large\"><img decoding=\"async\" width=\"1024\" height=\"768\" src=\"https:\/\/uxmag.com\/wp-content\/uploads\/2024\/07\/7-abstraction-llmwrapper-1024x768.png\" alt=\"\" class=\"wp-image-19390\" srcset=\"https:\/\/uxmag.com\/wp-content\/uploads\/2024\/07\/7-abstraction-llmwrapper-1024x768.png 1024w, https:\/\/uxmag.com\/wp-content\/uploads\/2024\/07\/7-abstraction-llmwrapper-300x225.png 300w, https:\/\/uxmag.com\/wp-content\/uploads\/2024\/07\/7-abstraction-llmwrapper-768x576.png 768w, https:\/\/uxmag.com\/wp-content\/uploads\/2024\/07\/7-abstraction-llmwrapper-1536x1152.png 1536w, https:\/\/uxmag.com\/wp-content\/uploads\/2024\/07\/7-abstraction-llmwrapper-702x526.png 702w, https:\/\/uxmag.com\/wp-content\/uploads\/2024\/07\/7-abstraction-llmwrapper.png 1600w\" sizes=\"(max-width: 1024px) 100vw, 1024px\" \/><figcaption class=\"wp-element-caption\">A visualization of how easy it is to wrap an LLM\u2019s raw output into a conversational interface. 
(Image source:&nbsp;<a href=\"https:\/\/www.maximillian.nyc\/\" target=\"_blank\" rel=\"noreferrer noopener\">Maximillian Piras<\/a>) (<a href=\"https:\/\/files.smashing.media\/articles\/designing-ai-beyond-conversational-interfaces\/7-abstraction-llmwrapper.png\" target=\"_blank\" rel=\"noreferrer noopener\">Large preview<\/a>)<\/figcaption><\/figure>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-forgotten-usability-principles\">Forgotten Usability Principles<\/h2>\n\n\n\n<p><a href=\"https:\/\/www.bloomberg.com\/news\/articles\/1998-05-25\/steve-jobs-theres-sanity-returning\" target=\"_blank\" rel=\"noreferrer noopener\">Steve Jobs once said<\/a>, \u201cPeople don\u2019t know what they want until you show it to them.\u201d Applying this thinking to interfaces echoes a usability evaluation called&nbsp;<em>discoverability<\/em>.&nbsp;<a href=\"https:\/\/www.nngroup.com\/articles\/navigation-ia-tests\/\" target=\"_blank\" rel=\"noreferrer noopener\">Nielsen Norman Group<\/a>&nbsp;defines it as a user\u2019s ability to \u201cencounter new content or functionality that they were not aware of.\u201d<\/p>\n\n\n\n<p>A well-designed interface should help users discover what features exist. The interfaces of many popular generative AI applications today revolve around an input field in which a user can type in anything to prompt the system. The problem is that it\u2019s often unclear what a user&nbsp;<em>should<\/em>&nbsp;type in to get ideal output. 
Ironically, a theoretical&nbsp;<a href=\"https:\/\/www.forbes.com\/sites\/forbesagencycouncil\/2023\/07\/20\/generative-ai-and-solving-the-blank-page-problem\/\" target=\"_blank\" rel=\"noreferrer noopener\">solution to writer\u2019s block<\/a>&nbsp;may have a blank page problem itself.<\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p><\/p>\n<cite><em>\u201cI think AI has a problem with these missing user interfaces, where, for the most part, they just give you a blank box to type in, and then it\u2019s up to you to figure out what it might be able to do.\u201d<\/em><br><br><em>\u2014 Casey Newton on&nbsp;<\/em><a href=\"https:\/\/www.nytimes.com\/2023\/10\/27\/podcasts\/hardfork-meta-lawsuit-mkbhd-dalle.html\" target=\"_blank\" rel=\"noreferrer noopener\">Hard Fork Podcast<\/a><\/cite><\/blockquote>\n\n\n\n<p>Conversational interfaces excel at mimicking human-to-human interaction but can fall short elsewhere. A popular image generator named Midjourney, for instance, only supported text input at first but is now&nbsp;<a href=\"https:\/\/www.zdnet.com\/article\/later-discord-midjourney-ai-tool-is-moving-to-dedicated-website\/\" target=\"_blank\" rel=\"noreferrer noopener\">moving towards a GUI<\/a>&nbsp;for \u201cgreater ease of use.\u201d<\/p>\n\n\n\n<p>This is a good reminder that as we venture into this new frontier, we cannot forget classic human-centered principles like those in Don Norman\u2019s seminal book&nbsp;<em>The Design of Everyday Things<\/em>&nbsp;(1988). Graphical components still seem better aligned with his advice of providing explicit affordances &amp; signifiers to increase discoverability.<\/p>\n\n\n\n<p>There is also Jakob Nielsen\u2019s list of&nbsp;<a href=\"https:\/\/www.nngroup.com\/articles\/ten-usability-heuristics\/\" target=\"_blank\" rel=\"noreferrer noopener\">10 usability heuristics<\/a>; many of today\u2019s conversational interfaces seem to ignore every one of them. 
Consider the&nbsp;<strong>first usability heuristic<\/strong>&nbsp;explaining how visibility of system status educates users about the consequences of their actions. It uses a metaphorical map\u2019s \u201cYou Are Here\u201d pin to explain how proper orientation informs our next steps.<\/p>\n\n\n\n<p><strong>Navigation is more relevant to conversational interfaces like chatbots than it might seem<\/strong>, even though all interactions take place in the same chat window. The backend of products like ChatGPT will navigate across a neural network to craft each response by focusing attention on different parts of the parametric knowledge learned from their training datasets.<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img decoding=\"async\" width=\"1024\" height=\"768\" src=\"https:\/\/uxmag.com\/wp-content\/uploads\/2024\/07\/8-abstraction-roleplay-1024x768.png\" alt=\"\" class=\"wp-image-19391\" srcset=\"https:\/\/uxmag.com\/wp-content\/uploads\/2024\/07\/8-abstraction-roleplay-1024x768.png 1024w, https:\/\/uxmag.com\/wp-content\/uploads\/2024\/07\/8-abstraction-roleplay-300x225.png 300w, https:\/\/uxmag.com\/wp-content\/uploads\/2024\/07\/8-abstraction-roleplay-768x576.png 768w, https:\/\/uxmag.com\/wp-content\/uploads\/2024\/07\/8-abstraction-roleplay-1536x1152.png 1536w, https:\/\/uxmag.com\/wp-content\/uploads\/2024\/07\/8-abstraction-roleplay-702x526.png 702w, https:\/\/uxmag.com\/wp-content\/uploads\/2024\/07\/8-abstraction-roleplay.png 1600w\" sizes=\"(max-width: 1024px) 100vw, 1024px\" \/><figcaption class=\"wp-element-caption\">A visualization of how role-playing in prompt engineering loosely guides an AI model to craft different output. 
(Image source:&nbsp;<a href=\"https:\/\/www.maximillian.nyc\/\" target=\"_blank\" rel=\"noreferrer noopener\">Maximillian Piras<\/a>) (<a href=\"https:\/\/files.smashing.media\/articles\/designing-ai-beyond-conversational-interfaces\/8-abstraction-roleplay.png\" target=\"_blank\" rel=\"noreferrer noopener\">Large preview<\/a>)<\/figcaption><\/figure>\n\n\n\n<p>Putting a pin on the proverbial map of their parametric knowledge isn\u2019t trivial. LLMs are so opaque that even&nbsp;<a href=\"https:\/\/openaipublic.blob.core.windows.net\/neuron-explainer\/paper\/index.html\" target=\"_blank\" rel=\"noreferrer noopener\">OpenAI admits<\/a>&nbsp;they \u201cdo not understand how they work.\u201d Yet, it is possible to tailor inputs in a way that loosely guides a model to craft a response from different areas of its knowledge.<\/p>\n\n\n\n<p>One popular technique for guiding attention is&nbsp;<strong>role-playing<\/strong>. You can ask an LLM to assume a role, such as by inputting \u201cimagine you\u2019re a historian,\u201d to effectively switch its mode.&nbsp;<a href=\"https:\/\/promptengineering.org\/role-playing-in-large-language-models-like-chatgpt\/\" target=\"_blank\" rel=\"noreferrer noopener\">The Prompt Engineering Institute explains<\/a>&nbsp;that when \u201ctraining on a large corpus of text data from diverse domains, the model forms a complex understanding of various roles and the language associated with them.\u201d Assuming a role invokes associated aspects in an AI\u2019s training data, such as tone, skills, &amp; rationality.<\/p>\n\n\n\n<p>For instance, a historian role responds with factual details whereas a storyteller role responds with narrative descriptions. 
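Mechanically, this kind of role-play is nothing more than text prepended to the model's input. A minimal sketch of the idea (all function and field names here are hypothetical illustrations, and no real chatbot API is called) shows how an interface control, such as a one-click role selector, could hold the chosen role outside the transcript and reattach it as a system message on every turn:

```python
# Hypothetical sketch: a UI-selected role persists outside the chat
# history and is prepended as a system message on each turn, so the
# user never has to restate it in conversation.

def build_messages(role, history, user_input):
    """Assemble one turn's model input, reattaching the persistent role."""
    system = {
        "role": "system",
        "content": f"Imagine you're a {role}. Respond in that role's tone.",
    }
    return [system, *history, {"role": "user", "content": user_input}]

# Switching the selector from "historian" to "storyteller" changes only
# the system message; the visible chat history is untouched.
turn = build_messages("historian", [], "Tell me about the pyramids.")
```

Because the role is reattached on every call rather than buried in the message history, the interface can also surface it as a persistent, visible signifier of system status.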
Roles can also improve task efficiency through tooling, such as by assigning a data scientist role to generate responses with Python code.<\/p>\n\n\n\n<p>Roles also reinforce social norms, as&nbsp;<a href=\"https:\/\/www.youtube.com\/watch?v=ieWT6X2Yh_g\" target=\"_blank\" rel=\"noreferrer noopener\">Jason Yuan remarks<\/a>&nbsp;on how \u201cyour banking AI agent probably shouldn\u2019t be able to have a deep philosophical chat with you.\u201d Yet conversational interfaces will bury this type of system status in their message history, forcing us to keep it in our&nbsp;<a href=\"https:\/\/www.nngroup.com\/articles\/working-memory-external-memory\/\" target=\"_blank\" rel=\"noreferrer noopener\">working memory<\/a>.<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" width=\"800\" height=\"600\" src=\"https:\/\/uxmag.com\/wp-content\/uploads\/2024\/07\/9-abstraction-rolebot.gif\" alt=\"\" class=\"wp-image-19392\"\/><figcaption class=\"wp-element-caption\">A theoretical AI chatbot that uses a segmented controller to let users specify a role in one click \u2014 each button automatically adjusts the LLM\u2019s system prompt. (Source:&nbsp;<a href=\"https:\/\/www.maximillian.nyc\/\" target=\"_blank\" rel=\"noreferrer noopener\">Maximillian Piras<\/a>) (<a href=\"https:\/\/files.smashing.media\/articles\/designing-ai-beyond-conversational-interfaces\/9-abstraction-rolebot.gif\" target=\"_blank\" rel=\"noreferrer noopener\">Large preview<\/a>)<\/figcaption><\/figure>\n\n\n\n<p>The lack of persistent signifiers for context, like roleplay, can lead to usability issues. For clarity, we must constantly ask the AI\u2019s status, similar to typing&nbsp;<code>ls<\/code>&nbsp;&amp;&nbsp;<code>cd<\/code>&nbsp;commands into a terminal. Experts can manage it, but the added cognitive load is likely to weigh on novices. The problem goes beyond human memory; systems suffer from a similar cognitive overload. 
Due to data limits in their context windows, a user must eventually reinstate any roleplay below the system level. If this type of information persisted in the interface, it would be clear to users &amp; could be automatically reiterated to the AI in each prompt.<\/p>\n\n\n\n<p><a href=\"http:\/\/character.ai\/\" target=\"_blank\" rel=\"noreferrer noopener\">Character.ai<\/a>&nbsp;achieves this by using historical figures as familiar focal points. Cultural cues lead us to ask different types of questions to \u201cAl Pacino\u201d than we would \u201cSocrates.\u201d A \u201ccharacter\u201d becomes a heuristic to set user expectations &amp; automatically adjust system settings. It\u2019s like posting up a restaurant menu; visitors no longer need to ask what there is to eat &amp; they can just order instead.<\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p><\/p>\n<cite><em>\u201cHumans have limited short-term memories. Interfaces that promote recognition reduce the amount of cognitive effort required from users.\u201d<\/em><br><br><em>\u2014 Jakob Nielsen in \u201c<\/em><a href=\"https:\/\/www.nngroup.com\/articles\/ten-usability-heuristics\/\" target=\"_blank\" rel=\"noreferrer noopener\">10 Usability Heuristics for User Interface Design<\/a><em>\u201d<\/em><\/cite><\/blockquote>\n\n\n\n<p>Another forgotten usability lesson is that some tasks are easier to do than to explain, especially through the&nbsp;<a href=\"https:\/\/www.nngroup.com\/articles\/direct-manipulation\/\" target=\"_blank\" rel=\"noreferrer noopener\">direct manipulation<\/a>&nbsp;style of interaction popularized in GUIs.<\/p>\n\n\n\n<p>Photoshop\u2019s new generative AI features reinforce this notion by integrating with their graphical interface. 
While&nbsp;<a href=\"https:\/\/www.adobe.com\/products\/photoshop\/generative-fill.html\" target=\"_blank\" rel=\"noreferrer noopener\">Generative Fill<\/a>&nbsp;includes an input field, it also relies on skeuomorphic controls like their classic lasso tool. Describing which part of an image to manipulate is much more cumbersome than clicking it.<\/p>\n\n\n\n<p><strong>Interactions should remain outside of an input field when words are less efficient.<\/strong>&nbsp;Sliders seem like a better fit for sizing, as saying \u201cmake it bigger\u201d leaves too much room for subjectivity. Settings like colors &amp; aspect ratios are easier to select than describe. Standardized controls can also let systems better organize prompts behind the scenes. If a model accepts specific values for a parameter, for instance, the interface can provide a natural mapping for how it should be input.<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img decoding=\"async\" width=\"1024\" height=\"768\" src=\"https:\/\/uxmag.com\/wp-content\/uploads\/2024\/07\/10-abstraction-promptgui-1024x768.png\" alt=\"\" class=\"wp-image-19393\" srcset=\"https:\/\/uxmag.com\/wp-content\/uploads\/2024\/07\/10-abstraction-promptgui-1024x768.png 1024w, https:\/\/uxmag.com\/wp-content\/uploads\/2024\/07\/10-abstraction-promptgui-300x225.png 300w, https:\/\/uxmag.com\/wp-content\/uploads\/2024\/07\/10-abstraction-promptgui-768x576.png 768w, https:\/\/uxmag.com\/wp-content\/uploads\/2024\/07\/10-abstraction-promptgui-1536x1152.png 1536w, https:\/\/uxmag.com\/wp-content\/uploads\/2024\/07\/10-abstraction-promptgui-702x526.png 702w, https:\/\/uxmag.com\/wp-content\/uploads\/2024\/07\/10-abstraction-promptgui.png 1600w\" sizes=\"(max-width: 1024px) 100vw, 1024px\" \/><figcaption class=\"wp-element-caption\">A diagram of Visual Electric\u2019s input field showcasing how graphical controls can help a system organize a prompt behind the scenes. 
(Image source:&nbsp;<a href=\"https:\/\/www.maximillian.nyc\/\" target=\"_blank\" rel=\"noreferrer noopener\">Maximilian Piras<\/a>) (<a href=\"https:\/\/files.smashing.media\/articles\/designing-ai-beyond-conversational-interfaces\/10-abstraction-promptgui.png\" target=\"_blank\" rel=\"noreferrer noopener\">Large preview<\/a>)<\/figcaption><\/figure>\n\n\n\n<p>Most of these usability principles are over three decades old now, which may lead some to wonder if they\u2019re still relevant.&nbsp;<a href=\"https:\/\/www.nngroup.com\/articles\/ten-usability-heuristics\/\" target=\"_blank\" rel=\"noreferrer noopener\">Jakob Nielsen recently remarked on the longevity of their relevance<\/a>, suggesting that \u201cwhen something has remained true for 26 years, it will likely apply to future generations of user interfaces as well.\u201d However, honoring these usability principles doesn\u2019t require adhering to classic components. Apps like Krea are already exploring&nbsp;<a href=\"https:\/\/twitter.com\/MaximillianNYC\/status\/1733627162517794899\/video\/1\" target=\"_blank\" rel=\"noreferrer noopener\">new GUI<\/a>&nbsp;to manipulate generative AI.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-prompt-engineering-is-engineering\">Prompt Engineering Is Engineering<\/h2>\n\n\n\n<p>The biggest usability problem with today\u2019s conversational interfaces is that they offload technical work to non-technical users. In addition to low discoverability, another similarity they share with command lines is that&nbsp;<strong>ideal output is only attainable through learned commands<\/strong>. We refer to the practice of tailoring inputs to best communicate with generative AI systems as \u201cprompt engineering\u201d. 
The name itself suggests it\u2019s an expert activity, along with the fact that becoming proficient in it can lead to a&nbsp;<a href=\"https:\/\/www.wsj.com\/tech\/ai\/talking-to-chatbots-is-now-a-200k-job-so-i-applied-258bd5f0\" target=\"_blank\" rel=\"noreferrer noopener\">$200k salary<\/a>.<\/p>\n\n\n\n<p>Programming with natural language is a fascinating advancement but seems misplaced as a requirement in consumer applications. Just because anyone can now speak the same language as a computer doesn\u2019t mean they know what to say or the best way to say it \u2014 we need to guide them. While all new technologies have learning curves, this one feels steep enough to hinder further adoption &amp; long-term retention.<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img decoding=\"async\" width=\"1024\" height=\"649\" src=\"https:\/\/uxmag.com\/wp-content\/uploads\/2024\/07\/screenshot_2024-07-30_at_10.52.08___am-1024x649.png\" alt=\"\" class=\"wp-image-19405\" srcset=\"https:\/\/uxmag.com\/wp-content\/uploads\/2024\/07\/screenshot_2024-07-30_at_10.52.08___am-1024x649.png 1024w, https:\/\/uxmag.com\/wp-content\/uploads\/2024\/07\/screenshot_2024-07-30_at_10.52.08___am-300x190.png 300w, https:\/\/uxmag.com\/wp-content\/uploads\/2024\/07\/screenshot_2024-07-30_at_10.52.08___am-768x487.png 768w, https:\/\/uxmag.com\/wp-content\/uploads\/2024\/07\/screenshot_2024-07-30_at_10.52.08___am-1536x974.png 1536w, https:\/\/uxmag.com\/wp-content\/uploads\/2024\/07\/screenshot_2024-07-30_at_10.52.08___am-2048x1299.png 2048w\" sizes=\"(max-width: 1024px) 100vw, 1024px\" \/><figcaption class=\"wp-element-caption\">Canva markets its AI features as \u201cMagic Studio.\u201d (Image source: <a href=\"https:\/\/www.canva.com\/magic\/\" target=\"_blank\" rel=\"noreferrer noopener\">Canva<\/a>) (<a href=\"https:\/\/files.smashing.media\/articles\/designing-ai-beyond-conversational-interfaces\/11-abstraction-magicstudio.jpeg\" target=\"_blank\" rel=\"noreferrer noopener\">Large 
preview<\/a>)<\/figcaption><\/figure>\n\n\n\n<p>Prompt engineering as a prerequisite for high-quality output seems to have taken on the mystique of a dark art. Many marketing materials for AI features reinforce this through terms like \u201cmagic.\u201d If we assume there is a positive feedback loop at play, this opaqueness must be inspiring consumer intrigue.<\/p>\n\n\n\n<p>But positioning products in the realm of spellbooks &amp; shamans also suggests an indecipherable experience \u2014 is this a good long-term strategy? If we assume Steve Krug\u2019s influential lessons from&nbsp;<a href=\"https:\/\/sensible.com\/dont-make-me-think\/\" target=\"_blank\" rel=\"noreferrer noopener\"><em>Don\u2019t Make Me Think<\/em><\/a>&nbsp;(2000) still apply, then most people won\u2019t bother to study proper prompting &amp; instead will&nbsp;<em>muddle through<\/em>.<\/p>\n\n\n\n<p>But the problem with trial &amp; error in generative AI is that there aren\u2019t any error states; you\u2019ll always get a response. For instance, if you ask an LLM to do the math, it will provide you with confident answers that may be&nbsp;<a href=\"https:\/\/garymarcus.substack.com\/p\/math-is-hard-if-you-are-an-llm-and\" target=\"_blank\" rel=\"noreferrer noopener\">completely wrong<\/a>. So it becomes harder to learn from errors when we are unaware if a response is a hallucination. As OpenAI\u2019s&nbsp;<a href=\"https:\/\/twitter.com\/karpathy\/status\/1733299213503787018\" target=\"_blank\" rel=\"noreferrer noopener\">Andrej Karpathy suggests<\/a>, hallucinations are not necessarily a bug because LLMs are \u201cdream machines,\u201d so it&nbsp;<strong>all depends on how interfaces set user expectations<\/strong>.<\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p><\/p>\n<cite><em>\u201cBut as with people, finding the most meaningful answer from AI involves asking the right questions. 
AI is neither psychic nor telepathic.\u201d<\/em><br><br><em>\u2014 Stephen J. Bigelow in&nbsp;<\/em><a href=\"https:\/\/www.techtarget.com\/whatis\/feature\/Skills-needed-to-become-a-prompt-engineer\" target=\"_blank\" rel=\"noreferrer noopener\">5 Skills Needed to Become a Prompt Engineer<\/a><\/cite><\/blockquote>\n\n\n\n<p>Using magical language risks leading novices to the magical thinking that AI is omniscient. It may not be obvious that its&nbsp;<em>knowledge<\/em>&nbsp;is limited to the training data.<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>When reaching the limits of this dataset, will users know to complement it with&nbsp;<a href=\"https:\/\/blogs.nvidia.com\/blog\/what-is-retrieval-augmented-generation\/\" target=\"_blank\" rel=\"noreferrer noopener\">Retrieval Augmented Generation<\/a>?<\/li>\n\n\n\n<li>Will users know to explore different prompting techniques, such as&nbsp;<a href=\"https:\/\/www.promptingguide.ai\/techniques\/fewshot\" target=\"_blank\" rel=\"noreferrer noopener\">Few-Shot<\/a>&nbsp;or&nbsp;<a href=\"https:\/\/www.promptingguide.ai\/techniques\/cot\" target=\"_blank\" rel=\"noreferrer noopener\">Chain of Thought<\/a>, to adjust an AI\u2019s reasoning?<\/li>\n<\/ul>\n\n\n\n<p>Once the magic dust fades away, software designers will realize that these decisions&nbsp;<strong>are<\/strong>&nbsp;the user experience!<\/p>\n\n\n\n<p>Crafting delight comes from selecting the right prompting techniques, knowledge sourcing, &amp; model selection for the job to be done. 
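<\/p>\n\n\n\n<p>To make one of those prompting techniques concrete, here is a minimal few-shot prompt: a few labeled examples precede the real input, steering the model toward the desired output format. The sentiment task below is an invented example, not from any product:<\/p>

```python
# Hypothetical sketch of few-shot prompting: worked examples are placed
# before the actual task so the model imitates their pattern.
examples = [
    ("The checkout flow is confusing.", "negative"),
    ("Love the new dark mode!", "positive"),
]

def few_shot_prompt(examples, new_input):
    lines = ["Classify the sentiment of each review."]
    for text, label in examples:
        lines.append(f"Review: {text}\nSentiment: {label}")
    # The final entry leaves the label blank for the model to complete.
    lines.append(f"Review: {new_input}\nSentiment:")
    return "\n\n".join(lines)

prompt = few_shot_prompt(examples, "Onboarding was painless.")
```

<p>An interface could build such scaffolding automatically from past user corrections, so the technique works without users ever learning its name.<\/p>\n\n\n\n<p>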
We should be exploring how to offload this work from our users.<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Empty states could explain the limits of an AI\u2019s knowledge &amp; allow users to fill gaps as needed.<\/li>\n\n\n\n<li>Onboarding flows could learn user goals to recommend relevant models tuned with the right reasoning.<\/li>\n\n\n\n<li>An equivalent to fuzzy search could markup user inputs to educate them on useful adjustments.<\/li>\n<\/ul>\n\n\n\n<p>We\u2019ve begun to see a hint of this with OpenAI\u2019s image generator&nbsp;<a href=\"https:\/\/community.openai.com\/t\/api-image-generation-in-dall-e-3-changes-my-original-prompt-without-my-permission\/476355\" target=\"_blank\" rel=\"noreferrer noopener\">rewriting<\/a>&nbsp;a user\u2019s input behind the scenes to optimize for better image output.<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img decoding=\"async\" width=\"1024\" height=\"768\" src=\"https:\/\/uxmag.com\/wp-content\/uploads\/2024\/07\/12-abstraction-promptengineering-1024x768.png\" alt=\"\" class=\"wp-image-19395\" srcset=\"https:\/\/uxmag.com\/wp-content\/uploads\/2024\/07\/12-abstraction-promptengineering-1024x768.png 1024w, https:\/\/uxmag.com\/wp-content\/uploads\/2024\/07\/12-abstraction-promptengineering-300x225.png 300w, https:\/\/uxmag.com\/wp-content\/uploads\/2024\/07\/12-abstraction-promptengineering-768x576.png 768w, https:\/\/uxmag.com\/wp-content\/uploads\/2024\/07\/12-abstraction-promptengineering-1536x1152.png 1536w, https:\/\/uxmag.com\/wp-content\/uploads\/2024\/07\/12-abstraction-promptengineering-702x526.png 702w, https:\/\/uxmag.com\/wp-content\/uploads\/2024\/07\/12-abstraction-promptengineering.png 1600w\" sizes=\"(max-width: 1024px) 100vw, 1024px\" \/><figcaption class=\"wp-element-caption\">An example of how combining Graphical User Interfaces with freeform inputs can automate prompt engineering with techniques like Retrieval Augmented Generation. 
(Image source:&nbsp;<a href=\"https:\/\/www.maximillian.nyc\/\" target=\"_blank\" rel=\"noreferrer noopener\">Maximillian Piras<\/a>) (<a href=\"https:\/\/files.smashing.media\/articles\/designing-ai-beyond-conversational-interfaces\/12-abstraction-promptengineering.png\" target=\"_blank\" rel=\"noreferrer noopener\">Large preview<\/a>)<\/figcaption><\/figure>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-lamborghini-pizza-delivery\">Lamborghini Pizza Delivery<\/h2>\n\n\n\n<p>Aside from the cognitive cost of usability issues, there is a&nbsp;<strong>monetary cost<\/strong>&nbsp;to consider as well. Every interaction with a conversational interface invokes an AI to reason through a response. This requires a lot more computing power than clicking a button within a GUI. At the current cost of computing, this expense can be prohibitive. There are some tasks where the value from added intelligence may not be worth the price.<\/p>\n\n\n\n<p>For example, the&nbsp;<a href=\"https:\/\/www.wsj.com\/tech\/ai\/ais-costly-buildup-could-make-early-products-a-hard-sell-bdd29b9f\" target=\"_blank\" rel=\"noreferrer noopener\">Wall Street Journal<\/a>&nbsp;suggests using an LLM for tasks like email summarization is \u201clike getting a Lamborghini to deliver a pizza.\u201d Higher costs are, in part, due to the inability of AI systems to leverage economies of scale in the way standard software does. Each interaction requires intense calculation, so costs scale linearly with usage. Without a zero-marginal cost of reproduction, the common software subscription model becomes less tenable.<\/p>\n\n\n\n<p>Will consumers pay higher prices for conversational interfaces or prefer AI capabilities wrapped in cost-effective GUI? Ironically, this predicament is reminiscent of the early struggles GUIs faced. The processor logic &amp; memory speed needed to power the underlying bitmaps only became tenable when the price of RAM chips dropped years later. 
Let\u2019s hope history repeats itself.<\/p>\n\n\n\n<p>Another cost to consider is the&nbsp;<strong>security risk<\/strong>: what if your Lamborghini gets stolen during the pizza delivery? If you let people ask AI anything, some of those questions will be manipulative.&nbsp;<a href=\"https:\/\/developer.nvidia.com\/blog\/securing-llm-systems-against-prompt-injection\/\" target=\"_blank\" rel=\"noreferrer noopener\">Prompt injections<\/a>&nbsp;are attempts to infiltrate systems through natural language. The right sequence of words can turn an input field into an attack vector, allowing malicious actors to access private&nbsp;<a href=\"https:\/\/stackdiary.com\/chatgpts-training-data-can-be-exposed-via-a-divergence-attack\/\" target=\"_blank\" rel=\"noreferrer noopener\">information &amp; integrations<\/a>.<\/p>\n\n\n\n<p>So be cautious when positioning AI as a&nbsp;<a href=\"https:\/\/www.lindy.ai\/blog\/announcing-a-new-way-to-create-ai-employees\" target=\"_blank\" rel=\"noreferrer noopener\">member of the team<\/a>&nbsp;since employees are already regarded as the weakest link in cyber security defense. The wrong business logic could accidentally optimize the number of phishing emails your organization falls victim to.<\/p>\n\n\n\n<p>Good design can mitigate these costs by identifying where AI is most meaningful to users. Emphasize human-like conversational interactions at these moments but use more cost-effective elements elsewhere. Protect against prompt injections by partitioning sensitive data so it\u2019s only accessible by secure systems. 
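<\/p>\n\n\n\n<p>A minimal sketch of that partitioning, with hypothetical names throughout: treat model output as untrusted, let it request only vetted actions from an allow-list, and have the secure system validate the structured arguments itself so injected instructions never reach private data.<\/p>

```python
# Hypothetical sketch: the LLM can only ask for actions on an allow-list;
# the partitioned billing system validates arguments and returns results,
# so a prompt injection cannot expand the model's access.
ALLOWED_ACTIONS = {"get_invoice_status", "resend_receipt"}

def dispatch(action, account_id):
    if action not in ALLOWED_ACTIONS:
        raise PermissionError(f"action not allowed: {action}")
    if not (isinstance(account_id, int) and account_id > 0):
        raise ValueError("invalid account id")
    # Hand off to the secure system; the model never sees raw records.
    return {"action": action, "account_id": account_id, "status": "queued"}

result = dispatch("resend_receipt", 4217)
```

<p>The pattern also scales down gracefully: the fewer verbs on the allow-list, the smaller the attack surface that natural language can reach.<\/p>\n\n\n\n<p>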
We know LLMs aren\u2019t great at math anyway, so free them up for creative collaboration instead of managing boring billing details.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-generations-are-predictions\">Generations Are Predictions<\/h2>\n\n\n\n<p>In&nbsp;<a href=\"https:\/\/www.smashingmagazine.com\/2023\/08\/friction-feature-machine-learning-algorithms\/\" target=\"_blank\" rel=\"noreferrer noopener\">my previous Smashing article<\/a>, I explained the concept of algorithm-friendly interfaces. They view every interaction as an opportunity to improve understanding through bidirectional feedback. They provide system feedback to users while reporting performance feedback to the system. Their success is a function of maximizing data collection touchpoints to optimize predictions. Accuracy gains in predictive output tend to result in better user retention. So good data compounds in value by reinforcing itself through network effects.<\/p>\n\n\n\n<p>While my previous focus was on content recommendation algorithms, could we apply this to generative AI? While the output is very different, they\u2019re both predictive models. We can customize these predictions with specific data like the characteristics, preferences, &amp; behavior of an individual user.<\/p>\n\n\n\n<p>So, just as Spotify learns your musical taste to recommend new songs, we could theoretically personalize generative AI. Midjourney could recommend image generation parameters based on past usage or preferences. 
ChatGPT could invoke the right roles at the right time (hopefully with system status visibility).<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img decoding=\"async\" width=\"1024\" height=\"768\" src=\"https:\/\/uxmag.com\/wp-content\/uploads\/2024\/07\/14-abstraction-signals-1024x768.jpeg\" alt=\"\" class=\"wp-image-19397\" srcset=\"https:\/\/uxmag.com\/wp-content\/uploads\/2024\/07\/14-abstraction-signals-1024x768.jpeg 1024w, https:\/\/uxmag.com\/wp-content\/uploads\/2024\/07\/14-abstraction-signals-300x225.jpeg 300w, https:\/\/uxmag.com\/wp-content\/uploads\/2024\/07\/14-abstraction-signals-768x576.jpeg 768w, https:\/\/uxmag.com\/wp-content\/uploads\/2024\/07\/14-abstraction-signals-1536x1152.jpeg 1536w, https:\/\/uxmag.com\/wp-content\/uploads\/2024\/07\/14-abstraction-signals-702x526.jpeg 702w, https:\/\/uxmag.com\/wp-content\/uploads\/2024\/07\/14-abstraction-signals.jpeg 1600w\" sizes=\"(max-width: 1024px) 100vw, 1024px\" \/><figcaption class=\"wp-element-caption\">A feedback loop in an algorithm-friendly interface. (Image source:&nbsp;<a href=\"https:\/\/www.maximillian.nyc\/\" target=\"_blank\" rel=\"noreferrer noopener\">Maximillian Piras<\/a>) (<a href=\"https:\/\/files.smashing.media\/articles\/designing-ai-beyond-conversational-interfaces\/14-abstraction-signals.jpeg\" target=\"_blank\" rel=\"noreferrer noopener\">Large preview<\/a>)<\/figcaption><\/figure>\n\n\n\n<p>This territory is still somewhat uncharted, so it\u2019s unclear how algorithm-friendly conversational interfaces are. The same discoverability issues affecting their usability may also affect their ability to analyze engagement signals. An inability to separate signal from noise will weaken personalization efforts. Consider a simple interaction like tapping a \u201clike\u201d button; it sends a very clean signal to the backend.<\/p>\n\n\n\n<p>What is the conversational equivalent of this? 
Inputting the word \u201clike\u201d doesn\u2019t seem as reliable a signal because it may be mentioned in a simile or mindless affectation. Based on the insights from my previous article, the value of successful personalization suggests that any regression will be acutely felt in your company\u2019s pocketbook.<\/p>\n\n\n\n<p>Perhaps a solution is using another LLM as a reasoning engine to format unstructured inputs automatically into clear engagement signals. But until their data collection efficiency is clear,&nbsp;<strong>designers should ask if the benefits of a conversational interface outweigh the risk of worse personalization.<\/strong><\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-towards-the-next-layer-of-abstraction\">Towards The Next Layer Of Abstraction<\/h2>\n\n\n\n<p>As this new paradigm shift in computing evolves, I hope this is a helpful primer for thinking about the next interface abstractions. Conversational interfaces will surely be a mainstay in the next era of AI-first design. Adding voice capabilities will allow computers to augment our abilities without arching our spines through unhealthy amounts of screen time. Yet conversation alone won\u2019t suffice, as we also must design for needs that words cannot describe.<\/p>\n\n\n\n<p>So, if no interface is a panacea, let\u2019s avoid simplistic evolutionary tales &amp; instead aspire towards the principles of great experiences. We want an interface that is&nbsp;<strong>integrated, contextual, &amp; multimodal<\/strong>. It knows sometimes we can only describe our intent with gestures or diagrams. It respects when we\u2019re too busy for a conversation but need to ask a quick question. When we do want to chat, it can&nbsp;<em>see<\/em>&nbsp;what we see, so we aren\u2019t burdened with writing lengthy descriptions. 
When words fail us, it still gets the gist.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-avoiding-tunnel-visions-of-the-future\">Avoiding Tunnel Visions Of The Future<\/h2>\n\n\n\n<p>This moment reminds me of a cautionary tale from the days of mobile-first design. A couple of years after the iPhone\u2019s debut, touchscreens became a popular motif in collective visions of the future. But Bret Victor, the revered Human-Interface Inventor (<a href=\"http:\/\/worrydream.com\/#!\/Apple\" target=\"_blank\" rel=\"noreferrer noopener\">his title at Apple<\/a>), saw touchscreens more as a&nbsp;<em>tunnel vision<\/em>&nbsp;of the future.<\/p>\n\n\n\n<p>In his&nbsp;<a href=\"http:\/\/worrydream.com\/ABriefRantOnTheFutureOfInteractionDesign\/\" target=\"_blank\" rel=\"noreferrer noopener\">brief rant<\/a>&nbsp;on peripheral possibilities, he remarks how they ironically ignore touch altogether. Most of the interactions mainly engage our sense of sight instead of the rich capabilities our hands have for haptic feedback. How can we ensure that AI-first design amplifies all our capabilities?<\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p><\/p>\n<cite><em>\u201cA tool addresses human needs by amplifying human capabilities.\u201d<\/em><br><br><em>\u2014 Bret Victor in \u201c<\/em><a href=\"http:\/\/worrydream.com\/ABriefRantOnTheFutureOfInteractionDesign\/\" target=\"_blank\" rel=\"noreferrer noopener\">A Brief Rant on the Future of Interaction Design\u201d<\/a><\/cite><\/blockquote>\n\n\n\n<p>I wish I could leave you with a clever-sounding formula for when to use conversational interfaces. 
Perhaps some observable law stating that the mathematical relationship expressed by D\u221d1\/G elucidates that \u2018D\u2019, representing describability, exhibits an inverse correlation with \u2018G\u2019, denoting graphical utility \u2014 therefore, as the complexity it takes to describe something increases, a conversational interface\u2019s usability diminishes. While this observation may be true, it\u2019s not very useful.<\/p>\n\n\n\n<p>Honestly, my uncertainty at this moment humbles me too much to prognosticate on new design principles. What I can do instead is take a lesson from the recently departed Charlie Munger &amp;&nbsp;<a href=\"https:\/\/fs.blog\/inversion\/\" target=\"_blank\" rel=\"noreferrer noopener\">invert the problem<\/a>.<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img decoding=\"async\" width=\"1024\" height=\"768\" src=\"https:\/\/uxmag.com\/wp-content\/uploads\/2024\/07\/16-abstraction-inversion-1024x768.jpg\" alt=\"\" class=\"wp-image-19398\" srcset=\"https:\/\/uxmag.com\/wp-content\/uploads\/2024\/07\/16-abstraction-inversion-1024x768.jpg 1024w, https:\/\/uxmag.com\/wp-content\/uploads\/2024\/07\/16-abstraction-inversion-300x225.jpg 300w, https:\/\/uxmag.com\/wp-content\/uploads\/2024\/07\/16-abstraction-inversion-768x576.jpg 768w, https:\/\/uxmag.com\/wp-content\/uploads\/2024\/07\/16-abstraction-inversion-1536x1152.jpg 1536w, https:\/\/uxmag.com\/wp-content\/uploads\/2024\/07\/16-abstraction-inversion-702x526.jpg 702w, https:\/\/uxmag.com\/wp-content\/uploads\/2024\/07\/16-abstraction-inversion.jpg 1600w\" sizes=\"(max-width: 1024px) 100vw, 1024px\" \/><figcaption class=\"wp-element-caption\">We often design forwards by seeking brilliance, but sometimes we need to design backwards by inverting the problem to avoid stupidity. 
(Image source:&nbsp;<a href=\"https:\/\/www.maximillian.nyc\/\" target=\"_blank\" rel=\"noreferrer noopener\">Maximillian Piras<\/a>) (<a href=\"https:\/\/files.smashing.media\/articles\/designing-ai-beyond-conversational-interfaces\/16-abstraction-inversion.jpg\" target=\"_blank\" rel=\"noreferrer noopener\">Large preview<\/a>)<\/figcaption><\/figure>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-designing-backwards\">Designing Backwards<\/h2>\n\n\n\n<p>If we try to design the next abstraction layer looking forward, we seem to end up with something like a chatbot. We now know why this is an incomplete solution on its own.&nbsp;<strong>What if we look at the problem backward to identify the undesirable outcomes that we want to avoid?<\/strong>&nbsp;Avoiding stupidity is easier than seeking brilliance, after all.<\/p>\n\n\n\n<p>An obvious mistake to steer clear of is forcing users to engage in conversations without considering time constraints. When the time is right to chat, it should be in a manner that doesn\u2019t replace existing usability problems with equally frustrating new ones. For basic tasks of equivalent importance to delivering pizza, we should find practical solutions not of equivalent extravagance to driving a Lamborghini. Furthermore, we ought not to impose prompt engineering expertise as a requirement for non-expert users. 
Lastly, as systems become more human-like, they should not inherit our gullibility, lest our efforts inadvertently optimize for exponentially easier access to our private data.<\/p>\n\n\n\n<p>A more&nbsp;<strong>intelligent<\/strong>&nbsp;interface won\u2019t make those stupid mistakes.<\/p>\n\n\n\n<p><em>The article originally appeared in <a href=\"https:\/\/www.smashingmagazine.com\/2024\/02\/designing-ai-beyond-conversational-interfaces\/\" target=\"_blank\" rel=\"noreferrer noopener\">Smashing Magazine<\/a>.<\/em><\/p>\n\n\n\n<p><em>Featured image courtesy: <a href=\"https:\/\/unsplash.com\/@nordwood\" target=\"_blank\" rel=\"noreferrer noopener\">NordWood Themes<\/a>.<\/em><\/p>\n","protected":false},"excerpt":{"rendered":"<p>As Artificial Intelligence evolves the computing paradigm, designers have an opportunity to craft more intuitive user interfaces. Text-based Large Language Models unlock most of the new capabilities, leading many to suggest a shift from graphical interfaces to conversational ones like a chatbot is necessary. 
However, plenty of evidence suggests conversation is a poor interface for<\/p>\n","protected":false},"author":2670,"featured_media":19448,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_genesis_hide_title":false,"_genesis_hide_breadcrumbs":false,"_genesis_hide_singular_image":false,"_genesis_hide_footer_widgets":false,"_genesis_custom_body_class":"","_genesis_custom_post_class":"","_genesis_layout":"","footnotes":""},"categories":[1],"tags":[],"topics":[14,144,28,3165],"class_list":{"0":"post-19383","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-uncategorized","8":"topics-artificial-intelligence","9":"topics-conversational-design","10":"topics-design","11":"topics-ux","12":"entry"},"yoast_head":"<!-- This site is optimized with the Yoast SEO Premium plugin v18.2.1 (Yoast SEO v25.9) - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>When Words Cannot Describe: Designing For AI Beyond Conversational Interfaces - UX Magazine<\/title>\n<meta name=\"description\" content=\"Explore the future of AI design beyond conversational interfaces! This article delves into how AI can move past simple chatbots to deliver richer, context-aware interactions. By understanding user intent and integrating seamlessly across platforms, AI can anticipate needs and provide personalized experiences. 
It&#039;s a must-read for anyone interested in the cutting-edge of UX design and AI innovation.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/uxmag.com\/articles\/when-words-cannot-describe-designing-for-ai-beyond-conversational-interfaces\" \/>\n<meta property=\"og:title\" content=\"When Words Cannot Describe: Designing For AI Beyond Conversational Interfaces\" \/>\n<meta property=\"og:site_name\" content=\"UX Magazine\" \/>\n<meta property=\"article:published_time\" content=\"2024-08-13T11:25:17+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2024-12-26T11:46:44+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/uxmag.com\/wp-content\/uploads\/2024\/08\/nordwood-themes-ubIWo074QlU-unsplash-1-scaled.jpg\" \/>\n<meta name=\"author\" content=\"Maximillian Piras\" \/>\n<!-- \/ Yoast SEO Premium plugin. -->","yoast_head_json":{"title":"When Words Cannot Describe: Designing For AI Beyond Conversational Interfaces - UX Magazine","canonical":"https:\/\/uxmag.com\/articles\/when-words-cannot-describe-designing-for-ai-beyond-conversational-interfaces","author":"Maximillian Piras","article_published_time":"2024-08-13T11:25:17+00:00","article_modified_time":"2024-12-26T11:46:44+00:00","og_image":[{"width":2560,"height":1707,"url":"https:\/\/uxmag.com\/wp-content\/uploads\/2024\/08\/nordwood-themes-ubIWo074QlU-unsplash-1-scaled.jpg","type":"image\/jpeg"}],"twitter_misc":{"Written by":"Maximillian Piras","Est. reading time":"24 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","headline":"When Words Cannot Describe: Designing For AI Beyond Conversational Interfaces","author":{"name":"Nataliia Vlasenko","url":"https:\/\/uxmag.com\/contributors\/nataliia-vlasenko"},"wordCount":5201,"datePublished":"2024-08-13T11:25:17+00:00","dateModified":"2024-12-26T11:46:44+00:00","publisher":{"@type":"Organization","name":"UX Magazine","url":"https:\/\/uxmag.com\/"}}]}},"_links":{"self":[{"href":"https:\/\/uxmag.com\/wp-json\/wp\/v2\/posts\/19383"}]}}