{"id":17037,"date":"2022-12-06T13:36:27","date_gmt":"2022-12-06T13:36:27","guid":{"rendered":"https:\/\/uxmag.com\/?p=17037"},"modified":"2022-12-07T15:47:53","modified_gmt":"2022-12-07T15:47:53","slug":"conscious-ai-models","status":"publish","type":"post","link":"https:\/\/uxmag.com\/articles\/conscious-ai-models","title":{"rendered":"Conscious AI models?"},"content":{"rendered":"\n<p id=\"3cad\">A few days ago, the Internet was taken by countless tweets, posts, and articles about&nbsp;Google&#8217;s LaMDA AI being conscious&nbsp;(or sentient) based on a conversation it had with an engineer. If you want, you can read it&nbsp;<a href=\"https:\/\/cajundiscordian.medium.com\/is-lamda-sentient-an-interview-ea64d916d917\" target=\"_blank\" rel=\"noreferrer noopener\">here<\/a>.<\/p>\n\n\n\n<p id=\"2919\">If you read it, you will also realize that it&nbsp;surely&nbsp;looks&nbsp;like a dialog between two people. But, appearances can be deceiving\u2026<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"48fb\">What IS LaMDA?<\/h3>\n\n\n\n<p id=\"05b9\">The name stands for &#8220;Language&nbsp;Model for&nbsp;Dialogue&nbsp;Applications&#8221;. It&#8217;s yet another&nbsp;massive language model&nbsp;trained by Big Tech to chat with users, but it&#8217;s not even the latest development. Google&#8217;s blog has an entry from more than one year ago called&nbsp;<a href=\"https:\/\/blog.google\/technology\/ai\/lamda\/\" target=\"_blank\" rel=\"noreferrer noopener\">&#8220;LaMDA: our breakthrough conversation technology&#8221;<\/a>. It&#8217;s a model built using&nbsp;<em>Transformers<\/em>, a popular architecture used in language models. 
Transformers are simple, yet powerful, and&nbsp;their power comes from their sheer size.<\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\"><p><em>LaMDA has 137 BILLION parameters, and it was trained on 1.5 TRILLION words (for more implementation details, check&nbsp;<a href=\"https:\/\/ai.googleblog.com\/2022\/01\/lamda-towards-safe-grounded-and-high.html\" target=\"_blank\" rel=\"noreferrer noopener\">this post<\/a>).<\/em><\/p><\/blockquote>\n\n\n\n<p id=\"bbe2\">To put it in perspective, and give away my age, the Encyclopaedia Britannica has only 40 million words in it. So, LaMDA had access to the equivalent of&nbsp;37,500 times more content&nbsp;than the most iconic encyclopaedia.<\/p>\n\n\n\n<p id=\"278b\">In other words, this model had access to pretty much&nbsp;<em>any kind of dialog that has ever been recorded<\/em>&nbsp;(in English, that is), and to pretty much&nbsp;<em>every piece of information and knowledge produced by mankind<\/em>. Moreover, once the model is trained, it will forever &#8220;<em>remember<\/em>&#8221; everything it &#8220;<em>read<\/em>&#8221;.<\/p>\n\n\n\n<p id=\"d2c1\">Is it impressive? Sure! Is it an amazing feat of engineering? Of course! Does it produce human-like dialog? Yes, it does.<\/p>\n\n\n\n<p id=\"37c7\">But is it conscious, or sentient? Not really\u2026<\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\"><p><em>&#8220;Why not?! If it looks like a duck, swims like a duck, and quacks like a duck, then it probably is a duck, right?&#8221;<\/em><\/p><\/blockquote>\n\n\n\n<p id=\"96fe\">Yes, it is probably a duck. 
But that&#8217;s because&nbsp;we know what a duck is&nbsp;(and LaMDA knows it too, or at least it can talk about ducks just like you and me).<\/p>\n\n\n\n<h4 class=\"wp-block-heading\" id=\"aacd\">What IS consciousness?<\/h4>\n\n\n\n<p id=\"2b86\">Unfortunately, the&nbsp;definition of consciousness&nbsp;is not as obvious as the definition of a duck. And, before the late 80s, no one was even trying to define it. The &#8220;<em>C-word<\/em>&#8221; \u2014 consciousness \u2014 was banned from the scientific discourse.<\/p>\n\n\n\n<p id=\"dd69\">Luckily, the scientific community forged ahead, and nowadays it has a much better understanding of consciousness. But the everyday usage of the term consciousness hasn&#8217;t changed, and it still covers a lot of&nbsp;<em>different<\/em>&nbsp;and&nbsp;<em>complex<\/em>&nbsp;phenomena.<\/p>\n\n\n\n<p id=\"0f18\">What does \u201cbeing conscious\u201d usually mean when one uses the term? For one, being awake \u2014 \u201c<em>she remained conscious after the crash<\/em>\u201d. Alternatively, being aware \u2014 \u201c<em>he is conscious of his faults<\/em>\u201d.<\/p>\n\n\n\n<p id=\"9c08\">To study a topic properly, though, one must first&nbsp;<strong>define<\/strong>&nbsp;the object of study. And that&#8217;s what Stanislas Dehaene does in his book, &#8220;<strong>Consciousness and the Brain<\/strong>&#8221;, from which I am drawing the majority of the ideas in this section. 
The author distinguishes three concepts:<\/p>\n\n\n\n<ol class=\"wp-block-list\"><li><strong>vigilance<\/strong>: the state of wakefulness;<\/li><li><strong>attention<\/strong>: &#8220;<em>focusing mental resources on a specific piece of information<\/em>&#8221;;<\/li><li><strong>conscious access<\/strong>: &#8220;<em>the fact that some of the attended information eventually enters our awareness and becomes&nbsp;<\/em><strong><em>reportable<\/em><\/strong><em>&nbsp;to others<\/em>&#8221; (highlight is mine).<\/li><\/ol>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\"><p><em>Hold this thought: with&nbsp;conscious access, the information becomes&nbsp;reportable. We&#8217;ll get back to this soon!<\/em><\/p><\/blockquote>\n\n\n\n<p id=\"ea80\">For Dehaene, vigilance and attention are required, but not sufficient, and&nbsp;only conscious access qualifies as genuine consciousness.<\/p>\n\n\n\n<h4 class=\"wp-block-heading\" id=\"a742\">Conscious Access<\/h4>\n\n\n\n<p id=\"5757\">It looks trivial, right? 
You see something, say, a flower, and you instantly become aware of its properties: color, smell, shape, etc.<\/p>\n\n\n\n<p id=\"b914\">So, you can safely say that you&#8217;re aware of everything your eyes see, right?<\/p>\n\n\n\n<p id=\"68f6\">Please watch the short video below (and&nbsp;don\u2019t scroll down past the video, otherwise you\u2019ll spoil the answer!):<\/p>\n\n\n<p><iframe loading=\"lazy\" width=\"692\" height=\"519\" src=\"https:\/\/www.youtube.com\/embed\/vJG698U2Mvo\" title=\"selective attention test\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture\" allowfullscreen=\"\"><\/iframe><\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\"><p><em>Wait for it\u2026<\/em><\/p><\/blockquote>\n\n\n\n<p id=\"2eca\">So, did you see the&nbsp;gorilla&nbsp;in the video?<\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\"><p><em>&#8220;Gorilla?! What are you talking about?&#8221;<\/em><\/p><\/blockquote>\n\n\n\n<p id=\"d456\">Most people will fail to see the gorilla the first time they watch this video. So, if you didn&#8217;t see it, watch it again, and look for the gorilla in it.<\/p>\n\n\n\n<p id=\"29ab\">What happened the first time? Do you think&nbsp;your eyes didn&#8217;t pick up the image of the gorilla? Is it even&nbsp;<em>possible<\/em>? Not really\u2026<\/p>\n\n\n\n<p id=\"e931\">Even though you were not able to tell there was a gorilla in the video, the&nbsp;image of the gorilla&nbsp;was&nbsp;perceived by your eyes, transmitted to your brain, and processed (to some extent), but ultimately ignored.<\/p>\n\n\n\n<p id=\"6ab6\">This is simply to say that&nbsp;there&#8217;s a lot going on behind the scenes, even if you&#8217;re not aware of it. 
As Dehaene puts it, a &#8220;staggering amount of unconscious processing occurs beneath the surface of our conscious mind&#8221;.<\/p>\n\n\n\n<h4 class=\"wp-block-heading\" id=\"0de1\">Reflexive Processing<\/h4>\n\n\n\n<p id=\"2cb7\">This&nbsp;unconscious processing is reflexive in nature: whenever there&#8217;s a stimulus \u2014 an&nbsp;input, like the image of the gorilla \u2014 there&#8217;s&nbsp;processing, and an associated&nbsp;output, a thought, is produced. These&nbsp;thoughts are accessible, but not accessed, and they &#8220;<em>lay dormant amid the vast repository of unconscious states<\/em>&#8221;, as Dehaene puts it.<\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\"><p><em>Does it look familiar? An input comes in, there&#8217;s processing, and an output comes out.&nbsp;That&#8217;s what a model does!<\/em><\/p><\/blockquote>\n\n\n\n<p id=\"dc8d\">What happens when you ask a language model a question? Roughly speaking, this is what happens:<\/p>\n\n\n\n<ol class=\"wp-block-list\"><li>It will parse your sentence and&nbsp;<em>split it into its component words<\/em>;<\/li><li>Then, for each word, it will go over a&nbsp;ginormous lookup table to convert each word into a sequence of numbers;<\/li><li>The sequence of sequences of numbers (since you have many words) will be processed through&nbsp;<em>a ton of arithmetic operations<\/em>;<\/li><li>These operations result in&nbsp;<em>probabilities associated with every word in the vocabulary<\/em>;&nbsp;so the model can output&nbsp;the most likely word at every step.<\/li><\/ol>\n\n\n\n<p id=\"dcbb\">As you can see, there&#8217;s&nbsp;no reasoning&nbsp;of any kind in these operations. 
They follow an&nbsp;<em>inexorable logic<\/em>&nbsp;and produce outputs according to the statistical distribution of sequences of words the model had access to during training (those 1.5 trillion words I mentioned before).<\/p>\n\n\n\n<p id=\"83ac\">Once an input is given to it, the model is&nbsp;compelled to produce an output, in a reflexive manner. It simply&nbsp;cannot refuse&nbsp;to provide an answer,&nbsp;it does not have volition.<\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\"><p><em>In our brains, an output \u2014 a thought produced by some reflexive processing \u2014 that fails to enter our awareness cannot possibly be reported.<\/em><\/p><p><em>But, in a language model, every output is reported!<\/em><\/p><p><em>And THAT&#8217;s the heart of the question!<\/em><\/p><\/blockquote>\n\n\n\n<h4 class=\"wp-block-heading\" id=\"44bf\">To Report or Not To Report<\/h4>\n\n\n\n<p id=\"b6db\">In the human brain,&nbsp;attended information can only be reported IF we have conscious access to it, that is, if it entered our awareness.<\/p>\n\n\n\n<p id=\"0b0c\">Well, since we&#8217;re used to communicating with&nbsp;other humans, it&#8217;s only logical that,&nbsp;if someone is&nbsp;reporting&nbsp;something to us, they&nbsp;MUST&nbsp;have had&nbsp;<em>conscious access<\/em>&nbsp;to it, right?<\/p>\n\n\n\n<p>But,&nbsp;what about a language model? We&nbsp;can&nbsp;communicate with it, and that&#8217;s amazing, but just because&nbsp;the model is reporting&nbsp;something to us,&nbsp;it does not mean it has conscious access to it. 
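<\/p>\n\n\n\n<p>The four steps above can be sketched in a few lines of code. To be clear, everything in this sketch (the tiny vocabulary, the made-up lookup-table numbers, the averaging step) is invented for illustration and is in no way LaMDA&#8217;s actual implementation; it only shows, at toy scale, how mechanical the input-to-output mapping is:<\/p>

```python
import math

# Toy sketch of the four steps: split into words, look up numbers,
# do arithmetic, pick the most probable word in the vocabulary.
# Vocabulary and "lookup table" values are made up for illustration;
# a real model learns billions of such parameters during training.
vocab = ["roses", "are", "red", "violets", "blue", "banana"]
embeddings = {
    "roses":   [0.9, 0.1],
    "are":     [0.5, 0.5],
    "red":     [0.8, 0.2],
    "violets": [0.1, 0.9],
    "blue":    [0.2, 0.8],
    "banana":  [0.4, 0.4],
}

def next_word(prompt):
    # Step 1: split the sentence into its component words
    words = prompt.lower().split()
    # Step 2: look up each word and convert it into a sequence of numbers
    vectors = [embeddings[w] for w in words if w in embeddings]
    # Step 3: run the numbers through arithmetic (here, a simple average)
    context = [sum(dim) / len(vectors) for dim in zip(*vectors)]
    # Step 4: score every vocabulary word, turn the scores into
    # probabilities, and output the single most likely word
    scores = [sum(c * e for c, e in zip(context, embeddings[w])) for w in vocab]
    exps = [math.exp(s) for s in scores]
    probs = [e / sum(exps) for e in exps]
    return vocab[max(range(len(vocab)), key=lambda i: probs[i])]

print(next_word("roses are red violets are"))
```

<p>Given any input, this function is compelled to return a word; there is no step at which it could reason about the prompt, or decline to answer. Real language models are incomparably larger, but the reflexive shape of the computation is the same.<\/p>\n\n\n\n<p>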
It turns out, the model cannot help itself:&nbsp;it must report at all times; it was built for it!<\/p>\n\n\n\n<p id=\"459e\">Language models used to be simpler: you&#8217;d give one some words, like &#8220;<em>roses are red, violets are\u2026<\/em>&#8221;, and it would reply &#8220;<em>blue<\/em>&#8221; just because it is, statistically speaking, more likely than &#8220;<em>red<\/em>&#8221;, &#8220;<em>yellow<\/em>&#8221;, or &#8220;<em>banana<\/em>&#8221;. These models would stumble, badly, when prompted with more challenging inputs. So, back then,&nbsp;<em>no one would&nbsp;ever&nbsp;question whether these models were conscious<\/em>.<\/p>\n\n\n\n<p id=\"40c4\">What changed? Models got so big, training data got so massive, and computing power got so cheap, that it is relatively&nbsp;<em>easy to produce outputs that really look like they were produced by an intelligent human<\/em>. But they are&nbsp;still models, and we&nbsp;<em>know<\/em>&nbsp;how they were trained, so&nbsp;why&nbsp;are we asking ourselves whether they have become&nbsp;conscious?<\/p>\n\n\n\n<p>My guess here is that&nbsp;we would like them to be conscious!<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"bf96\">Steve, the Pencil<\/h3>\n\n\n\n<p id=\"b1fe\">I am a big fan of the series &#8220;Community&#8221;. In the first episode, Jeff Winger gives a speech that seems quite appropriate in the context of our discussion here:<\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\"><p><em>&#8220;\u2026 I can pick this pencil, tell you its name is Steve, and go like this (breaks the pencil in half, people gasp) and part of you dies a little bit on the inside because&nbsp;people can connect with anything. We can sympathize with a pencil\u2026&#8221; (highlights are mine)<\/em><\/p><\/blockquote>\n\n\n\n<p id=\"bf0f\">And that&#8217;s true, people can connect with anything, and&nbsp;people want to connect with others, even language models. 
So, it shouldn&#8217;t be surprising that we stare, marveling, at our own creation, and wonder \u2014 because it&nbsp;<em>feels<\/em>&nbsp;good.<\/p>\n\n\n\n<p id=\"a339\">And that&#8217;s actually a good thing!<\/p>\n\n\n\n<p id=\"ea23\">A sophisticated language model can be used to address loneliness in the elderly, for example. People can, and will,&nbsp;connect with the model, and treat it&nbsp;as if it were a real person, even if the&nbsp;model itself is not a conscious entity.&nbsp;The applications, both good and bad, are endless.<\/p>\n\n\n\n<p id=\"3139\">At this point, you&#8217;re probably asking yourself:&nbsp;what would it take for a model to actually be conscious, according to the latest scientific criteria?<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"6134\">Autonomy<\/h3>\n\n\n\n<p id=\"5347\">If I had to summarize it in one word, it would be this:&nbsp;autonomy.<\/p>\n\n\n\n<p id=\"c596\">Unlike any language model, the human &#8220;brain is the seat of intense&nbsp;spontaneous activity&#8221; and it is &#8220;traversed by global patterns of internal activity originated from&nbsp;neurons&#8217; capacity to self-activate&nbsp;in partially&nbsp;random&nbsp;fashion&#8221; (highlights are mine).<\/p>\n\n\n\n<p id=\"4d42\">This spontaneous activity gives rise to a &#8220;stream of consciousness&#8221;, described by Dehaene as an &#8220;uninterrupted flow of loosely connected thoughts, primarily shaped by our current goals, and occasionally seeking information from the senses&#8221;.<\/p>\n\n\n\n<p id=\"1403\">The brain is constantly&nbsp;generating thoughts by itself, processing them, and mixing them with external inputs received through our senses, but only&nbsp;a tiny minority of them ever enters our awareness.<\/p>\n\n\n\n<p id=\"8553\">The&nbsp;role of consciousness, according to Dehaene, is to&nbsp;select, amplify, and propagate relevant thoughts. 
The thoughts that &#8220;make it&#8221; are &#8220;no longer processed in a reflexive manner, but can be&nbsp;pondered&nbsp;and&nbsp;reoriented at will&#8221;; they can be part of&nbsp;purely mental operations, completely&nbsp;detached from the external world, and they can last for an&nbsp;arbitrarily long duration.<\/p>\n\n\n\n<p id=\"20cf\">I&#8217;m sorry, but our current language models do not do&nbsp;any&nbsp;of these things\u2026<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"e9ab\">Final Thoughts<\/h3>\n\n\n\n<p id=\"4c0c\">This is&nbsp;not&nbsp;an easy topic, and my line of argument here is heavily based on Stanislas Dehaene&#8217;s definition of&nbsp;consciousness, as it seems to be the most scientifically sound definition I have found.<\/p>\n\n\n\n<p id=\"3a2f\">In the end, it all boils down to&nbsp;how you define the duck.<\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\"><p><em>Finally, if you find it hard to believe that your brain is running&nbsp;multiple parallel processes&nbsp;without you even realizing it, watch this video \u2014 you&#8217;ll be surprised!<\/em><\/p><\/blockquote>\n\n\n<p><iframe loading=\"lazy\" width=\"850\" height=\"478\" src=\"https:\/\/www.youtube.com\/embed\/wfYbgdo8e-8\" title=\"You Are Two\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture\" allowfullscreen><\/iframe><\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><a href=\"https:\/\/invisiblemachines.ai\/?utm_source=uxmag&amp;utm_medium=referral&amp;utm_campaign=article_consciousAImodels?&amp;utm_content=ad2\" target=\"_blank\" rel=\"noreferrer noopener\"><img decoding=\"async\" width=\"1024\" height=\"536\" src=\"https:\/\/uxmag.com\/wp-content\/uploads\/2021\/06\/02book-1024x536.jpg\" alt=\"\" class=\"wp-image-16671\" srcset=\"https:\/\/uxmag.com\/wp-content\/uploads\/2021\/06\/02book-1024x536.jpg 1024w, 
https:\/\/uxmag.com\/wp-content\/uploads\/2021\/06\/02book-300x157.jpg 300w, https:\/\/uxmag.com\/wp-content\/uploads\/2021\/06\/02book-768x402.jpg 768w, https:\/\/uxmag.com\/wp-content\/uploads\/2021\/06\/02book-1536x804.jpg 1536w, https:\/\/uxmag.com\/wp-content\/uploads\/2021\/06\/02book.jpg 2000w\" sizes=\"(max-width: 1024px) 100vw, 1024px\" \/><\/a><\/figure>\n","protected":false},"excerpt":{"rendered":"<p>A few days ago, the Internet was taken by countless tweets, posts, and articles about&nbsp;Google&#8217;s LaMDA AI being conscious&nbsp;(or sentient) based on a conversation it had with an engineer. If you want, you can read it&nbsp;here. If you read it, you will also realize that it&nbsp;surely&nbsp;looks&nbsp;like a dialog between two people. But, appearances can be<\/p>\n","protected":false},"author":2574,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_genesis_hide_title":false,"_genesis_hide_breadcrumbs":false,"_genesis_hide_singular_image":false,"_genesis_hide_footer_widgets":false,"_genesis_custom_body_class":"","_genesis_custom_post_class":"","_genesis_layout":"","footnotes":""},"categories":[1],"tags":[],"topics":[3,14,15,149,25,28,36,116,121,122,2910],"class_list":{"0":"post-17037","1":"post","2":"type-post","3":"status-publish","4":"format-standard","6":"category-uncategorized","7":"topics-accessibility","8":"topics-artificial-intelligence","9":"topics-augmented-reality","10":"topics-behavioral-science","11":"topics-customer-experience","12":"topics-design","13":"topics-empathy","14":"topics-usability","15":"topics-ux-education","16":"topics-ux-magazine","17":"topics-ux-world-changing-ideas","18":"entry"},"yoast_head":"<!-- This site is optimized with the Yoast SEO Premium plugin v18.2.1 (Yoast SEO v25.9) - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>Conscious AI models? 
- UX Magazine<\/title>\n<meta name=\"description\" content=\"If it looks like a duck\u2026\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/uxmag.com\/articles\/conscious-ai-models\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Conscious AI models?\" \/>\n<meta property=\"og:description\" content=\"If it looks like a duck\u2026\" \/>\n<meta property=\"og:url\" content=\"https:\/\/uxmag.com\/articles\/conscious-ai-models\" \/>\n<meta property=\"og:site_name\" content=\"UX Magazine\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/uxmag\" \/>\n<meta property=\"article:published_time\" content=\"2022-12-06T13:36:27+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2022-12-07T15:47:53+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/uxmag.com\/wp-content\/uploads\/2021\/06\/02book-1024x536.jpg\" \/>\n<meta name=\"author\" content=\"Daniel Godoy\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@uxmag\" \/>\n<meta name=\"twitter:site\" content=\"@uxmag\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Daniel Godoy\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"8 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/uxmag.com\/articles\/conscious-ai-models#article\",\"isPartOf\":{\"@id\":\"https:\/\/uxmag.com\/articles\/conscious-ai-models\"},\"author\":{\"name\":\"Daniel Godoy\",\"@id\":\"https:\/\/uxmag.com\/#\/schema\/person\/cdb2545f444b0f476f0962679a564b96\"},\"headline\":\"Conscious AI models?\",\"datePublished\":\"2022-12-06T13:36:27+00:00\",\"dateModified\":\"2022-12-07T15:47:53+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/uxmag.com\/articles\/conscious-ai-models\"},\"wordCount\":1908,\"publisher\":{\"@id\":\"https:\/\/uxmag.com\/#organization\"},\"image\":{\"@id\":\"https:\/\/uxmag.com\/articles\/conscious-ai-models#primaryimage\"},\"thumbnailUrl\":\"https:\/\/uxmag.com\/wp-content\/uploads\/2021\/06\/02book-1024x536.jpg\",\"inLanguage\":\"en-US\"},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/uxmag.com\/articles\/conscious-ai-models\",\"url\":\"https:\/\/uxmag.com\/articles\/conscious-ai-models\",\"name\":\"Conscious AI models? 
- UX Magazine\",\"isPartOf\":{\"@id\":\"https:\/\/uxmag.com\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/uxmag.com\/articles\/conscious-ai-models#primaryimage\"},\"image\":{\"@id\":\"https:\/\/uxmag.com\/articles\/conscious-ai-models#primaryimage\"},\"thumbnailUrl\":\"https:\/\/uxmag.com\/wp-content\/uploads\/2021\/06\/02book-1024x536.jpg\",\"datePublished\":\"2022-12-06T13:36:27+00:00\",\"dateModified\":\"2022-12-07T15:47:53+00:00\",\"description\":\"If it looks like a duck\u2026\",\"breadcrumb\":{\"@id\":\"https:\/\/uxmag.com\/articles\/conscious-ai-models#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/uxmag.com\/articles\/conscious-ai-models\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/uxmag.com\/articles\/conscious-ai-models#primaryimage\",\"url\":\"\",\"contentUrl\":\"\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/uxmag.com\/articles\/conscious-ai-models#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/uxmag.com\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Artificial Intelligence\",\"item\":\"https:\/\/uxmag.com\/topics\/artificial-intelligence\"},{\"@type\":\"ListItem\",\"position\":3,\"name\":\"Conscious AI models?\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/uxmag.com\/#website\",\"url\":\"https:\/\/uxmag.com\/\",\"name\":\"UX Magazine\",\"description\":\"UX Magazine is a central, one-stop resource for everything related to user experience. We provide a steady stream of current, informative, and credible information about UX and related fields to enhance the professional and creative lives of UX practitioners and those exploring the field. 
Our content is driven and created by an impressive roster of experienced professionals\",\"publisher\":{\"@id\":\"https:\/\/uxmag.com\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/uxmag.com\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/uxmag.com\/#organization\",\"name\":\"UX Magazine\",\"alternateName\":\"uxmag\",\"url\":\"https:\/\/uxmag.com\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/uxmag.com\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/uxmag.com\/wp-content\/uploads\/2021\/01\/UX-Magazine-logo-2.png\",\"contentUrl\":\"https:\/\/uxmag.com\/wp-content\/uploads\/2021\/01\/UX-Magazine-logo-2.png\",\"width\":2440,\"height\":428,\"caption\":\"UX Magazine\"},\"image\":{\"@id\":\"https:\/\/uxmag.com\/#\/schema\/logo\/image\/\"},\"sameAs\":[\"https:\/\/www.facebook.com\/uxmag\",\"https:\/\/x.com\/uxmag\",\"https:\/\/www.linkedin.com\/company\/ux-magazine\/\"]},{\"@type\":\"Person\",\"@id\":\"https:\/\/uxmag.com\/#\/schema\/person\/cdb2545f444b0f476f0962679a564b96\",\"name\":\"Daniel Godoy\",\"url\":\"https:\/\/uxmag.com\/contributors\/daniel-godoy\"}]}<\/script>\n<!-- \/ Yoast SEO Premium plugin. -->","yoast_head_json":{"title":"Conscious AI models? 
- UX Magazine","description":"If it looks like a duck\u2026","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/uxmag.com\/articles\/conscious-ai-models","og_locale":"en_US","og_type":"article","og_title":"Conscious AI models?","og_description":"If it looks like a duck\u2026","og_url":"https:\/\/uxmag.com\/articles\/conscious-ai-models","og_site_name":"UX Magazine","article_publisher":"https:\/\/www.facebook.com\/uxmag","article_published_time":"2022-12-06T13:36:27+00:00","article_modified_time":"2022-12-07T15:47:53+00:00","og_image":[{"url":"https:\/\/uxmag.com\/wp-content\/uploads\/2021\/06\/02book-1024x536.jpg","type":"","width":"","height":""}],"author":"Daniel Godoy","twitter_card":"summary_large_image","twitter_creator":"@uxmag","twitter_site":"@uxmag","twitter_misc":{"Written by":"Daniel Godoy","Est. reading time":"8 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/uxmag.com\/articles\/conscious-ai-models#article","isPartOf":{"@id":"https:\/\/uxmag.com\/articles\/conscious-ai-models"},"author":{"name":"Daniel Godoy","@id":"https:\/\/uxmag.com\/#\/schema\/person\/cdb2545f444b0f476f0962679a564b96"},"headline":"Conscious AI models?","datePublished":"2022-12-06T13:36:27+00:00","dateModified":"2022-12-07T15:47:53+00:00","mainEntityOfPage":{"@id":"https:\/\/uxmag.com\/articles\/conscious-ai-models"},"wordCount":1908,"publisher":{"@id":"https:\/\/uxmag.com\/#organization"},"image":{"@id":"https:\/\/uxmag.com\/articles\/conscious-ai-models#primaryimage"},"thumbnailUrl":"https:\/\/uxmag.com\/wp-content\/uploads\/2021\/06\/02book-1024x536.jpg","inLanguage":"en-US"},{"@type":"WebPage","@id":"https:\/\/uxmag.com\/articles\/conscious-ai-models","url":"https:\/\/uxmag.com\/articles\/conscious-ai-models","name":"Conscious AI models? 
- UX Magazine","isPartOf":{"@id":"https:\/\/uxmag.com\/#website"},"primaryImageOfPage":{"@id":"https:\/\/uxmag.com\/articles\/conscious-ai-models#primaryimage"},"image":{"@id":"https:\/\/uxmag.com\/articles\/conscious-ai-models#primaryimage"},"thumbnailUrl":"https:\/\/uxmag.com\/wp-content\/uploads\/2021\/06\/02book-1024x536.jpg","datePublished":"2022-12-06T13:36:27+00:00","dateModified":"2022-12-07T15:47:53+00:00","description":"If it looks like a duck\u2026","breadcrumb":{"@id":"https:\/\/uxmag.com\/articles\/conscious-ai-models#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/uxmag.com\/articles\/conscious-ai-models"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/uxmag.com\/articles\/conscious-ai-models#primaryimage","url":"","contentUrl":""},{"@type":"BreadcrumbList","@id":"https:\/\/uxmag.com\/articles\/conscious-ai-models#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/uxmag.com\/"},{"@type":"ListItem","position":2,"name":"Artificial Intelligence","item":"https:\/\/uxmag.com\/topics\/artificial-intelligence"},{"@type":"ListItem","position":3,"name":"Conscious AI models?"}]},{"@type":"WebSite","@id":"https:\/\/uxmag.com\/#website","url":"https:\/\/uxmag.com\/","name":"UX Magazine","description":"UX Magazine is a central, one-stop resource for everything related to user experience. We provide a steady stream of current, informative, and credible information about UX and related fields to enhance the professional and creative lives of UX practitioners and those exploring the field. 
Our content is driven and created by an impressive roster of experienced professionals","publisher":{"@id":"https:\/\/uxmag.com\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/uxmag.com\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/uxmag.com\/#organization","name":"UX Magazine","alternateName":"uxmag","url":"https:\/\/uxmag.com\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/uxmag.com\/#\/schema\/logo\/image\/","url":"https:\/\/uxmag.com\/wp-content\/uploads\/2021\/01\/UX-Magazine-logo-2.png","contentUrl":"https:\/\/uxmag.com\/wp-content\/uploads\/2021\/01\/UX-Magazine-logo-2.png","width":2440,"height":428,"caption":"UX Magazine"},"image":{"@id":"https:\/\/uxmag.com\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/www.facebook.com\/uxmag","https:\/\/x.com\/uxmag","https:\/\/www.linkedin.com\/company\/ux-magazine\/"]},{"@type":"Person","@id":"https:\/\/uxmag.com\/#\/schema\/person\/cdb2545f444b0f476f0962679a564b96","name":"Daniel 
Godoy","url":"https:\/\/uxmag.com\/contributors\/daniel-godoy"}]}},"_links":{"self":[{"href":"https:\/\/uxmag.com\/wp-json\/wp\/v2\/posts\/17037","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/uxmag.com\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/uxmag.com\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/uxmag.com\/wp-json\/wp\/v2\/users\/2574"}],"replies":[{"embeddable":true,"href":"https:\/\/uxmag.com\/wp-json\/wp\/v2\/comments?post=17037"}],"version-history":[{"count":0,"href":"https:\/\/uxmag.com\/wp-json\/wp\/v2\/posts\/17037\/revisions"}],"wp:attachment":[{"href":"https:\/\/uxmag.com\/wp-json\/wp\/v2\/media?parent=17037"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/uxmag.com\/wp-json\/wp\/v2\/categories?post=17037"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/uxmag.com\/wp-json\/wp\/v2\/tags?post=17037"},{"taxonomy":"topics","embeddable":true,"href":"https:\/\/uxmag.com\/wp-json\/wp\/v2\/topics?post=17037"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}