{"id":18458,"date":"2023-12-07T09:58:33","date_gmt":"2023-12-07T09:58:33","guid":{"rendered":"https:\/\/uxmag.com\/?p=18458"},"modified":"2023-12-07T09:58:41","modified_gmt":"2023-12-07T09:58:41","slug":"lost-in-dall-e-3-translation","status":"publish","type":"post","link":"https:\/\/uxmag.com\/articles\/lost-in-dall-e-3-translation","title":{"rendered":"Lost in DALL-E 3 Translation"},"content":{"rendered":"\n<p><em>This article was originally published\u00a0<\/em><a href=\"https:\/\/www.artfish.ai\/p\/lost-in-dalle3-translation\" target=\"_blank\" rel=\"noreferrer noopener\"><em>on artfish intelligence<\/em><\/a><\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"311d\">Introduction<\/h2>\n\n\n\n<p id=\"4831\">OpenAI recently launched&nbsp;<a href=\"https:\/\/openai.com\/blog\/dall-e-3-is-now-available-in-chatgpt-plus-and-enterprise\" rel=\"noreferrer noopener\" target=\"_blank\">DALL-E 3<\/a>, the latest in their line of AI image generation models.<\/p>\n\n\n\n<p id=\"ce74\">But as&nbsp;<a href=\"https:\/\/restofworld.org\/2023\/ai-image-stereotypes\/\" rel=\"noreferrer noopener\" target=\"_blank\">recent media coverage<\/a>&nbsp;and&nbsp;<a href=\"https:\/\/arxiv.org\/abs\/2303.11408\" rel=\"noreferrer noopener\" target=\"_blank\">research<\/a>&nbsp;reveal, these AI models come with the baggage of biases and stereotypes. For example, AI image generation models such as Stable Diffusion and Midjourney tend to amplify existing stereotypes about&nbsp;<a href=\"https:\/\/www.bloomberg.com\/graphics\/2023-generative-ai-bias\/\" rel=\"noreferrer noopener\" target=\"_blank\">race, gender<\/a>, and&nbsp;<a href=\"https:\/\/restofworld.org\/2023\/ai-image-stereotypes\/\" rel=\"noreferrer noopener\" target=\"_blank\">national identity<\/a>.<\/p>\n\n\n\n<p id=\"81ed\">Most of these studies, however, primarily test the models using English prompts. 
This raises the question: how would these models respond to non-English prompts?<\/p>\n\n\n\n<p id=\"1596\">In this article, I delve into DALL-E 3\u2019s behavior with prompts from diverse languages. Drawing from the themes of my&nbsp;<a href=\"https:\/\/www.artfish.ai\/p\/all-languages-are-not-created-tokenized\" rel=\"noreferrer noopener\" target=\"_blank\">previous works<\/a>, I offer a multilingual perspective on the newest AI image generation model.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"59ca\">How DALL-E 3 works: Prompt Transformations<\/h2>\n\n\n\n<p id=\"1696\">Unlike previous AI image generation models, this newest version of the DALL-E model does not directly generate what you type in. Instead, DALL-E 3 incorporates&nbsp;<strong>automatic prompt transformations<\/strong>, meaning that it&nbsp;<strong>transforms your original prompt into a different, more descriptive version.<\/strong><\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" width=\"792\" height=\"484\" src=\"https:\/\/uxmag.com\/wp-content\/uploads\/2023\/12\/0_3OujRowDa8R17Hwk.webp\" alt=\"\" class=\"wp-image-18459\" srcset=\"https:\/\/uxmag.com\/wp-content\/uploads\/2023\/12\/0_3OujRowDa8R17Hwk.webp 792w, https:\/\/uxmag.com\/wp-content\/uploads\/2023\/12\/0_3OujRowDa8R17Hwk-300x183.webp 300w, https:\/\/uxmag.com\/wp-content\/uploads\/2023\/12\/0_3OujRowDa8R17Hwk-768x469.webp 768w\" sizes=\"(max-width: 792px) 100vw, 792px\" \/><\/figure>\n\n\n\n<p><em>An example of prompt transformation from OpenAI\u2019s paper detailing the caption improvement process:\u00a0<a href=\"https:\/\/cdn.openai.com\/papers\/dall-e-3.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">Improving Image Generation with Better Captions.<\/a>\u00a0Figure created by the author.<\/em><\/p>\n\n\n\n<p id=\"68b0\">According to the&nbsp;<a href=\"https:\/\/cdn.openai.com\/papers\/DALL_E_3_System_Card.pdf\" rel=\"noreferrer noopener\" target=\"_blank\">DALL-E 3 System Card<\/a>, there were a 
few reasons for doing this:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><a href=\"https:\/\/cdn.openai.com\/papers\/dall-e-3.pdf\" rel=\"noreferrer noopener\" target=\"_blank\">Improving captions<\/a>&nbsp;to be more descriptive<\/li>\n\n\n\n<li>Removing public figure names<\/li>\n\n\n\n<li>Specifying more diverse descriptions of generated people (e.g. before prompt transformations, generated people tended to be primarily white, young, and female)<\/li>\n<\/ul>\n\n\n\n<p id=\"e872\">So, the image generation process looks something like this:<\/p>\n\n\n\n<ol class=\"wp-block-list\">\n<li>You type your prompt into DALL-E 3 (available through ChatGPT Plus)<\/li>\n\n\n\n<li>Your prompt is modified under the hood into four different transformed prompts<\/li>\n\n\n\n<li>DALL-E 3 generates an image based on each of the transformed prompts<\/li>\n<\/ol>\n\n\n\n<p id=\"9218\">Adding this sort of prompt transformation is fairly new to the world of image generation. With this added prompt modification, the mechanisms of how AI image generation works under the hood become even more abstracted away from the user.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"e852\">Prompt Transformations in multiple languages<\/h2>\n\n\n\n<p id=\"ac44\">Most research studying biases in text-to-image AI models focuses on English prompts. However, little is known about these models\u2019 behavior when prompted in non-English languages. 
Doing so may surface potential language-specific or culture-specific behavior.<\/p>\n\n\n\n<p id=\"db50\">I asked DALL-E 3 to generate images using the following English prompts:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><code>\u201cAn image of a man\u201d<\/code><\/li>\n\n\n\n<li><code>\u201cAn image of a woman\u201d<\/code><\/li>\n\n\n\n<li><code>\u201cAn image of a person\u201d<\/code><\/li>\n<\/ul>\n\n\n\n<p id=\"0f6e\">I used GPT-4 (without DALL-E 3) to translate the phrases into the following languages: Korean, Mandarin, Burmese, Armenian, and Zulu.<\/p>\n\n\n\n<p id=\"55a6\">Then, I used DALL-E 3 to generate 20 images per language, resulting in 120 images per prompt across the 6 languages. When saving the generated images from ChatGPT Plus, the image filename was automatically set to the text of the transformed prompt. In the rest of the article, I analyze these transformed prompts.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"fd13\">Metadata extraction<\/h3>\n\n\n\n<p id=\"a6dd\"><strong>In my prompts, I never specified a particular culture, ethnicity, or age. However, the transformed prompt often included such indicators.<\/strong><\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img decoding=\"async\" width=\"1024\" height=\"531\" src=\"https:\/\/uxmag.com\/wp-content\/uploads\/2023\/12\/0_Ziv3f2U0wWxybc3G-1024x531.webp\" alt=\"\" class=\"wp-image-18460\" srcset=\"https:\/\/uxmag.com\/wp-content\/uploads\/2023\/12\/0_Ziv3f2U0wWxybc3G-1024x531.webp 1024w, https:\/\/uxmag.com\/wp-content\/uploads\/2023\/12\/0_Ziv3f2U0wWxybc3G-300x156.webp 300w, https:\/\/uxmag.com\/wp-content\/uploads\/2023\/12\/0_Ziv3f2U0wWxybc3G-768x399.webp 768w, https:\/\/uxmag.com\/wp-content\/uploads\/2023\/12\/0_Ziv3f2U0wWxybc3G.webp 1104w\" sizes=\"(max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n<p><em>An example of a prompt transformation, annotated with which part of the sentence refers to art style, age, ethnicity, and gender. 
Figure created by the author.<\/em><\/p>\n\n\n\n<p id=\"c6fd\">From the transformed prompt, I extracted metadata such as art style (\u201cillustration\u201d), age (\u201cmiddle-aged\u201d), ethnicity (\u201cAfrican\u201d), and gender identifier (\u201cwoman\u201d). 66% of transformed prompts contained an ethnicity marker and 58% contained an age marker.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"9abd\">Observation 1: All prompts are transformed into English<\/h2>\n\n\n\n<p id=\"b79c\">No matter what language the original prompt was in,&nbsp;<strong>the modified prompt was always transformed into English.<\/strong><\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" width=\"955\" height=\"673\" src=\"https:\/\/uxmag.com\/wp-content\/uploads\/2023\/12\/0_I8zpgtSeDzU-WAB-.webp\" alt=\"\" class=\"wp-image-18461\" srcset=\"https:\/\/uxmag.com\/wp-content\/uploads\/2023\/12\/0_I8zpgtSeDzU-WAB-.webp 955w, https:\/\/uxmag.com\/wp-content\/uploads\/2023\/12\/0_I8zpgtSeDzU-WAB--300x211.webp 300w, https:\/\/uxmag.com\/wp-content\/uploads\/2023\/12\/0_I8zpgtSeDzU-WAB--768x541.webp 768w\" sizes=\"(max-width: 955px) 100vw, 955px\" \/><\/figure>\n\n\n\n<p><em>A screenshot of ChatGPT Plus showing an example of the original Korean prompt for \u201cAn image of a person\u201d modified into four distinct prompt transformations in English. Figure created by the author.<\/em><\/p>\n\n\n\n<p id=\"6e01\">I found this behavior surprising \u2014 while I was expecting the prompt to be transformed into a more descriptive one, I was not expecting translation into English to also occur.<\/p>\n\n\n\n<p id=\"4918\">The majority of AI generation models, such as Stable Diffusion and Midjourney, are primarily trained and tested in English. 
In general, these models tend to have lower performance when&nbsp;<a href=\"https:\/\/philippstelzel.medium.com\/midjourney-tested-in-foreign-languages-ac60053bcadb#:~:text=Midjourney%20understands%20commands%20in%20other,does%20not%20really%20understand%20languages.\">generating images from non-English prompts<\/a>, leading some users to translate their prompts from their native language into English. However, doing so risks losing the nuance of that native language.<\/p>\n\n\n\n<p id=\"b6d6\">However, to my knowledge, none of these other models automatically translate all prompts into English. Adding this additional step of translation under the hood (and, I\u2019m sure, unbeknownst to most users, as it is not explicitly explained when using the tool) adds more opacity to an already black-box tool.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"b4d7\">Observation 2: The language of the original prompt affects the modified prompt<\/h2>\n\n\n\n<p id=\"b8fa\">The prompt transformation step also seemed to incorporate unspecified metadata about the language of the original prompt.<\/p>\n\n\n\n<p id=\"2300\">For example, when the original prompt was in Burmese,\u00a0<strong>even though the prompt did not mention anything about the Burmese language or people, the prompt transformation often mentioned a Burmese person<\/strong>.<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img decoding=\"async\" width=\"1024\" height=\"428\" src=\"https:\/\/uxmag.com\/wp-content\/uploads\/2023\/12\/0_IZ6V_Ve1QMTieu7g-1024x428.webp\" alt=\"\" class=\"wp-image-18462\" srcset=\"https:\/\/uxmag.com\/wp-content\/uploads\/2023\/12\/0_IZ6V_Ve1QMTieu7g-1024x428.webp 1024w, https:\/\/uxmag.com\/wp-content\/uploads\/2023\/12\/0_IZ6V_Ve1QMTieu7g-300x125.webp 300w, https:\/\/uxmag.com\/wp-content\/uploads\/2023\/12\/0_IZ6V_Ve1QMTieu7g-768x321.webp 768w, https:\/\/uxmag.com\/wp-content\/uploads\/2023\/12\/0_IZ6V_Ve1QMTieu7g.webp 1103w\" sizes=\"(max-width: 1024px) 100vw, 1024px\" 
\/><\/figure>\n\n\n\n<p><em>An example of a prompt in Burmese for \u201cimage of a man\u201d which is transformed by DALL-E 3 into a descriptive prompt about a Burmese man. Figure created by the author.<\/em><\/p>\n\n\n\n<p>This was not the case for all languages and the results varied per language. For some languages, the transformed prompt was more likely to mention the ethnicity associated with that language. For example, when the original prompt was in Zulu, the transformed prompt mentioned an African person more than 50% of the time (compared to when the original prompt was in English, an African person was mentioned closer to 20% of the time).<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img decoding=\"async\" width=\"1024\" height=\"444\" src=\"https:\/\/uxmag.com\/wp-content\/uploads\/2023\/12\/0_-4_hyZH9libdzC5o-1024x444.webp\" alt=\"\" class=\"wp-image-18463\" srcset=\"https:\/\/uxmag.com\/wp-content\/uploads\/2023\/12\/0_-4_hyZH9libdzC5o-1024x444.webp 1024w, https:\/\/uxmag.com\/wp-content\/uploads\/2023\/12\/0_-4_hyZH9libdzC5o-300x130.webp 300w, https:\/\/uxmag.com\/wp-content\/uploads\/2023\/12\/0_-4_hyZH9libdzC5o-768x333.webp 768w, https:\/\/uxmag.com\/wp-content\/uploads\/2023\/12\/0_-4_hyZH9libdzC5o.webp 1072w\" sizes=\"(max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n<p><em>Percentages of ethnicity generated by DALL-E 3 for all combined prompts (image of a person\/man\/woman), for each language. Figure created by the author.<\/em><\/p>\n\n\n\n<p id=\"8a2a\">I do not aim to pass value judgment on whether this behavior is right or wrong, nor am I prescribing what should be an expected behavior. Regardless, I found it interesting that DALL-E 3\u2019s behavior varied so much across the original prompt language. For example, when the original prompt was in Korean, there were no mentions of Korean people in DALL-E 3\u2019s prompt transformations. 
Similarly, when the original prompt was in English, there were no mentions of British people in DALL-E 3\u2019s prompt transformations.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"2ada\">Observation 3: Even with neutral prompts, DALL-E 3 generates gendered prompts<\/h2>\n\n\n\n<p id=\"a0fd\">I mapped the person identifier nouns in DALL-E 3\u2019s prompt transformations to one of three buckets: female, male, or neutral:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>woman, girl, lady \u2192 \u201cfemale\u201d<\/li>\n\n\n\n<li>man, boy, male doctor \u2192 \u201cmale\u201d<\/li>\n\n\n\n<li>athlete, child, teenager, individual, person, people \u2192 \u201cneutral\u201d<\/li>\n<\/ul>\n\n\n\n<p id=\"6640\">Then, I compared the original prompt (\u201cperson\/man\/woman\u201d) to the transformed prompt (\u201cneutral\/male\/female\u201d):<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" width=\"715\" height=\"275\" src=\"https:\/\/uxmag.com\/wp-content\/uploads\/2023\/12\/0_FScKGNrG1ylG4Wh5.webp\" alt=\"\" class=\"wp-image-18464\" srcset=\"https:\/\/uxmag.com\/wp-content\/uploads\/2023\/12\/0_FScKGNrG1ylG4Wh5.webp 715w, https:\/\/uxmag.com\/wp-content\/uploads\/2023\/12\/0_FScKGNrG1ylG4Wh5-300x115.webp 300w\" sizes=\"(max-width: 715px) 100vw, 715px\" \/><\/figure>\n\n\n\n<p><em>Given the original prompt (\u201cAn image of a person\/man\/woman\u201d), the percentage of times the transformed prompt contained gendered individuals. Figure created by the author.<\/em><\/p>\n\n\n\n<p id=\"4b56\">It is no surprise that the original prompt of \u201can image of a man\u201d resulted in mostly male identifiers (and same for women). However, I found it interesting that&nbsp;<strong>when using the gender-neutral prompt \u201cAn image of a person\u201d, DALL-E 3 transformed the prompt to include gendered (e.g. 
woman, man) terms 75% of the time.&nbsp;<\/strong>DALL-E 3 generated transformed prompts including female individuals slightly more often (40%) than male individuals (35%). Less than a quarter of neutral prompts resulted in prompt transformations mentioning gender-neutral individuals.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"bf34\">Observation 4: Women are often described as young, whereas men\u2019s ages are more diverse<\/h2>\n\n\n\n<p id=\"48bc\">Sometimes, DALL-E 3 included an age group (young, middle-aged, or elderly) to describe the individual in the modified prompt.<\/p>\n\n\n\n<p id=\"363f\"><strong>In instances where the prompt mentioned a female individual, descriptions of age tended to skew younger.<\/strong>&nbsp;Specifically, 35% of transformed prompts described female individuals as \u201cyoung,\u201d which is more than twice the frequency of descriptions labeling them as \u201celderly\u201d (13%), and over four times as often as \u201cmiddle-aged\u201d (7.7%). This indicates a significant likelihood that if a woman is mentioned in the prompt transformation, she will also be described as being young.<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" width=\"655\" height=\"338\" src=\"https:\/\/uxmag.com\/wp-content\/uploads\/2023\/12\/0_reH6oko7g_PwXtVX.webp\" alt=\"\" class=\"wp-image-18465\" srcset=\"https:\/\/uxmag.com\/wp-content\/uploads\/2023\/12\/0_reH6oko7g_PwXtVX.webp 655w, https:\/\/uxmag.com\/wp-content\/uploads\/2023\/12\/0_reH6oko7g_PwXtVX-300x155.webp 300w\" sizes=\"(max-width: 655px) 100vw, 655px\" \/><\/figure>\n\n\n\n<p><em>The number of transformed prompts that mention age groups, separated by the gender of the individual mentioned in the prompt. 
Figure created by the author.<\/em><\/p>\n\n\n\n<p>Here are a few examples of prompt transformations:<\/p>\n\n\n\n<p><em>Illustration of a young woman of Burmese descent, wearing a fusion of modern and traditional attire<\/em><\/p>\n\n\n\n<p><em>Photo of a young Asian woman with long black hair, wearing casual clothing, standing against a cityscape background<\/em><\/p>\n\n\n\n<p><em>Watercolor painting of a young woman with long blonde braids, wearing a floral dress, sitting by a lakeside, sketching in her notebook<\/em><\/p>\n\n\n\n<p><em>Oil painting of a young woman wearing a summer dress and wide-brimmed hat, sitting on a park bench with a book in her lap, surrounded by lush greenery<\/em><\/p>\n\n\n\n<p id=\"bcea\">On the other hand, prompt transformations mentioning male individuals showed a more balanced distribution across the age groups. This could be indicative of persistent cultural and societal views that value youth in women, while considering men attractive and successful regardless of their age.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"f9f8\">Observation 5: Variations in person age depend on the original prompt language<\/h2>\n\n\n\n<p id=\"6da0\">The age group varied depending on the language of the original prompt as well. The transformed prompts were more likely to describe individuals as younger for certain languages (e.g. Zulu) and less likely for other languages (e.g. 
Burmese).<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" width=\"949\" height=\"465\" src=\"https:\/\/uxmag.com\/wp-content\/uploads\/2023\/12\/0_q2lkbcTAKDkKNuxV.webp\" alt=\"\" class=\"wp-image-18466\" srcset=\"https:\/\/uxmag.com\/wp-content\/uploads\/2023\/12\/0_q2lkbcTAKDkKNuxV.webp 949w, https:\/\/uxmag.com\/wp-content\/uploads\/2023\/12\/0_q2lkbcTAKDkKNuxV-300x147.webp 300w, https:\/\/uxmag.com\/wp-content\/uploads\/2023\/12\/0_q2lkbcTAKDkKNuxV-768x376.webp 768w\" sizes=\"(max-width: 949px) 100vw, 949px\" \/><\/figure>\n\n\n\n<p><em>The number of transformed prompts mentioning age groups for all prompts (an image of a man\/woman\/person), separated by the language of the original prompt. Figure created by the author.<\/em><\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"5962\">Observation 6: Variations in art style depend on individual gender<\/h2>\n\n\n\n<p id=\"33ca\">I expected the art style (e.g. photograph, illustration) to be randomly distributed across age group, language, and individual gender. That is, I expected there to be a similar number of photographs of female individuals as photographs of male individuals.<\/p>\n\n\n\n<p id=\"b04c\">However, this was not the case. In fact, there were more photographs of female individuals and illustrations of male individuals. 
The art style describing an individual did&nbsp;<em>not<\/em>&nbsp;seem to be distributed uniformly across genders, but rather, seemed to prefer certain genders over others.<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" width=\"662\" height=\"338\" src=\"https:\/\/uxmag.com\/wp-content\/uploads\/2023\/12\/0_ahR1U9y8-9RjXlLH.webp\" alt=\"\" class=\"wp-image-18467\" srcset=\"https:\/\/uxmag.com\/wp-content\/uploads\/2023\/12\/0_ahR1U9y8-9RjXlLH.webp 662w, https:\/\/uxmag.com\/wp-content\/uploads\/2023\/12\/0_ahR1U9y8-9RjXlLH-300x153.webp 300w\" sizes=\"(max-width: 662px) 100vw, 662px\" \/><\/figure>\n\n\n\n<p><em>The number of transformed prompts mentioning each art style, separated by the gender of the individual mentioned in the prompt. Figure created by the author.<\/em><\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"01c3\">Observation 7: Repetition of tropes, from young Asian women to elderly African men<\/h2>\n\n\n\n<p id=\"a616\">In my experiments, there were 360 unique demographic descriptions in the prompt transformations (e.g. age\/ethnicity\/gender combinations). While many combinations only occurred a few times (such as \u201cyoung Burmese woman\u201d or \u201celderly European man\u201d), certain demographic descriptions appeared more frequently than others.<\/p>\n\n\n\n<p id=\"d5da\">One common description was \u201celderly African man\u201d, which appeared 11 times. 
Looking at some of the resulting generated images revealed variations of a man with similar facial expressions, poses, accessories, and clothing.<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" width=\"990\" height=\"423\" src=\"https:\/\/uxmag.com\/wp-content\/uploads\/2023\/12\/0_ddr9OFIicC8KsjRb.webp\" alt=\"\" class=\"wp-image-18468\" srcset=\"https:\/\/uxmag.com\/wp-content\/uploads\/2023\/12\/0_ddr9OFIicC8KsjRb.webp 990w, https:\/\/uxmag.com\/wp-content\/uploads\/2023\/12\/0_ddr9OFIicC8KsjRb-300x128.webp 300w, https:\/\/uxmag.com\/wp-content\/uploads\/2023\/12\/0_ddr9OFIicC8KsjRb-768x328.webp 768w\" sizes=\"(max-width: 990px) 100vw, 990px\" \/><\/figure>\n\n\n\n<p>Even more common was the description \u201cyoung Asian woman\u201d, which appeared 23 times. Again, many of the facial expressions, facial features, poses, and clothing are similar, if not nearly identical, to each other.<\/p>\n\n\n\n<figure class=\"wp-block-image size-full is-resized\"><img decoding=\"async\" width=\"990\" height=\"423\" src=\"https:\/\/uxmag.com\/wp-content\/uploads\/2023\/12\/0_8kwLj5nY8N2jnkxi.webp\" alt=\"\" class=\"wp-image-18469\" style=\"width:843px;height:auto\" srcset=\"https:\/\/uxmag.com\/wp-content\/uploads\/2023\/12\/0_8kwLj5nY8N2jnkxi.webp 990w, https:\/\/uxmag.com\/wp-content\/uploads\/2023\/12\/0_8kwLj5nY8N2jnkxi-300x128.webp 300w, https:\/\/uxmag.com\/wp-content\/uploads\/2023\/12\/0_8kwLj5nY8N2jnkxi-768x328.webp 768w\" sizes=\"(max-width: 990px) 100vw, 990px\" \/><\/figure>\n\n\n\n<p><em>A subset of images whose transformed prompt contained the phrase \u201cyoung Asian woman\u201d. Figure created by the author.<\/em><\/p>\n\n\n\n<p id=\"dae0\">This phenomenon captures the essence of bias that permeates our world. 
When we observe the faces of&nbsp;<a href=\"https:\/\/www.rollingstone.com\/music\/music-news\/k-pop-has-so-many-lookalikes-that-its-government-stepped-in-796791\/\" rel=\"noreferrer noopener\" target=\"_blank\">Korean K-Pop stars<\/a>&nbsp;or&nbsp;<a href=\"https:\/\/zhuanlan.zhihu.com\/p\/622175815?fbclid=IwAR06YQQjpd5B8ZBOLF1f3rug_3mO4kTQu2bSrPNR1u_DkYRSyK04DtNrfEo\" rel=\"noreferrer noopener\" target=\"_blank\">Chinese idols<\/a>, there is a striking similarity in their facial structures. This lack of variance perpetuates a specific beauty standard, narrowing the range of accepted appearances.<\/p>\n\n\n\n<p id=\"85e5\">Similarly, in the case of AI-generated images, the narrow interpretations of demographic descriptions such as \u201celderly African men\u201d and \u201cyoung Asian women\u201d contribute to harmful stereotypes. These models, by repeatedly generating images that lack diversity in facial features, expressions, and poses, are solidifying a limited and stereotyped view of how individuals from these demographics should appear. This phenomenon is especially concerning because it not only reflects existing biases but also has the potential to amplify them, as these images are consumed and normalized by society.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"10ed\">But how does DALL-E 3 compare to other image generation models?<\/h2>\n\n\n\n<p id=\"c278\">I generated images in the 6 languages for the prompt \u201can image of a person\u201d using two other popular text-to-image AI tools:&nbsp;<a href=\"https:\/\/www.midjourney.com\/app\/\" rel=\"noreferrer noopener\" target=\"_blank\">Midjourney<\/a>&nbsp;and&nbsp;<a href=\"https:\/\/stability.ai\/stable-diffusion\" rel=\"noreferrer noopener\" target=\"_blank\">Stable Diffusion XL<\/a>.<\/p>\n\n\n\n<p id=\"8898\">For images generated using Midjourney, non-English prompts were likely to generate images of landscapes rather than humans (although, let\u2019s be fair, the English images are pretty creepy). 
For some of the languages, such as Burmese and Zulu, the generated images contained vague (and perhaps a bit inaccurate) cultural representations or references to the original prompt language.<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img decoding=\"async\" width=\"1024\" height=\"821\" src=\"https:\/\/uxmag.com\/wp-content\/uploads\/2023\/12\/0_zXOVlWPo_JHLu0v0-1024x821.webp\" alt=\"\" class=\"wp-image-18470\" srcset=\"https:\/\/uxmag.com\/wp-content\/uploads\/2023\/12\/0_zXOVlWPo_JHLu0v0-1024x821.webp 1024w, https:\/\/uxmag.com\/wp-content\/uploads\/2023\/12\/0_zXOVlWPo_JHLu0v0-300x240.webp 300w, https:\/\/uxmag.com\/wp-content\/uploads\/2023\/12\/0_zXOVlWPo_JHLu0v0-768x615.webp 768w, https:\/\/uxmag.com\/wp-content\/uploads\/2023\/12\/0_zXOVlWPo_JHLu0v0.webp 1097w\" sizes=\"(max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n<p><em>Images generated using\u00a0<a href=\"https:\/\/www.midjourney.com\/app\/\" target=\"_blank\" rel=\"noreferrer noopener\">Midjourney<\/a>\u00a0in the six languages for the prompt \u201can image of a person\u201d. Figure created by the author.<\/em><\/p>\n\n\n\n<p>Similar patterns were observed in the images generated using Stable Diffusion XL. Non-English prompts were more likely to generate images of landscapes. The Armenian prompt only generated what looks like carpet patterns. Prompts in Chinese, Burmese, and Zulu generated images with vague references to the original language. 
(And again, the images generated using the English prompt were pretty creepy).<\/p>\n\n\n\n<figure class=\"wp-block-image size-large is-resized\"><img decoding=\"async\" width=\"1024\" height=\"802\" src=\"https:\/\/uxmag.com\/wp-content\/uploads\/2023\/12\/0_hI5bhLHYsqEPcB3J-1024x802.webp\" alt=\"\" class=\"wp-image-18471\" style=\"width:840px;height:auto\" srcset=\"https:\/\/uxmag.com\/wp-content\/uploads\/2023\/12\/0_hI5bhLHYsqEPcB3J-1024x802.webp 1024w, https:\/\/uxmag.com\/wp-content\/uploads\/2023\/12\/0_hI5bhLHYsqEPcB3J-300x235.webp 300w, https:\/\/uxmag.com\/wp-content\/uploads\/2023\/12\/0_hI5bhLHYsqEPcB3J-768x601.webp 768w, https:\/\/uxmag.com\/wp-content\/uploads\/2023\/12\/0_hI5bhLHYsqEPcB3J.webp 1087w\" sizes=\"(max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n<p><em>Images generated using\u00a0<a href=\"https:\/\/stability.ai\/stable-diffusion\" target=\"_blank\" rel=\"noreferrer noopener\">Stable Diffusion XL<\/a>\u00a0in the six languages for the prompt \u201can image of a person\u201d. I used\u00a0<a href=\"https:\/\/playgroundai.com\/\" target=\"_blank\" rel=\"noreferrer noopener\">Playground AI<\/a>\u00a0to use the model. Figure created by the author.<\/em><\/p>\n\n\n\n<p>In a way, DALL-E 3\u2019s prompt transformations served as a way to artificially introduce more variance and diversity into the image generation process. 
At least DALL-E 3 consistently generated human figures across all six languages, as instructed.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"6104\">Discussion and concluding remarks<\/h2>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p id=\"7b12\"><em>Automatic prompt transformations present considerations of their own: they may alter the meaning of the prompt, potentially carry inherent biases, and may not always align with individual user preferences.<br>\u2014&nbsp;<\/em><a href=\"https:\/\/cdn.openai.com\/papers\/DALL_E_3_System_Card.pdf\" rel=\"noreferrer noopener\" target=\"_blank\"><em>DALL-E 3 System Card<\/em><\/a><\/p>\n<\/blockquote>\n\n\n\n<p id=\"0dce\">In this article, I explored how DALL-E 3 uses prompt transformations to enhance the user\u2019s original prompt. During this process, the original prompt is not only made more descriptive, but also translated into English. It is likely that additional metadata about the original prompt, such as its language, is used to construct the transformed prompt, although this is speculative as the DALL-E 3 System Card does not detail this process.<\/p>\n\n\n\n<p id=\"136c\">My testing of DALL-E 3 spanned six different languages, but it is important to note that this is not an exhaustive examination given the hundreds of languages spoken worldwide. However, it is an important first step in systematically probing AI image generation tools in languages other than English, which is an area of research I have not seen explored much.<\/p>\n\n\n\n<p id=\"ea1d\">The prompt transformation step was not transparent to users when accessing DALL-E 3 via the ChatGPT Plus web app. 
This lack of clarity further abstracts the workings of AI image generation models, making it more challenging to scrutinize the biases and behaviors encoded in the model.<\/p>\n\n\n\n<p id=\"4d49\">However, in comparison to other AI image generation models, DALL-E 3 was&nbsp;<em>overall<\/em>&nbsp;<em>more<\/em>&nbsp;<em>accurate<\/em>&nbsp;in following the prompt to generate a person and&nbsp;<em>overall<\/em>&nbsp;<em>more<\/em>&nbsp;<em>diverse<\/em>&nbsp;in generating faces of many ethnicities (due to the prompt transformations). Therefore, while there might have been limited diversity within certain ethnic categories in terms of facial features, the overall outcome was a higher diversity (albeit&nbsp;<em>artificially induced<\/em>) in the generated images compared to other models.<\/p>\n\n\n\n<p id=\"af21\">I end this article with open questions about what the desired output of AI text-to-image models should be. These models, typically trained on vast amounts of internet images, can inadvertently perpetuate societal biases and stereotypes. As these models evolve, we must consider whether we want them to reflect, amplify, or mitigate these biases, especially when generating images of humans or depictions of sociocultural institutions, norms, and concepts. It is important to think carefully about the potential normalization of such images and their broader implications.<\/p>\n\n\n\n<p id=\"f818\"><em>Note: DALL-E 3 and ChatGPT are both products that evolve regularly. Even though I conducted my experiments a week ago, some of the results found in this article may already be outdated or not replicable anymore. This will inevitably happen as the models continue to be trained and as the user interface continues to be updated. 
While that is the nature of the AI space at this current time, the method of probing image generation models across non-English languages is still applicable for future studies.<\/em><\/p>\n","protected":false},"excerpt":{"rendered":"<p>This article was originally published\u00a0on artfish intelligence Introduction OpenAI recently launched&nbsp;DALL-E 3, the latest in their line of AI image generation models. But as&nbsp;recent media coverage&nbsp;and&nbsp;research&nbsp;reveal, these AI models come with the baggage of biases and stereotypes. For example, AI image generation models such as Stable Diffusion and Midjourney tend to amplify existing stereotypes about&nbsp;race,<\/p>\n","protected":false},"author":2641,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_genesis_hide_title":false,"_genesis_hide_breadcrumbs":false,"_genesis_hide_singular_image":false,"_genesis_hide_footer_widgets":false,"_genesis_custom_body_class":"","_genesis_custom_post_class":"","_genesis_layout":"","footnotes":""},"categories":[1],"tags":[],"topics":[14,144,147,28,30],"class_list":{"0":"post-18458","1":"post","2":"type-post","3":"status-publish","4":"format-standard","6":"category-uncategorized","7":"topics-artificial-intelligence","8":"topics-conversational-design","9":"topics-defining-ai","10":"topics-design","11":"topics-design-tools-and-software","12":"entry"},"yoast_head":"<!-- This site is optimized with the Yoast SEO Premium plugin v18.2.1 (Yoast SEO v25.9) - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>Lost in DALL-E 3 Translation - UX Magazine<\/title>\n<meta name=\"description\" content=\"Generating AI images in multiple languages leads to different results.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/uxmag.com\/articles\/lost-in-dall-e-3-translation\" \/>\n<meta 
property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Lost in DALL-E 3 Translation\" \/>\n<meta property=\"og:description\" content=\"Generating AI images in multiple languages leads to different results.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/uxmag.com\/articles\/lost-in-dall-e-3-translation\" \/>\n<meta property=\"og:site_name\" content=\"UX Magazine\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/uxmag\" \/>\n<meta property=\"article:published_time\" content=\"2023-12-07T09:58:33+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2023-12-07T09:58:41+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/uxmag.com\/wp-content\/uploads\/2023\/12\/0_3OujRowDa8R17Hwk.webp\" \/>\n<meta name=\"author\" content=\"Yennie Jun\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@uxmag\" \/>\n<meta name=\"twitter:site\" content=\"@uxmag\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Yennie Jun\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"14 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/uxmag.com\/articles\/lost-in-dall-e-3-translation#article\",\"isPartOf\":{\"@id\":\"https:\/\/uxmag.com\/articles\/lost-in-dall-e-3-translation\"},\"author\":{\"name\":\"Yennie Jun\",\"@id\":\"https:\/\/uxmag.com\/#\/schema\/person\/b6492db6763203dcccd60fbfbd542b12\"},\"headline\":\"Lost in DALL-E 3 Translation\",\"datePublished\":\"2023-12-07T09:58:33+00:00\",\"dateModified\":\"2023-12-07T09:58:41+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/uxmag.com\/articles\/lost-in-dall-e-3-translation\"},\"wordCount\":2597,\"publisher\":{\"@id\":\"https:\/\/uxmag.com\/#organization\"},\"image\":{\"@id\":\"https:\/\/uxmag.com\/articles\/lost-in-dall-e-3-translation#primaryimage\"},\"thumbnailUrl\":\"https:\/\/uxmag.com\/wp-content\/uploads\/2023\/12\/0_3OujRowDa8R17Hwk.webp\",\"inLanguage\":\"en-US\"},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/uxmag.com\/articles\/lost-in-dall-e-3-translation\",\"url\":\"https:\/\/uxmag.com\/articles\/lost-in-dall-e-3-translation\",\"name\":\"Lost in DALL-E 3 Translation - UX Magazine\",\"isPartOf\":{\"@id\":\"https:\/\/uxmag.com\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/uxmag.com\/articles\/lost-in-dall-e-3-translation#primaryimage\"},\"image\":{\"@id\":\"https:\/\/uxmag.com\/articles\/lost-in-dall-e-3-translation#primaryimage\"},\"thumbnailUrl\":\"https:\/\/uxmag.com\/wp-content\/uploads\/2023\/12\/0_3OujRowDa8R17Hwk.webp\",\"datePublished\":\"2023-12-07T09:58:33+00:00\",\"dateModified\":\"2023-12-07T09:58:41+00:00\",\"description\":\"Generating AI images in multiple languages leads to different 
results.\",\"breadcrumb\":{\"@id\":\"https:\/\/uxmag.com\/articles\/lost-in-dall-e-3-translation#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/uxmag.com\/articles\/lost-in-dall-e-3-translation\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/uxmag.com\/articles\/lost-in-dall-e-3-translation#primaryimage\",\"url\":\"https:\/\/uxmag.com\/wp-content\/uploads\/2023\/12\/0_3OujRowDa8R17Hwk.webp\",\"contentUrl\":\"https:\/\/uxmag.com\/wp-content\/uploads\/2023\/12\/0_3OujRowDa8R17Hwk.webp\",\"width\":792,\"height\":484},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/uxmag.com\/articles\/lost-in-dall-e-3-translation#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/uxmag.com\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Artificial Intelligence\",\"item\":\"https:\/\/uxmag.com\/topics\/artificial-intelligence\"},{\"@type\":\"ListItem\",\"position\":3,\"name\":\"Lost in DALL-E 3 Translation\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/uxmag.com\/#website\",\"url\":\"https:\/\/uxmag.com\/\",\"name\":\"UX Magazine\",\"description\":\"UX Magazine is a central, one-stop resource for everything related to user experience. We provide a steady stream of current, informative, and credible information about UX and related fields to enhance the professional and creative lives of UX practitioners and those exploring the field. 
Our content is driven and created by an impressive roster of experienced professionals\",\"publisher\":{\"@id\":\"https:\/\/uxmag.com\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/uxmag.com\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/uxmag.com\/#organization\",\"name\":\"UX Magazine\",\"alternateName\":\"uxmag\",\"url\":\"https:\/\/uxmag.com\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/uxmag.com\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/uxmag.com\/wp-content\/uploads\/2021\/01\/UX-Magazine-logo-2.png\",\"contentUrl\":\"https:\/\/uxmag.com\/wp-content\/uploads\/2021\/01\/UX-Magazine-logo-2.png\",\"width\":2440,\"height\":428,\"caption\":\"UX Magazine\"},\"image\":{\"@id\":\"https:\/\/uxmag.com\/#\/schema\/logo\/image\/\"},\"sameAs\":[\"https:\/\/www.facebook.com\/uxmag\",\"https:\/\/x.com\/uxmag\",\"https:\/\/www.linkedin.com\/company\/ux-magazine\/\"]},{\"@type\":\"Person\",\"@id\":\"https:\/\/uxmag.com\/#\/schema\/person\/b6492db6763203dcccd60fbfbd542b12\",\"name\":\"Yennie Jun\",\"url\":\"https:\/\/uxmag.com\/contributors\/yennie-jun\"}]}<\/script>\n<!-- \/ Yoast SEO Premium plugin. 
-->"}