Running head: Shared and specific false memories

The Visual Mandela Effect as evidence for shared and specific false memories across people

Deepasri Prasad^1, Wilma A. Bainbridge^1
^1 Department of Psychology, University of Chicago, Chicago, IL, 60637

Corresponding author: Deepasri Prasad, Department of Psychology, University of Chicago, 5848 South University Avenue, Chicago, IL, 60637; email: prasadee@uchicago.edu; tel: (773) 995-2139
Abstract

The Mandela Effect is an internet phenomenon describing shared and consistent false memories for specific icons in popular culture. The Visual Mandela Effect (VME) is a Mandela Effect specific to visual icons (e.g., the Monopoly Man is falsely remembered with a monocle) and has not yet been empirically quantified or tested. In Experiment 1 (N = 100), we demonstrate that certain images from popular iconography elicit consistent, specific false memories. In Experiment 2 (N = 60), using eye-tracking-like methods, we find no attentional or visual differences that drive this phenomenon. There is no clear difference in the natural visual experience of these images (Experiment 3), and these VME errors also occur spontaneously during recall (Experiment 4; N = 50). These results demonstrate that there are certain images for which people consistently make the same false memory error, despite the majority of their visual experience being the canonical image.

Statement of Relevance

There are widespread informal reports on the internet of many people having the same specific false memory for certain items, usually from pop culture, called the Mandela Effect. Surprisingly, there are even images that people falsely remember despite almost never seeing the false version in the world (the Visual Mandela Effect, VME). However, there has been no experimental confirmation of the VME, nor any attempt to characterize these false memories. Using a recognition task, we demonstrate that people have specific and consistent false memories for certain pop culture images, showing that the VME does exist. Furthermore, our study suggests that these false memories are not driven by low-level feature differences or attentional differences, and that they can occur during recall.
These data add to a growing body of surprising results showing consistency in what people remember, by demonstrating new evidence that there is also consistency in what people misremember.

Keywords: visual memory; recognition; visual recall; memory errors; drawing paradigm
and extra leaves (Blake et al., 2015). These results show that even highly familiar stimuli are susceptible to schema-consistent errors.

In casual pop science discussions, schema theory has been mentioned as a possible explanation of the VME (Dagnall & Drinkwater, 2018), although it has not been formally investigated. Indeed, in many reported VMEs, the erroneous feature is schema-consistent. The Monopoly Man, for example, is the quintessential older rich man, and a monocle is stereotypical of this schema. However, there are defining aspects of the VME that this theory cannot fully account for. In previous studies of highly familiar stimuli, while falsely remembered features were schema-consistent, they varied across people. Conversely, the false features reported for the VME are specific and consistent across people. Furthermore, the VME has only been reported for a select few visual icons; if it were simply a matter of schema-consistent errors, we would expect to see more images affected by the VME phenomenon. Given the high specificity and consistency reported with the VME, the schema explanation does not suffice by itself.

Fig. 1. Which image is the real version? (a) The original, unaltered images that are reported to be affected by the VME. People have exposure only to this version of the images in their daily lives. (b) The reported VME-induced representation that people have a strong false memory for, despite never having seen this version before. The purported VME representations are a golden leg for C3PO, a cornucopia for the Fruit of the Loom logo, a tail for Curious George, a monocle for the Monopoly Man, a black-tipped tail for Pikachu, and no bar between the 'V' and the 'W' of the Volkswagen logo. (c) An alternative, matched manipulation of the same feature that has not been reported as a VME representation.

Consistent memory performance is not unique to the VME; people are highly consistent in their accurate memory for images, universally remembering some over others (Bainbridge et al., 2013; Bainbridge, Dilks, & Oliva, 2017; Bainbridge, 2019). These results suggest that a proportion of what dictates memory performance is intrinsic to the stimulus and independent of individual experience. These studies investigating the intrinsic memorability of images also show that images, despite being of a similar schema (e.g., faces), can elicit highly consistent memory behaviors across participants (Bainbridge et al., 2013). While memorability work has mainly focused on successful recognition of images, some work also suggests high consistency in false recognition (Bainbridge et al., 2013). Applying this to the VME, perhaps something about the images themselves is what drives the effect.

Here, we provide empirical evidence demonstrating that there are certain images that elicit a specific false memory, despite high familiarity and confidence. In Experiment 1, we examine the VME through a forced-choice recognition task, where participants are asked to pick the canonical version of an image from among a set of manipulated versions. In Experiment 2, we characterize the perceptual and attentional forces behind this effect through a computer-based method analogous to eye-tracking. In Experiment 3, we quantify the natural visual experience of these images by scraping real-world images from the Internet. Finally, in Experiment 4, we test whether the VME occurs during free recall with an image drawing paradigm.
We reveal that the VME exists for certain images, showing that there are consistencies in people's false memories in both recognition and recall. We observe these memory errors both when participants report long-term knowledge (Experiments 1 and 4) and when they report short-term memory (Experiments 2 and 4). Furthermore, we find that differences in the visual inspection of an image do not account for these false memories (Experiment 2), and there is no universal facet of real-world viewing experience that explains the effect (Experiment 3). These results suggest that there may not be a unitary account driving the VME, despite the high consistency across people.

Open Practices Statement

The stimuli and data for all experiments are publicly available at https://osf.io/7cmwf/. None of the experiments reported in this article were preregistered.
position/orientation change of a feature; and color change of a feature. The 80 manipulations included twenty-six feature subtractions, twenty-one feature additions, twenty-four feature changes, three position/orientation changes, and six color changes. Manipulations were intended to be relatively schema-consistent and to fit the original design of the image. The manipulation type was not predictive of the correct answer, and the absence or presence of a feature on an image did not always mean the image was the original. Thus, participants could not determine the correct image just from reasoning about the types of manipulations in the three versions of the image. All stimuli are accessible through our publicly available repository on the Open Science Framework (https://osf.io/7cmwf/).

Participants

Participants on Amazon Mechanical Turk (AMT), an online crowdsourcing platform for tasks, were screened for location (U.S.) and English comprehension. Participants were excluded if more than 50% of the questions were not answered. Since Experiment 1 served as a first exploratory analysis of the VME, we aimed to recruit a high number of participants to make judgments on each image; ultimately, 100 AMT participants (35 female; mean age = 39 years, SD = 12.3) successfully completed this task. No personally identifiable information was collected from any participants, and participants had to acknowledge participation in order to continue, following the guidelines approved by the University of Chicago Institutional Review Board (IRB19-1395).

Procedure

Each participant saw all forty image sets with all three versions. Presentation order of both the image sets and the versions within each set was randomized.
Participants were asked to choose the correct version (i.e., the canonical version) of the image from the three versions and to rate their confidence in their choice on a 1-to-5 Likert scale (1 = not at all confident, 5 = extremely confident). They also rated their familiarity with the image concept on a 1-to-5 Likert scale (1 = not at all familiar, 5 = extremely familiar) and reported how many times they had seen the image concept before (0, 1-10, 11-50, 51-100, 101-1000, 1000+). See Figure 2 for an example question.
Results

Each image received an average of 98.1 responses. If a participant skipped the image-choice question, the follow-up questions for that image were not included in the analysis. In every experiment of this study, an alpha level of 0.05 was used for all tests.

For an image to show a VME, five criteria must be met: (1) the image must have low identification accuracy; (2) there must be a specific incorrect version of the image that is falsely recognized; (3) these incorrect responses must be highly consistent across people; (4) the image must show low accuracy even when it is rated as familiar; and (5) incorrect responses to the image must be made with high confidence. Toward the first criterion, we calculated the percent accuracy for each image (Figure 3A). Five images had an identification accuracy below chance (< 33%), while two others were close to chance (34% and 35%). To determine whether a specific incorrect manipulation was chosen for these images, we first labeled the two different manipulations: for each image concept, Manipulation 1 refers to the manipulation that had the highest proportion of incorrect responses, and Manipulation 2 refers to the remaining manipulation.

Fig. 2. Forced-choice recognition task. An example question from the forced-choice recognition task. For 40 icons, participants were asked to choose the correct version from a set of three, rate their confidence in their choice, rate their familiarity with the image concept, and report the approximate number of times they had seen the image concept before.
To analyze how familiarity and confidence compare between VME-apparent images and the other images, we compared the mean confidence, mean familiarity, and accuracy for each image (Figure 4). As expected, there was a significant correlation between mean confidence and mean familiarity across all images (r = 0.90, p = 2.64 × 10^-15), indicating that people made more confident responses for the items that were familiar. However, there were surprisingly lower, though still significant, correlations between accuracy and confidence (r = 0.59, p = 5.61 × 10^-5) and between accuracy and familiarity (r = 0.42, p = 0.007). Visual inspection of these plots showed that eight images fell below the distribution (Figure 4B, C): these images had low accuracy but high familiarity and confidence ratings. Seven of these eight images were the seven VME-apparent images identified by the chi-square test. To determine whether VME-apparent images were driving these decreased correlations of memory accuracy with familiarity and confidence, we ran a permutation test (Figure 4). We randomly dropped seven images from the sample and calculated the new correlation coefficient 1,000 times. We then compared the distribution of correlation coefficients for these randomly removed samples to the correlation coefficient of the sample when only the seven VME-apparent images were removed. If the correlation increased when VME-apparent images were removed from the data compared to a random set of seven, it would suggest that these specific images were negatively impacting the correlation. There was a significant effect of VME-apparent images on the correlation between accuracy and familiarity (p = 0.044) and between accuracy and confidence (p = 0.001), but not on the correlation between confidence and familiarity (p = 0.540). This suggests that the seven VME-apparent images do not fit the overall trend; for these images, accuracy does not increase with increasing familiarity and confidence. Furthermore, their accuracy is surprisingly low given the familiarity and confidence people reported for these images.

Fig. 3. Seven images show shared and specific incorrect responses. (a) Seven images had identification accuracy below or near chance (33%). For these seven images, denoted by the arrows, the proportions of correct responses and Manipulation 1 responses were independent (all χ^2 ≥ 6.089; all p < 0.01), with a significantly high proportion of Manipulation 1 responses, indicating that a specific incorrect version was chosen. (b) The Spearman rank correlation across the 10,000 shuffled participant halves showed high consistency across participants for both correct responses (ρ = 0.922, p < 0.0001) and Manipulation 1 responses (ρ = 0.876, p < 0.0001). VME-apparent images, indicated by the colored-in asterisks, fall at the extremes of the proportions.
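The permutation test described above (repeatedly dropping random sets of seven images and comparing against dropping the seven VME-apparent images) can be sketched in a few lines. This is an illustrative reconstruction, not the authors' analysis code; the function names and the synthetic data in the usage are our own.

```python
import numpy as np

def corr_after_drop(x, y, drop_idx):
    """Pearson correlation after removing the items at drop_idx."""
    keep = np.setdiff1d(np.arange(len(x)), drop_idx)
    return np.corrcoef(x[keep], y[keep])[0, 1]

def permutation_drop_test(accuracy, familiarity, vme_idx, n_perm=1000, seed=0):
    """Test whether removing the VME-apparent images raises the
    accuracy-familiarity correlation more than removing random
    same-sized subsets of images."""
    rng = np.random.default_rng(seed)
    r_vme = corr_after_drop(accuracy, familiarity, vme_idx)
    null = np.array([
        corr_after_drop(accuracy, familiarity,
                        rng.choice(len(accuracy), size=len(vme_idx), replace=False))
        for _ in range(n_perm)
    ])
    # p-value: how often a random drop restores the correlation at least as well
    return r_vme, np.mean(null >= r_vme)
```

On simulated data where seven images sit far below an otherwise tight accuracy-familiarity trend, removing exactly those seven yields a correlation no random seven-image drop can match, giving a small p-value.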
There was also no significant difference between the number of times participants had seen the VME-apparent images before and the number of times they had seen the images that were correctly identified (Wilcoxon rank-sum test, z = 0.64, p = 0.523), supporting the idea that there is no difference in prior exposure between the VME-apparent images that induce false memories and the images that do not.
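The split-half consistency measure reported in Fig. 3b (Spearman rank correlation between image-wise response proportions across shuffled participant halves) can be approximated as follows. A hedged sketch with simulated data: `responses` is a hypothetical participants × images matrix of binary choices, and the rank computation ignores ties for simplicity.

```python
import numpy as np

def simple_rank(a):
    """Rank values 0..n-1 (no tie correction; adequate for a sketch)."""
    r = np.empty(len(a), dtype=int)
    r[np.argsort(a)] = np.arange(len(a))
    return r

def split_half_consistency(responses, n_shuffles=10_000, seed=0):
    """Mean Spearman-style correlation between image-wise response
    proportions computed on random halves of the participants."""
    rng = np.random.default_rng(seed)
    n_part = responses.shape[0]
    rhos = np.empty(n_shuffles)
    for i in range(n_shuffles):
        order = rng.permutation(n_part)
        half_a, half_b = order[: n_part // 2], order[n_part // 2:]
        prop_a = responses[half_a].mean(axis=0)
        prop_b = responses[half_b].mean(axis=0)
        rhos[i] = np.corrcoef(simple_rank(prop_a), simple_rank(prop_b))[0, 1]
    return rhos.mean()
```

When images genuinely differ in how often they elicit a given response, independent participant halves rank the images similarly and the mean correlation is high; if responses were uniform across images, it would hover near zero.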
Experiment 2

We next sought to understand why these seven images induce a specific shared false memory. Specifically, we wanted to test how attention and perceptual processing of the images influence these false memories: are the features that elicit false memories viewed differently from those that are correctly remembered? For example, perhaps the VME occurs because the feature is not fixated upon during normal perception, and people thus "fill in" that region with prior knowledge. Another possibility is that the VME results from a source memory error, in which participants have actually seen a non-canonical version of the icon prior to the experiment and believe that is the correct version. To address these possible explanations, we conducted an experiment using a short-term memory task with a technique similar to eye-tracking. By using a short-term memory task, we could test whether these errors still occur shortly after perception of the canonical image. In addition, the mouse-tracking technique allowed us to see how participants inspect these images, and how inspection patterns may relate to later false memories. We also examined the differences between images using low-level visual feature analysis, and polled participants' intuitions about their decisions.

Methods

Stimuli

Because only seven of the forty image sets used in Experiment 1 were VME-apparent, we tested the seven VME-apparent image concepts and a matched subset of seven non-VME control image concepts, selected as those that were high in accuracy but matched in familiarity (independent t-test, t(12) = 0.11, p = 0.911; VME-apparent: M = 3.78, SD = 0.17; matched subset: M = 3.77, SD = 0.15).

Participants

Participants on Prolific, an online crowdsourcing platform for tasks, were screened for location (U.S.) and English comprehension.
Participants were excluded if they failed the attention check or if they did not follow task instructions; one participant was excluded for not moving their mouse during the task. Sixty Prolific participants (32 female; mean age = 34.7 years, SD = 13.5) successfully completed this task. The participant sample size was selected to match the sample size of a related experiment (see BubbleView Experiment in SOM-R). No personally identifiable information was collected from any participants, and participants had to
acknowledge participation in order to continue, following the guidelines approved by the University of Chicago Institutional Review Board (IRB19-1395).

Procedure

To determine how people viewed these images, we used MouseView, a mouse-tracking method analogous to eye-tracking (Anwyl-Irvine et al., 2021). A target image is obscured by a white overlay, and MouseView creates a circular aperture that moves with the computer mouse to reveal the image underneath it (Figure 5). By moving their cursor around, participants can continuously uncover small sections of the target image at a time, imitating foveation. MouseView measures have been shown to significantly predict real human fixation measures (Anwyl-Irvine et al., 2021). In the current experiment, all MouseView images were presented at a size of 500 × 500 pixels with an aperture of 5% of the viewing window. We used the jsPsych implementation of MouseView (de Leeuw, 2015) and hosted the experiment on Cognition, an online jsPsych experiment platform (Cognition: Run Experiments Online, n.d.).

This experiment consisted of two main phases: a study phase and a test phase. Each phase had fourteen trials, one for each VME and matched non-VME image. For the study phase, participants were directed to examine each image by using their mouse cursor to move around and reveal the image underneath a white overlay. Participants only saw the correct version of each image during the study phase.

Fig. 5. Experiment 2 methods. During the study phase, participants inspected the correct version of 7 VME images and 7 matched non-VME images. A blurred version of the image was shown for 250 ms, and then participants had to move a circular aperture to reveal the image underneath, approximating fixation behavior. Participants were told to examine each image, and were allowed to move on to the next trial after at least 5 s of inspection.
During the test phase, participants had to indicate their memory for the images they inspected, choosing between the correct original image or an incorrect manipulated version. Finally, participants answered a series of questions about why they chose that image, their confidence in their decision, and their level of familiarity with the image concept.
= 4.19 × 10^-4; BF10 = 57.28, very strong evidence for the alternative hypothesis; VME-apparent: M = 52.86%, SD = 15.02%; matched subset: M = 82.38%, SD = 6.07%). This low accuracy for the VME image set is remarkable given that participants had just seen the correct image minutes prior, during the study phase, yet still chose the false version to indicate their memory.

We asked participants to report their reasoning for choosing one image version over the other during the test phase. For each image, we split the responses into those that were correct and those that were incorrect. We then determined the proportion of responses that mentioned the manipulated feature in their reasoning and the proportion of responses that were guesses or had unclear reasoning. For both image types (VME and matched), when participants answered correctly, they most often attributed this to memory for a specific feature (78.32% of the VME responses and 71.57% of the matched responses; no significant difference, χ^2 = 3.446, p = 0.063); e.g., they "only saw the fruit, not the cornucopia" when inspecting the Fruit of the Loom logo. However, participants who chose the incorrect variation of the VME-apparent image concepts also reported remembering seeing the manipulated feature, even though they had not. For example, for the Fruit of the Loom logo, participants reported that "I'm pretty sure it had a basket" and "there was a cornucopia in the image I saw" (see the OSF repository for all responses: https://osf.io/7cmwf/). In fact, incorrect responses to VME-apparent images were more often attributed to memory of the manipulated feature (66.54%) than incorrect responses to matched non-VME images (44.92%), which instead tended to be more guess-based (chi-square test of independence, χ^2 = 10.466, p = 0.001).
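A chi-square test of independence like the one used here takes only a 2×2 count table. A minimal sketch using NumPy and the standard library; the helper name is our own, and the counts in the usage are illustrative, not the study's raw data.

```python
import math
import numpy as np

def chi2_independence_2x2(table):
    """Pearson chi-square test of independence for a 2x2 count table.
    With 1 degree of freedom, the chi-square survival function
    reduces to erfc(sqrt(x / 2))."""
    table = np.asarray(table, dtype=float)
    # expected counts under independence: outer product of margins / total
    expected = table.sum(axis=1, keepdims=True) * table.sum(axis=0) / table.sum()
    chi2 = ((table - expected) ** 2 / expected).sum()
    p = math.erfc(math.sqrt(chi2 / 2.0))
    return chi2, p
```

For example, a table of [[20, 10], [10, 20]] (rows: image type; columns: feature-based vs. guess-based reasoning) gives χ^2 = 20/3 ≈ 6.67, p ≈ 0.01, rejecting independence at α = 0.05.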
In this experiment, we collected MouseView cursor data, a measure comparable to eye-tracking, to determine whether there were inspection differences that might drive the VME. We analyzed these data at two levels: by condition and by image. For the condition-level analysis, we examined the effects of three factors on inspection density: condition (VME/matched), response type (correct/incorrect), and area of the image (manipulated area/unchanged area). We found a significant effect of area, where average inspection density was higher in the area that was manipulated than outside that area (F(1, 48) = 35.91, p = 2.58 × 10^-7, ηp^2 = 0.428; BF10 = 2.69 × 10^5, extreme evidence for the alternative hypothesis). This could be due to the low inspection outside of the image borders themselves; indeed, when we look at an alternative measure of maximum inspection density inside or outside of the manipulated area, there is no longer a significant effect (p = 0.091; BF01 = 1.33, anecdotal evidence for the null hypothesis). However, importantly, even for average inspection density, there was no difference in inspection behavior
related to whether the image caused the VME or not (p = 0.143; BF01 = 1.90, anecdotal evidence for the null hypothesis), nor to whether participants responded correctly or not (p = 0.952; BF01 = 6.06, moderate evidence for the null hypothesis). There were also no significant interactions across factors (all p > 0.50; all BF01 > 3.70, moderate evidence for the null hypothesis). This indicates that inspection behavior did not differ between images that caused false memories and those that did not.

Fig. 6. Inspection differences do not drive the VME. Inspection density heatmaps using the MouseView interface, for two example VME-apparent images and two example matched images. (a) Inspection density maps for participants who correctly remembered the images. (b) Inspection density maps for participants who had a false memory for the images. (c) The difference in inspection behavior between those who correctly versus incorrectly remembered the images. Crucially, there were no significant differences in inspection behavior related to the type of image being viewed (VME or matched) or to whether memory was correct or incorrect.
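One way to build inspection-density maps like those in Figure 6 is to histogram the cursor samples and smooth them with a Gaussian. A minimal NumPy sketch, assuming 500 × 500 px images and a trace of (x, y) cursor samples; the smoothing width is an arbitrary stand-in for the MouseView aperture, not the authors' parameter.

```python
import numpy as np

def inspection_density(xs, ys, size=500, sigma=25):
    """Normalized inspection-density map from cursor samples:
    a 2D histogram of (x, y) positions, blurred with a separable
    Gaussian kernel to approximate the viewing aperture."""
    hist, _, _ = np.histogram2d(ys, xs, bins=size, range=[[0, size], [0, size]])
    radius = 3 * sigma
    kernel = np.exp(-0.5 * (np.arange(-radius, radius + 1) / sigma) ** 2)
    kernel /= kernel.sum()
    # separable Gaussian blur: convolve rows, then columns
    smooth = np.apply_along_axis(np.convolve, 1, hist, kernel, mode="same")
    smooth = np.apply_along_axis(np.convolve, 0, smooth, kernel, mode="same")
    return smooth / smooth.sum()
```

Averaging such a map inside versus outside a manipulated-feature bounding box yields the kind of area-level density measure analyzed above.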
whether correct or incorrect, they mentioned strong memories of the manipulated feature as guiding their choices. By analyzing inspection behavior through mouse tracking, we found no differences in inspection patterns related to the type of image or to people's memory, showing that images do not cause the VME because of differences in attention or perception. Although this experiment does not capture all facets of perceptual encoding, it does suggest that participants are not misremembering these VME features because they fail to look at them. We also find that the VME is not due to differences in low-level visual features. Finally, although few people reported thinking the experiment was about the VME, more than two-thirds reported having heard of the Mandela Effect prior to the experiment.

Experiment 3

While we observed no link in Experiment 2 between inspection behavior and false memories, it is possible that these false memories were caused by differences in accumulated viewing experience of the cultural icons over time. For example, perhaps people incorrectly remember the color of C3PO's leg because his legs are rarely shown in the Star Wars movies. Or perhaps they have even seen the VME version of C3PO, given that the Mandela Effect has been covered in the popular media. Indeed, while only a few participants guessed that Experiment 2 was about the VME, many reported having heard of the phenomenon previously. To quantify the real-world visual experience of these icons, we conducted an experiment in which we automatically scraped images of these icons from Google Images and then quantified the presence of VME features.
Methods

Procedure

To approximate the natural viewing experience of the VME-apparent icons, we automatically scraped the top 100 Google Image results for each icon (Clinton, 2020/2021), generated by queries of the name of each icon (e.g., "Volkswagen", "Pikachu", "Monopoly", etc.). We then categorized these images into three groups: (1) those that are unable to show the VME because the feature is not present (e.g., a headshot of C3PO without his legs visible); (2) those that show the full feature and have no VME in the image (e.g., a full-body photo including C3PO's silver leg); and (3) those that show the full feature and do have the VME in the image (e.g., a full-body photo of C3PO with two golden legs). For images that do show the VME, we also categorized how many originated from sources specifically describing the Mandela Effect.
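The per-icon category percentages reduce to a simple frequency count over hand-assigned labels. A trivial sketch; the category label strings are our own invention, not the authors' coding scheme.

```python
from collections import Counter

# hypothetical label names for the four reported groups
CATEGORIES = ("feature_not_shown", "canonical_feature",
              "vme_feature", "vme_from_vme_source")

def category_percentages(labels):
    """labels: one category string per scraped image.
    Returns the percentage of images falling in each category."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {c: 100.0 * counts[c] / total for c in CATEGORIES}
```

Running this over the 100 labeled results for one icon gives the distribution plotted per icon in Figure 7.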
Results

The distributions of scraped images across these four categories (unable to show the VME, no VME in the image, VME in the image, and VME in the image from a source about the VME) are shown for each of the seven VME-apparent images in Figure 7 (also see Figure SU3 in SOM-U; all scraped images are in our OSF repository: https://osf.io/7cmwf/).

We observed high variation in natural experience with these image concepts. For C3PO, a majority of images either do not contain his legs (51%) or do actually contain the VME (24%; golden legs), suggesting that people's visual experience of his legs may be incomplete or inaccurate. For other examples (Where's Waldo and Volkswagen), a substantial portion of the images show the VME (28% and 18%, respectively), although there are still more examples of the canonical feature (44% and 74%, respectively). For the remaining image concepts (Pikachu, Fruit of the Loom, the Monopoly Man, and Curious George), the majority of the scraped images show the correct feature of importance without a VME; people will rarely, if ever, encounter the VME version in the real world. For some (Fruit of the Loom, Curious George), when the VME version is encountered, it is in the context of a source about the Mandela Effect, suggesting that the existence of the memory error precedes visual examples of it. Therefore, the VME can also occur in spite of extensive experience with the correct version of the image.