Research


Here are examples of recent, ongoing, and/or planned lines of research.

Co-Witness Contamination Effects – If two or more people witness a crime together, afterward they are likely to talk with each other about what they saw. Prior research demonstrates that if a witness’s comments to a co-witness contain errors, there is some likelihood that the co-witness will later report those errors independently. We expect that, under some conditions, when people report details that they only heard a co-witness describe, they are aware of that fact (i.e., they know that their reports are based on things the co-witness said), but that under other conditions they do not realize the true source of their memories (and may even experience illusory recollections of having witnessed details that they had merely heard about). Our aim is to understand the cognitive mechanisms that underlie these various sorts of false reports. Prior to the pandemic, we were using the MORI technique, in which two subjects sit side by side viewing the same screen but see slightly different versions of a video presented on that screen. Rolling with the punches, in a series of projects led by PhD student Eric Mah, we transitioned to using Zoom to work with pairs of subjects.

Eyewitness Suspect Identification – With former grad student Mario Baldassari and current PhD student Eric Mah, my lab has several different lines of research on eyewitness suspect identification.  Our earlier work emphasized evidence that, under certain conditions, eyewitnesses’ confidence can be a good predictor of the accuracy of their identification decisions.  More recently, we have attempted to manipulate the extent to which witnesses/subjects rely on relative versus absolute judgement strategies when taking a lineup identification test.  With Kara Moore and others, we studied how the instructions participants receive before viewing the (simulated) crime and seeing the “culprit” affect subsequent identification performance.  In 2023, Eric Mah and I will begin working on an international collaborative project led by Heather Flowe (Birmingham) assessing the effects of VR lineups.

Truthiness – Maryanne Garry of Victoria University of Wellington, New Zealand, has long been exploring the effects of exposure to photographs on memory and belief. I made some modest contributions to one of the first studies in this line, in which we found that showing undergraduates doctored photos depicting themselves as children in the basket of a hot-air balloon led some subjects to apparently come to believe that they remembered taking a hot-air balloon ride during childhood. Subsequently, we reported evidence that being provided with a childhood photo that was related to, but did not specifically depict, a childhood pseudoevent also fostered the development of false memories. More recently, we have been investigating procedures in which very brief presentations of tangentially related photographs alongside various kinds of statements bias people toward believing that those statements are true. We believe that photographs provide a rich source of thoughts and images that, under certain conditions, people tend to mistake as evidence in support of whatever hypothesis they are entertaining (e.g., the idea that they had a particular childhood experience, or that a particular statement is true). We borrowed Stephen Colbert’s term “truthiness” to describe such effects, and were rewarded when, several years ago, he did a quite lengthy spot on our research as part of his show.  Truthiness seems to be a replicable phenomenon, but the effect is small and we are trying to discover conditions that increase it.

Recognition Response Bias – As explained above, on a yes/no test of recognition memory, subjects are to say “Yes” to test items that had been presented on a study list and “No” to test items that had not. According to Signal Detection Theory, performance on such a test is determined by two things. The first is the extent to which studied items evoke more evidence of having been studied than non-studied items do. If evidence strength is equivalent for studied and non-studied items (e.g., because the person didn’t pay much attention to the items during the study phase), performance will be poor; if evidence strength is much greater for studied than for non-studied items, performance will be good. Very often, average evidence strength is greater for studied than for non-studied items, but the two distributions overlap, meaning that there are some non-studied items for which the person experiences more evidence of having studied them than they do for some of the studied items. This puts the test-taker in a tricky position, at risk of making false alarms (i.e., judging that a non-studied item had been studied) and/or misses (i.e., judging that a studied item had not been studied).

The second determinant, according to Signal Detection Theory, is the criterion the test-taker sets: if there is X amount of evidence of oldness, then say Yes; if there is less than X, then say No. This criterion can be set very high (only say Yes if there is a LOT of evidence of oldness, in which case false alarms will be rare but misses common), very low (say Yes even if there is just a little bit of evidence of oldness, in which case misses will be rare but false alarms common), or anywhere in between. We refer to this as “response bias”: a conservative bias means that one says Yes only if there is a lot of evidence, a liberal bias means that one says Yes even if there is only a little bit of evidence, and a neutral bias (which could also be called an absence of bias) means that false alarm and miss rates are approximately equal.
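For readers who like to see the arithmetic, here is a minimal sketch (in Python, assuming the standard equal-variance signal detection model) of how discriminability (d′) and the bias criterion (c) are computed from hit and false-alarm rates; the rates shown are hypothetical, chosen only to illustrate the three bias settings described above.

```python
from scipy.stats import norm

def sdt_measures(hit_rate, fa_rate):
    """Equal-variance SDT: d' = z(H) - z(F); criterion c = -(z(H) + z(F)) / 2."""
    z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
    d_prime = z_hit - z_fa
    c = -(z_hit + z_fa) / 2  # c > 0: conservative; c < 0: liberal; c = 0: neutral
    return d_prime, c

# Hypothetical subjects with similar discriminability but different biases
print(sdt_measures(0.80, 0.20))  # neutral: equal false-alarm and miss rates
print(sdt_measures(0.60, 0.08))  # conservative: few false alarms, many misses
print(sdt_measures(0.95, 0.45))  # liberal: few misses, many false alarms
```

The three hypothetical subjects have nearly matched d′ values but very different c values, capturing the key point: accuracy can be essentially identical while bias ranges from conservative to liberal.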

In a straightforward recognition memory study with, say, common words or simple line drawings, response bias tends on average to be neutral. A couple of years ago, we stumbled upon the finding that response bias for novel and rich materials (e.g., scans of great paintings) tends to be conservative (few false alarms, many misses).  First with Justin Kantner, then with Kaitlyn Fallow and (most recently) Majd Hawily, my lab seeks to understand the cognitive mechanisms that give rise to this materials-based difference in recognition response bias. In related work, also with Justin, we have been exploring recognition response bias as a stable individual-difference variable or trait. Even with familiar materials that typically give rise to a neutral bias on average, individual subjects differ widely in response bias, with some being conservative, others liberal, and others neutral. That could simply be due to measurement error, but Justin Kantner and I found that individuals who are liberal on one recognition test tend to be liberal on another, even if the two tests are superficially different and widely separated in time. We are trying to understand how individual differences in response bias come about and what their implications are for models of recognition memory and for other, related judgments.
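To make the trait idea concrete, here is a hypothetical sketch of how one might ask whether per-subject criterion estimates correlate across two tests. The counts, and the log-linear correction used to avoid extreme rates, are illustrative assumptions, not our lab’s actual analysis pipeline.

```python
from scipy.stats import norm, pearsonr

def criterion(hits, misses, false_alarms, correct_rejections):
    """Per-subject criterion c from raw counts; the +0.5/+1 log-linear
    correction keeps rates of 0 or 1 from producing infinite z-scores."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return -(norm.ppf(hit_rate) + norm.ppf(fa_rate)) / 2

# Hypothetical counts for five subjects on two superficially different tests
test_a = [criterion(*counts) for counts in
          [(38, 12, 6, 44), (25, 25, 20, 30), (30, 20, 12, 38),
           (42, 8, 25, 25), (33, 17, 10, 40)]]
test_b = [criterion(*counts) for counts in
          [(36, 14, 8, 42), (27, 23, 18, 32), (29, 21, 14, 36),
           (40, 10, 22, 28), (31, 19, 11, 39)]]

r, p = pearsonr(test_a, test_b)
print(f"r = {r:.2f}")  # a reliably positive r is what a stable trait predicts
```

If bias were pure measurement error, criterion estimates on one test would carry no information about the other and r would hover near zero; a reliable positive correlation is the trait-like pattern described above.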

Study/Test Versus Continuous Recognition – Most studies of recognition memory use a two-phase procedure:  First there is a study phase, in which subjects are exposed to to-be-remembered materials (under either intentional or incidental learning instructions), and then later (often after a delay interval) there is a test phase, in which subjects judge whether or not each test probe had appeared during the study phase.  Everyday life doesn’t separate the study and test phases so clearly.  As you move through the environment, you encounter some things you’ve seen before and some things you haven’t; you may recognize some of the former and note the novelty of some of the latter, but at the same time as you are making those memory judgments you are also creating new memory records of your experiences with those items.  Later still, you may re-encounter and recognize something from those experiences.  In everyday life, the study and test phases are one and the same.  Helen Williams and I looked for direct, head-to-head comparisons of these two ways of testing recognition memory and came up with almost nothing, so we gave it a go, but that work is currently on the back shelf.