Week 1

Donald Blough’s chapter talked about a series of experiments that measured pigeons’ reaction times to gain insight into their visual perception, attention, and decision-making. He also contrasted parts of the results with human research, which yielded some interesting comparisons.

He started with a few experiments on visual perception and visual search. In the first study, pigeons were put in dark chambers and pecked at a spot of varying luminance for food rewards. The data (Fig. 6.1 in the chapter) showed an interesting pattern:

Point A, as the author argued, showed the dissociation between the scotopic (rod) and the photopic (cone) systems, which was later replicated in the human visual system too (!).

This study is simple, but the findings are fascinating (especially that people later found the same phenomenon in humans). I do wonder, though, whether using only 3 birds as subjects usually yields reliable data in animal research. I understand that each of them completed a huge number of trials, but I still wonder what if there are individual differences… Are the results from 3 birds usually generalizable enough?

The article then talked about visual search experiments, and I found the comparison of search asymmetry between pigeons and humans particularly interesting. For humans, it is obviously easier to find, for example, one Q among many Os than one O among many Qs, as the target (Q) has a distinctive feature while the distractor (O) does not. However, pigeons did not show this search asymmetry and performed similarly in both cases. The author argued that besides species differences, another possibility is that humans have way more experience encountering similar symbols in daily life through reading, and the search asymmetry (or the lack of it) may simply reflect the different perceptual experiences of pigeons and humans.

I think the possibility the author raised could be tested experimentally. For example, instead of artificial symbols, have the subjects (both pigeons and humans) distinguish between things pigeons have more experience with than humans do, such as seeds of different shapes. Another possibility is to run the same experiment on humans without any reading experience, either adults or kids who haven’t learned to read. If pigeons show more search asymmetry with stimuli they are familiar with, and/or humans without reading experience show less search asymmetry with unfamiliar symbols, it would suggest the author may be right. I wonder if any studies have already tested this hypothesis.

The author then discussed the impact of prior beliefs or expectations on the RT to a stimulus. As with humans, pigeons responded faster to a target that was the same as (rather than different from) the previous one, or to a target that was primed by another signal. This effect is similar to the effect of reducing target–distractor similarity, so they conducted two further experiments to explore the relation between expectation and recognition. From the reaction time data, they concluded that the priming effect likely just tunes the pigeon’s (or human’s) attention to certain features of the search image.

One complaint I have with all of the figures so far is that none of them included an error bar, and the author didn’t include any information on the statistics (e.g., standard error) either. It is hard to make an inference from these figures. In particular, in Fig. 6.3, where the author mentioned that the asymmetry for pigeons is the opposite of humans, the pigeon data in the figure didn’t actually show much asymmetry, and it is very hard to draw the author’s conclusion without knowing whether the difference is simply due to error. The same applies to the left panel of Fig. 6.6, where the non-target orientation curve looks quite flat, while the author claimed there was a difference (the left side was lower). The author himself also critiqued later that “researchers almost always present RT measurements as averages”, but this is exactly what he did too in the first part of this article…

The next section of the article explored RT distributions and models of them, rather than just the averages. Importantly, the RT distributions revealed patterns in the data that we couldn’t see from the means alone. First, they showed that pigeons sometimes “fast-guessed” the answer without “thinking”, responding as soon as the stimulus appeared. Second, the differences between the RT distributions for different stimuli showed that the pigeons did discriminate the stimuli successfully.

The author further studied RT distributions by modeling them, first with a Gaussian component plus an exponential component (the “ex-Gaussian”), and then with a random walk model (RWP). The ex-Gaussian modeling found that the RT differences between levels of target–distractor (T-D) similarity were determined only by the exponential decay constant, indicating that T-D similarity affected only the momentary probability of target detection. The RWP model further quantified the impact of T-D similarity on RT. Specifically, two parameters of this model (step size and bias) were linked to changes in T-D similarity and reward, respectively. The author also discussed the Pavlovian association between stimulus and reward, as reflected in the RT data.
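To get a concrete feel for these two kinds of models, here is a toy sketch of my own (not from the chapter; all parameter values are made up). It draws RTs from an ex-Gaussian, where only the exponential mean changes with difficulty, and from a simple evidence-accumulating random walk, where difficulty shrinks the mean step size:

```python
import random

random.seed(0)

def ex_gaussian_rt(mu, sigma, tau):
    # One RT sample: a Gaussian "residual" component (mean mu, sd sigma)
    # plus an exponential "detection" component with mean tau.
    return random.gauss(mu, sigma) + random.expovariate(1.0 / tau)

def random_walk_rt(step_mean, step_sd=1.0, threshold=30.0):
    # Accumulate noisy evidence until it crosses the threshold;
    # RT is the number of steps taken. A smaller mean step
    # (a harder discrimination) means more steps, i.e., longer RTs.
    evidence, steps = 0.0, 0
    while evidence < threshold:
        evidence += random.gauss(step_mean, step_sd)
        steps += 1
    return steps

# Pretend higher T-D similarity only stretches the exponential mean tau:
easy = [ex_gaussian_rt(300, 30, 50) for _ in range(2000)]
hard = [ex_gaussian_rt(300, 30, 150) for _ in range(2000)]

# ...or, in the walk model, shrinks the mean step size:
walk_easy = [random_walk_rt(1.0) for _ in range(500)]
walk_hard = [random_walk_rt(0.4) for _ in range(500)]
```

In both toy versions the “hard” condition produces longer, more right-skewed RT distributions even though nothing about the motor component changed, which is roughly the kind of dissociation the chapter attributes to T-D similarity.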

One thing I always wonder about computational models of cognition is that you might never know whether they are “correct”, or whether the process in the model and the process in the brain are actually completely different but just happened to yield superficially similar result distributions; after all, an infinite number of different models could yield superficially similar results, given a limited amount of input. It is true that computational modeling gives us new insights into the data, but I am also somewhat unsure whether it provides accurate knowledge about the brain, unless that knowledge is confirmed in other ways…

Now here is a huge digression from the actual paper. I initially put this part at the beginning, but ended up writing so much about it that I decided to move it to the end. I did spend quite a while thinking about it and enjoyed the process, so I hope you don’t mind.

As I write this, I just realized this is the first real animal research paper I have ever read (if talks and news articles that summarize animal research don’t count). I really want to write about a rather small thing in this article that I found interesting. The author mentioned it only briefly, but it intrigued me:

…birds peck at the target with high probability and great persistence, even when food is delivered on fewer than 10% of the target presentations. This procedure makes possible experimental sessions of 1000 or more trials…

Judging by his tone and the rest of the article, I guess this fact is probably common sense for animal researchers, but it is still quite surprising to me, someone who has only run studies with human subjects. If we run an experiment of more than 100 or 200 trials, the data start to look funny, because our participants sometimes spend 3 minutes on a normally 20-second trial for no obvious reason, or just stop trying and sleep on the desk for 40 minutes (yes, this happens). And that is after we asked participants to leave their phones outside the testing room; I have no doubt the data would be even worse if we didn’t.

I’m not trying to blame the participants, though; I’m not sure I could sit in a room doing those repetitive trials 100 times myself. But a pigeon can. A robot (i.e., a computer program) can do even better: it can do anything however many times you tell it to. Yet we think humans are intelligent and robots are stupid. It makes me wonder whether “getting bored” is actually an indicator of higher-level cognition, and whether the inability to focus on repetitive stimuli is an evolutionarily advantageous trait.

I’m not sure if this is a testable hypothesis, but it would actually make sense if true. The other side of “getting bored” is that as humans, we strongly prefer novelty and complexity. We show this preference from infancy, as if it were programmed into our genes: infants spend more time looking at novel, complex objects, and get bored with (habituate to) familiar, repeated stimuli (e.g., Oakes, 2010). As I think about it, this preference is quite likely evolutionarily advantageous: it might have allowed our ancestors to create and use tools or solutions that did not exist before, which is quite a distinction between us and other species.

(So I guess our undergrad participants not doing so well in studies might have something to do with how humans evolved to be intelligent… Hmmmm.)

I think this is still relevant to robots, too. AI researchers have been making their algorithms behave more like humans, intentionally or unintentionally, to achieve better performance on certain tasks. As I’ll discuss below, there definitely are similarities between algorithms and real biological processes, including the one above, but most of the time they are still quite far from each other, even when they share a name (e.g., “neural networks”). Introducing biological and psychological concepts into AI research is quite fascinating and has proven successful (e.g., the attention mechanism used in language translation), which is why I wanted to include the “robots” part in this blog.

The preference for novel, complex stimuli over repetitive ones reminds me of a few things in AI research. For example, reinforcement learning algorithms are perhaps among the algorithms most closely related to actual cognitive phenomena in humans and animals. In AI-based reinforcement learning, the trade-off between exploration and exploitation is basically the preference your robot has for exploring unknown territory versus exploiting known areas that are promising. In this situation, a pigeon may lean toward exploiting an existing solution in the space of all possible solutions, while a human may not be so interested.
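As a concrete (and entirely toy) illustration of that trade-off, here is a minimal epsilon-greedy multi-armed bandit; the “food patch” payoffs and parameter values are made up for illustration:

```python
import random

random.seed(1)

def run_bandit(epsilon, true_means, steps=5000):
    # Epsilon-greedy: with probability epsilon pick a random arm (explore),
    # otherwise pick the arm with the highest estimated reward (exploit).
    n = len(true_means)
    counts = [0] * n
    estimates = [0.0] * n
    for _ in range(steps):
        if random.random() < epsilon:
            arm = random.randrange(n)              # explore
        else:
            arm = estimates.index(max(estimates))  # exploit
        reward = random.gauss(true_means[arm], 1.0)
        counts[arm] += 1
        # Incremental average: running estimate of each arm's value.
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
    return counts

# Three "food patches" with different average payoffs; the last is best.
counts = run_bandit(0.1, [0.2, 0.5, 1.0])
```

With a little exploration (epsilon = 0.1), the agent samples every arm but ends up spending most of its pulls on the best one; with epsilon = 0, it can get stuck exploiting whichever arm happened to look good first.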

Similarly, many computer vision (i.e., highly automated image processing) algorithms are also set to “prefer” complex image features, such as edges and corners, over flat areas with no intensity change (“prefer” here means they attend to certain features of an image and ignore the others, in order to recognize the image or extract information from it). This also ends up making such algorithms behave somewhat like human visual perception.
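The simplest version of that “preference” is just a spatial derivative. Here is a tiny sketch of my own (not from any particular library) that responds strongly at an edge and not at all on flat regions:

```python
def gradient_magnitude(img):
    # Horizontal finite differences: large where intensity changes
    # sharply (an edge), zero on flat regions.
    return [[abs(row[x + 1] - row[x]) for x in range(len(row) - 1)]
            for row in img]

# A tiny grayscale "image": a flat dark region, then a flat bright region.
img = [[0, 0, 0, 255, 255, 255]] * 3
g = gradient_magnitude(img)
```

Real detectors (Sobel filters, Harris corners, the first layers of a convolutional network) are more elaborate, but the underlying bias toward intensity change over flatness is the same.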

I wonder whether making more algorithms “prefer” novelty and complexity would improve their performance. For example, when training a neural network, would it be possible to somehow (semi-manually?) put more weight on rarely seen features, in order to increase generalizability? Maybe that is something worth trying.
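One common version of this idea that already exists is re-weighting training examples by inverse class frequency, so that rare (“novel”) examples contribute more to the loss. A toy sketch; the labels and the normalization convention are made up for illustration:

```python
from collections import Counter

def inverse_frequency_weights(labels):
    # Weight each example by 1 / (frequency of its class), normalized so
    # the weights sum to the number of examples. Rare classes get
    # boosted, common classes get discounted.
    freq = Counter(labels)
    n, k = len(labels), len(freq)
    return [n / (k * freq[y]) for y in labels]

labels = ["common"] * 8 + ["rare"] * 2
weights = inverse_frequency_weights(labels)
```

Each “rare” example here ends up counting four times as much as each “common” one, while the total weight stays the same, so a model trained with these weights can’t just ignore the rare class.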

References
Blough, D. S. (2006). Reaction-time explorations of visual perception, attention, and decision in pigeons. Comparative cognition: Experimental explorations of animal intelligence, 89-105.
Oakes, L. M. (2010). Using habituation of looking time to assess mental processes in infancy. Journal of Cognition and Development, 11(3), 255–268.

One thought on “Week 1”

  1. In my reaction to your blog post, I quote from your post, and then my reaction follows.

    “I do wonder, though, whether using only 3 birds as subjects usually yields reliable data in animal research. I understand that each of them completed a huge number of trials, but I still wonder what if there are individual differences… Are the results from 3 birds usually generalizable enough?”

    This is a great question. Psychophysics research requires a huge investment in training for each subject. The goal of psychophysics is to characterize a particular sensory/perception process through repeated testing, systematically manipulating the variable of interest to determine the shape of a psychophysical function or a threshold, all within a single subject. Assuming that the process is relatively similar across all members of a particular species (e.g., all humans, all monkeys, all rats, all pigeons), only a few subjects are typically studied. This has been the convention within psychophysics research. It is different from hypothesis testing, in which an experimental group is compared to one or more control groups.

    “The author argued that besides species differences, another possibility is that humans have way more experience encountering similar symbols in daily life through reading, and the search asymmetry (or the lack of it) may simply reflect the different perceptual experiences of pigeons and humans.”

    Another possibility, in addition to the ones you raised in your blog post, is that the stimulus used in the reported studies (mostly small shapes with variations in their features) is not a strong dimension for the pigeon like it is for the human. Other studies using a color singleton found a strong feature-positive effect in the pigeon (I believe this was referenced in the Qadri & Cook paper).

    “One complaint I have with all of the figures so far is that none of them included an error bar, and the author didn’t include any information on the statistics (e.g., standard error) either.”

    Review papers often don’t include the statistics of the studies they cover. The stats and more details about the results can be found in the empirical reports themselves. As far as error bars are concerned, Don Blough was trained in the Skinnerian tradition of running few subjects in depth in psychophysics experiments, in which more attention is given to individual performance rather than group performance. Error bars are not only not meaningful, but are actually misleading in data graphs plotting within-subject data. When aggregate data from a few subjects is shown, the n is so small that error bars would potentially drown out the function (e.g., in Figure 6.1 in Blough).

    “It is hard to make an inference from these figures. In particular, in Fig. 6.3, where the author mentioned that the asymmetry for pigeons is the opposite of humans, the pigeon data in the figure didn’t actually show much asymmetry, and it is very hard to draw the author’s conclusion without knowing whether the difference is simply due to error.”

    Actually, when first describing these results (page 87 left column), Blough states that the pigeons show no difference in slopes for the two conditions while humans show the typical asymmetry.

    “It is hard to make an inference from these figures. In particular, in Fig. 6.3, where the author mentioned that the asymmetry for pigeons is the opposite of humans, the pigeon data in the figure didn’t actually show much asymmetry, and it is very hard to draw the author’s conclusion without knowing whether the difference is simply due to error.”

    True, but in his defense, in the original empirical papers he discusses whether all the pigeons showed similar patterns or not. Plus, the next sentence in the chapter qualifies his first.

    “One thing I always wonder about computational models of cognition is that you might never know whether they are “correct”, or whether the process in the model and the process in the brain are actually completely different but just happened to yield superficially similar result distributions; after all, an infinite number of different models could yield superficially similar results, given a limited amount of input.”

    You are correct. This is why computational models are most valuable, that is, lead to the greatest insights into actual process or mechanisms, when they not only curve fit prior data but also make empirically testable predictions. In addition, stimulus analytic and neuroscientific studies can further evaluate the validity of the model as an account for the empirical data.

    In regards to the last section of your blog, motivation is a critical variable that is often overlooked in comparative analyses, especially when comparing humans to nonhumans. Laboratory pigeons (and other laboratory animals in general) live in relatively sterile environments, with restricted and limited opportunities for environmental enrichment. The daily operant foraging session can be the highlight of the day both in terms of food and of an interesting break from the monotony. Human participants suffer just the opposite. They are brought from a rich life to a boring room with an often boring task working for points which hold no value beyond the context of the experiment itself.
    As for your “intelligence = getting bored easily” hypothesis, it is similar to the view that curiosity and playful manipulation of one’s environment is a characteristic of intelligence. Rats, primates, parrots, and corvids are more playful and curious than pigeons.
