Top Secret

The previous post was meant to have some other commentary attached, but a time crisis intervened and I just published it as is. So, for example, the fact that it was half-heartedly responding to some Genuary prompts passed unmentioned. As did my regret at not managing to do that more properly. I was meaning to, but kept getting derailed by the curse of the drinking classes.

On the subject of which, fuck it, having posted one lecture I might as well include the whole lot. I’m probably not supposed to publish those outside UCL’s ivory tower, which is why that list is unlisted, but let’s face it, posting to WT is equivalent to a top secret classification anyway. I also have recordings of the two hour visual perception lecture I gave last week — my last lecture of this academic year, I’m happy to say, though I’ve had fun doing them — but I don’t feel like dealing with it at the moment. Maybe later.

Still most of this term’s Perception & Interfaces practical sessions to give. I’m currently revisiting the content of the auditory perception one, for delivery this Thursday morning. There’s some fun stuff there but it doesn’t feel like enough. I’m open to suggestions.

Pre-emptive

Eventually the need to make the slides and actually deliver the vision lecture outpaced the urge to blog prospective content, and today was crunch day. I think it went okay, although I wound up skipping the attention section due to lack of time. Two hours is both a fucking eternity and not quite enough. Students were spared the cocktail party analogy with rock stars and crypto bros doing coke in the toilets and nobody shutting the hell up. Maybe it’s for the best.

I will write and post the remaining episodes — I have already performed the material, after all — but I need to catch a breath first. Similarly, pre-rec videos will be found on this YouTube playlist, although “pre-” is a bit of a misnomer since only the first two are up so far; again, they’re coming, but there are a lot of competing claims on my time just now. Most urgently, content for tomorrow’s lab session and the late summer exam for COMP0142.

In the meantime, have some slides. (At some point I will tweak these too, at least to comply with license attribution requirements, but for now I doubt anyone will notice.)

6. Side By Side

Signals from the retina — finally — travel out from the eye along the optic nerve — through the blind spot — and head to the brain for really quite a lot more processing. Much of which, if we’re honest, is not very well understood at all.

The first major stop along the visual pathway is the optic chiasm, a sort of traffic intersection where the signals from both eyes come together, only to be split up and shipped out in different company. The signals from the nasal half of each retina — ie, the side closest to the nose — cross over to the other side of the brain, where they join the signals from the temporal half of the other retina — the side closest to the temple. Image formation in the eye, as in any camera, flips up-down and left-right. So the nasal retina of the left eye captures the left hand side of the field of view, as does the temporal retina of the right eye, while the right nasal and left temporal retinas capture the right hand side of the field of view.

From this point on, the visual information is grouped according to which side of the world it’s coming from, rather than which eye. Everything over on the right hand side of the world gets routed to the left hemisphere of the brain, while everything on the left hand side of the world gets routed to the right hemisphere. Each hemisphere still keeps track of which eye each of the signals for its half of the world originated from, but in a real sense it just loses sight of the whole other half of the world. (It’s actually a bit less than half — there’s some shared space in the middle that gets covered by both sides of the brain.)
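The routing rule from the last couple of paragraphs is fiddly enough that it may help to see it as a toy lookup. This is just a sketch of the wiring described above — the function names are mine, invented for illustration:

```python
# Toy sketch of the optic chiasm routing rule: each (eye, retinal half)
# pair carries one half of the visual field; nasal fibres cross over at
# the chiasm, temporal fibres stay on their own side.

def hemisphere_for(eye, retinal_half):
    """Which brain hemisphere receives signals from this half-retina."""
    if retinal_half == "nasal":
        # Nasal fibres cross over to the opposite side.
        return "right" if eye == "left" else "left"
    # Temporal fibres stay on the same side.
    return eye

def field_half_seen_by(eye, retinal_half):
    """Which side of the visual field this half-retina captures
    (the retinal image is flipped left-right)."""
    if retinal_half == "nasal":
        return eye  # eg, left nasal retina sees the left field
    return "right" if eye == "left" else "left"

for eye in ("left", "right"):
    for half in ("nasal", "temporal"):
        print(eye, half, "->", hemisphere_for(eye, half),
              "hemisphere, sees", field_half_seen_by(eye, half), "field")
```

Run it and the invariant drops out: every half-retina ends up in the hemisphere *opposite* the half of the world it sees — which is exactly the half-view split being described.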

The hemifields from each eye overlap quite a bit, but between them they cover more territory than either individually. Only the nasal edition has a blind spot, for example — one of the reasons we are almost never aware of this hole in our vision is that it can be filled in from the other eye. Though the more fundamental reason is that our brain doesn’t want us to be aware, and it is 100% calling the shots.

Because the eyes are spatially separated, they see things from a different angle. (Your life is a sham, etc. If I were properly committed to the pre-rec version of this lecture, which rest assured I am not, I would probably have to drag up at this point for a stupid 2 second Zaza insert. Y’all ain’t ready. Fortunately, neither am I.) Comparing and contrasting the two views is super useful for stereoscopic depth perception, so it makes sense to bring them both together for visual processing. The half view split is really quite unintuitive, though, and it can lead to some apparently strange pathologies.

If you lose — or for whatever reason start without — sight in one of your eyes, your field of view will be a bit more limited, as will your depth perception, but both sides of your brain will still be processing both halves of the world and you’ll perceive both left and right. But if the visual processing pathway on one side of your brain is disrupted, downstream of the optic chiasm — say by a stroke or traumatic brain injury — then your ability to see that half of the world may be impaired as well. This is hemianopsia — sometimes hemianopia — or half blindness. In this condition your eyes still work just fine, capturing images like everyone else’s. You’re still sensing both halves of the world, but you can’t perceive one of them.

After the optic chiasm, the visual signals travel to the thalamus, which is kind of the main routing hub for sensory information coming into the brain; specifically to the LGN or lateral geniculate nucleus — nuclei really, since there’s one each side. Exactly what perceptual processing goes on there is not known for sure, but importantly the LGN doesn’t just take feedforward sensory input from the eyes, it also takes feedback input from further along in the visual system — indeed, there are more feedback connections than feedforward.

One of the things the LGN is believed to do is to take a kind of temporal derivative of the visual information — comparing it with what was previously seen and picking out changes, in a sort of analogue of the spatial differencing performed in the retinal ganglion cells. Again, we’re tuning for salience, and novelty is interesting.
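If you want the flavour of that temporal differencing in code, here’s a minimal sketch. To be clear, this is an analogy for the change detection just described, not a model of actual LGN circuitry:

```python
# Minimal sketch of temporal differencing: respond to what changed
# between successive "frames" of input, not to the raw values.

def temporal_derivative(frames):
    """Yield the frame-to-frame change at each position."""
    previous = None
    for frame in frames:
        if previous is not None:
            yield [now - before for now, before in zip(frame, previous)]
        previous = frame

# A static scene with one flickering patch (index 2):
frames = [
    [5, 5, 0, 5],
    [5, 5, 9, 5],
    [5, 5, 0, 5],
]
for diff in temporal_derivative(frames):
    print(diff)  # only the changing patch produces any signal
```

The static parts of the scene difference away to zero; only the flicker survives. Boring, boring, boring — whoa, hang on.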

Another of the things the LGN is thought to be doing is applying some attentional filtering — selecting some aspects of the visual signals and deselecting others to focus in on what we’re especially interested in perceiving right now — what we’re paying attention to. Unlike the relatively static processes of change detection, this kind of selectivity is dynamic — it’s not always picking out the same features, it varies from moment to moment, depending on what we’re doing and thinking at the time. Some of those feedback signals from the cortex are saying, in effect: “boring, boring, meh, boring, whoa hang on a second there buddy, tell me more”. And the LGN does.

All of this is still happening at a pretty low level — tweaking nerve firings corresponding to tiny patches of space and moments of time, not yet assembled into objects or events, not yet translated into knowledge or behaviour. But these fragments are the matter of visual perception, and yet again we see that the brain is right there in the mix at every stage, exerting itself, sifting and organising, amplifying and attenuating, making connections and also, whenever necessary, making stuff up.

5. Colour

Photoreceptor neurons, whether rods or cones, either release glutamate or they don’t, there’s no middle ground. They are not equally responsive to light of different wavelengths, but the wavelength isn’t captured by the output, it’s just a yes/no kind of deal. Collectively, the pattern of releases from such a neuron tells you whether it is being stimulated more or less strongly, but it doesn’t tell you if the stimulation is strong because it’s at a wavelength the neuron is more sensitive to or just because there’s a lot of light — the releases would look the same either way. To be able to separate those cases, we would need more information. Fortunately, at least for most of us, more information is available.
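That ambiguity has a name — the principle of univariance — and it’s easy to demonstrate numerically. The Gaussian sensitivity curve below is an illustrative stand-in of my own, not real photoreceptor data:

```python
import math

# Principle of univariance: a single photoreceptor's response is one
# number, roughly intensity x sensitivity(wavelength), so different
# (wavelength, intensity) pairs can produce identical responses.
# The Gaussian curve is an assumption for illustration only.

def sensitivity(wavelength_nm, peak_nm=500.0, width_nm=40.0):
    return math.exp(-((wavelength_nm - peak_nm) ** 2) / (2 * width_nm ** 2))

def response(wavelength_nm, intensity):
    return intensity * sensitivity(wavelength_nm)

# Dim light at the preferred wavelength...
a = response(500, intensity=1.0)
# ...versus brighter light at a less preferred one:
b = response(540, intensity=1.0 / sensitivity(540))
print(a, b)  # identical responses, indistinguishable to the cell
```

One number in, one number out: the cell simply cannot tell a dim preferred wavelength from a bright non-preferred one.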

The spectral responses of the different kinds of photoreceptors are illustrated below. (Note that each curve’s vertical extent is normalised so the peak sensitivity is at 100% in each case — rods are still way more sensitive than cones, but we don’t need to care about that right now.)

Based on Wikimedia Commons image by user Francois~frwiki, licensed CC BY-SA 4.0

Importantly, while all rods have the same sensitivity, cones come in three different flavours with different absorption spectra. These flavours are sometimes termed blue, green and red, as in the diagram, but are also more accurately, if less picturesquely, known as short, medium and long. If cones of all three types are illuminated by light of the same colour, they will exhibit differing responses — and the differences can be used spectroscopically to determine what colour the light was.

At least, kinda.

Let’s say some blueish light of about 480 nm comes along:

Stimulation by 480 nm blue light

The short wavelength “blue” cones are actually stimulated least by this light, the long “red” cones a bit more, and the medium “green” ones most of all. Importantly, this particular combination of relative stimulation levels does not occur for any other wavelength, and so we can infer that the incoming light is that blueish 480 nm.
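Here’s that inference as a sketch. The Gaussians below are crude stand-ins for the real absorption spectra (peaks placed roughly at 420, 530 and 560 nm), and the whole thing assumes the light is monochromatic — just enough to show the pattern-matching working:

```python
import math

# Three cone classes break the univariance deadlock: the *pattern* of
# responses across S, M and L varies with wavelength, so normalising
# away overall intensity leaves a signature we can match against.
# Gaussian curves are illustrative assumptions, not real spectra.

PEAKS = {"S": 420.0, "M": 530.0, "L": 560.0}

def cone_responses(wavelength_nm, intensity=1.0, width_nm=50.0):
    return {
        cone: intensity * math.exp(-((wavelength_nm - peak) ** 2)
                                   / (2 * width_nm ** 2))
        for cone, peak in PEAKS.items()
    }

def normalised(responses):
    """Divide out overall intensity, leaving only the pattern."""
    total = sum(responses.values())
    return {cone: r / total for cone, r in responses.items()}

def infer_wavelength(responses):
    """Find the monochromatic wavelength whose pattern best matches."""
    target = normalised(responses)
    return min(
        range(400, 701),
        key=lambda w: sum((normalised(cone_responses(w))[c] - target[c]) ** 2
                          for c in PEAKS),
    )

# A dim flash and a bright flash of 480 nm light give the same answer:
print(infer_wavelength(cone_responses(480, intensity=0.1)))
print(infer_wavelength(cone_responses(480, intensity=5.0)))
```

Both calls recover 480 nm regardless of intensity, because the normalised pattern is the same either way.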

Except.

I mean come on, you’re reading this on an RGB screen, how much surprise can I honestly expect to milk out of it?

That uniqueness of inference only works if we know the light is all the same wavelength. That is almost never true, but it’s not a horribly misleading assumption when what you’re seeing started off as broad spectrum white light from the sun and then bounced off a bunch of objects that absorbed a lot of those wavelengths and just left some narrow band to hit your eyes. Which would mostly have been the case most of the time for most of the history of life on Earth. So it’s a pretty decent model for evolution to have hardwired into our biology.

Obviously these days we spend rather a lot of time looking at light from artificial sources for which the model doesn’t hold at all. But — and this is really pretty fortunate for a big chunk of modern human culture and technology — our visual system is still stuck on the veldt, looking for lions or whatever. We can bamboozle it with cheap parlour tricks — and we have gotten really good at those tricks.

You all know how this works: take three different light sources that mostly stimulate the long, medium and short cones respectively. Call ’em, oh I don’t know, red, green and blue? Shine them in the right proportions on your eyes and behold: suddenly you’re seeing orange or turquoise or mauve. Those wavelengths aren’t actually there — does mauve even have a wavelength? — but that doesn’t make your perception of them any less real.
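The parlour trick is just linear algebra: find the three primary intensities whose combined cone responses match those of the target wavelength. Same disclaimer as before — the Gaussian cone curves and the primary wavelengths (630, 532, 465 nm) are illustrative assumptions, not colorimetric data:

```python
import math

# Metamerism in code: solve for intensities of three "primary" lights
# giving the same S/M/L cone responses as a single orange wavelength,
# so the mixture is perceptually identical even though no 580 nm light
# is actually present.

PEAKS = (420.0, 530.0, 560.0)        # S, M, L peak sensitivities (assumed)
PRIMARIES = (630.0, 532.0, 465.0)    # "red", "green", "blue" (assumed)

def sens(cone_peak, wavelength, width=50.0):
    return math.exp(-((wavelength - cone_peak) ** 2) / (2 * width ** 2))

def solve3(m, t):
    """Solve the 3x3 linear system m @ x = t by Cramer's rule."""
    def det(a):
        return (a[0][0] * (a[1][1] * a[2][2] - a[1][2] * a[2][1])
              - a[0][1] * (a[1][0] * a[2][2] - a[1][2] * a[2][0])
              + a[0][2] * (a[1][0] * a[2][1] - a[1][1] * a[2][0]))
    d = det(m)
    xs = []
    for col in range(3):
        mc = [row[:] for row in m]
        for row in range(3):
            mc[row][col] = t[row]
        xs.append(det(mc) / d)
    return xs

# Cone responses to each primary, and to the 580 nm target:
m = [[sens(p, w) for w in PRIMARIES] for p in PEAKS]
target = [sens(p, 580.0) for p in PEAKS]
mix = solve3(m, target)
print(mix)  # primary intensities (can go negative for out-of-gamut colours)
```

Multiply the mix back through and it reproduces the target’s cone responses exactly — your cones, and hence you, can’t tell the difference. (The negative-intensity case is real, by the way: some colours can’t be matched by any physical mix of three primaries, which is why RGB displays have a gamut.)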

We call these ingredients primary colours, but in what sense are they primary? There’s nothing special about red, green and blue from the point of view of light. Wavelengths don’t mix to make other wavelengths. Violet light is violet, it’s not some mixture of blue and red. Except: what is violetness anyway? As noted back in part 2, colour isn’t a property of light, it’s a perceptual property. I can confidently assert that 425 nm light isn’t a mixture of red and blue, but violet is whatever I see it is.

So primary colours really are primary, in the sense that we can use them to stimulate the perception of (very roughly speaking) any other colour, including some that don’t really exist in the visible spectrum at all. Their primariness — like colour itself — is a product of our physiology and neural processing.

Sing out, Louise. You’ve heard this song before, you probably know the words by now.

And you also know this one: the mechanics of visual perception are complex and fragile and things can go awry.

Colour perception is not universal and you cannot rely on everyone perceiving colours the same way you do — whatever way that is. Various mutations affect the cone cells, leading to a variety of colour vision deficiencies — from somewhat attenuated ability to distinguish between some colours to (rarely) complete monochromacy. These mutations are often sex-linked — ie, the affected proteins are on the X chromosome — so they run in families and mostly affect males.

This can be a problem because colour is a really handy and widely used design feature and popular carrier of meaning. Red means stop, green means go, yellow means go really fast. If you’re constructing an interface or visualising data, you’re probably going to want to put colour to work. Just consider the graphs above — colour is the key distinguishing feature of the three cone lines. So it’s a teensy bit problematic that up to 8% of the men reading this (pretend for a moment that anyone is going to read this) might struggle to tell two of them apart.

(Okay, yes, I’m wrapping this chapter with a rhetorical flourish and it’s really a lie — those particular lines are distinguishable by luminance as well as colour, so 8% of men will probably muddle through — but the point stands more broadly, just go with it.)
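That luminance get-out clause is checkable, incidentally. WCAG 2 defines a standard “relative luminance” for sRGB colours, and computing it gives a quick sanity check that two colours stay distinguishable even for viewers who can’t separate their hues:

```python
# WCAG 2 relative luminance and contrast ratio for 8-bit sRGB colours —
# one quick test that a colour pair differs in brightness, not just hue.

def channel_lin(c8):
    """Linearise one 8-bit sRGB channel (WCAG 2 definition)."""
    c = c8 / 255.0
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb):
    r, g, b = (channel_lin(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(rgb1, rgb2):
    l1, l2 = sorted((relative_luminance(rgb1), relative_luminance(rgb2)),
                    reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

# Pure red versus pure green differ in luminance as well as hue:
print(contrast_ratio((255, 0, 0), (0, 255, 0)))  # roughly 2.9
```

Green carries far more luminance than red at equal channel values, which is precisely why those 8% of men can probably muddle through — and why relying on hue alone, without a luminance or shape difference, is the thing to avoid.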