The retina is the thin layer of tissue on the inside rear of the eyeball responsible for transducing incoming light into electrochemical neural signals, and for several further stages of integration and processing on those signals. It is emphatically not just a simple sensor; indeed, it is considered part of the central, rather than peripheral, nervous system: an extension of the brain.
The retina is organised depthwise into several functional layers containing different types of neuron. In humans and other vertebrates these layers are inverted: the photoreceptors sit at the back, so incoming light must pass through all the other layers before it is detected, and the transduced nerve signals then propagate back up towards the front, getting combined and filtered along the way. The results of all that processing are then delivered to the brain. But at that point we're at the front of the retina, the opposite side from the brain, so the axons of the top layer (the output cables that bundle together to form the optic nerve) have to pass back through the retina, leaving a hole with no detectors: the blind spot.
There are two different types of photoreceptive neuron in the back layer, known as rods and cones after the shapes of their outer segments (the bit of the cell where the light-sensitive pigment molecules live). They differ in their spatial distribution and connectivity and in their sensitivity to light; we'll come back to this shortly.
Unusually, the rods and cones are depolarised at rest: in the absence of a light stimulus, the voltage difference across their membrane is relatively low, and the cell steadily releases its neurotransmitter, glutamate. Incoming photons interact with photosensitive pigment proteins in the outer segment, inducing a conformational change that sets off a cascade of signalling events leading to hyperpolarisation: the interior becomes more negative, and glutamate release drops. The changing release rate is received and processed by the next layer of neurons, the bipolar and horizontal cells, and passed on in turn to the uppermost layer of retinal ganglion and amacrine cells, which aggregate and process the visual information further before sending the resulting signals onward to the brain.
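Note that the logic runs opposite to what you might expect: darkness is the high-activity state, and light turns the signal down. Here's a minimal sketch of that relationship; the potentials, the half-saturation constant `K`, and the linear release function are all illustrative assumptions on my part, not measured values.

```python
# Illustrative constants only; none of these are measured values.
V_DARK = -40.0    # resting (dark) membrane potential, mV: relatively depolarised
V_LIGHT = -65.0   # fully hyperpolarised potential in bright light, mV
K = 1.0           # light level giving a half-maximal response (arbitrary units)

def membrane_potential(light):
    """More light means more hyperpolarisation, with a saturating response."""
    drive = light / (light + K)   # 0 in darkness, approaches 1 in bright light
    return V_DARK + (V_LIGHT - V_DARK) * drive

def glutamate_release(v):
    """Release scales with depolarisation: maximal in the dark, minimal in light."""
    return (v - V_LIGHT) / (V_DARK - V_LIGHT)   # 1 in darkness, 0 when fully hyperpolarised

for light in [0.0, 0.5, 2.0, 10.0]:
    v = membrane_potential(light)
    print(f"light={light:5.1f}  V={v:6.1f} mV  release={glutamate_release(v):.2f}")
```

Running it shows release at its maximum in darkness and falling smoothly as the light level rises.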
The rods and cones are distributed extremely non-uniformly over the retina. There's a small area in the centre of the eye's field of view known as the fovea, which contains only cones, no rods at all, packed in very densely. Outside the fovea there's a mixture of both rods and cones, with the rods dominating almost everywhere: there are a lot more rods than cones overall, of the order of 100 million compared to a mere 6 million or so cones. Both types become much sparser towards the periphery, and of course in the blind spot, which is about 15° nasal of the fovea, there are none.
In total there are about 100 times as many rods and cones as there are ganglion cells, so the signals going from the ganglion cells to the brain for visual processing can't just be one-to-one reportage of the raw light detection. Rather, the data are summarised and transformed, to pick out features of interest, smooth out noise, or amplify weak signals. This funnelling of the photoreceptor signals into aggregated ganglion outputs is known as convergence: multiple photoreceptor neurons converge on each ganglion cell, and again this is not at all uniform across the eye. Rods in the periphery exhibit high convergence, with a single ganglion cell combining signals from hundreds or thousands of distinct receptors, improving sensitivity in those regions at a cost in spatial resolution. The densely packed cones of the fovea, on the other hand, have very low convergence, preserving the fine spatial detail of detection and allowing much greater visual acuity.
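A toy simulation makes the trade-off concrete. Pooling here is plain block averaging, a crude stand-in for the real wiring, and the signal and noise levels are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)

def ganglion_outputs(receptor_signals, convergence):
    """Pool blocks of `convergence` adjacent photoreceptor signals into one
    ganglion output by simple averaging (a stand-in for the real wiring)."""
    return receptor_signals.reshape(-1, convergence).mean(axis=1)

# A dim, uniform stimulus buried in receptor noise.
true_signal = 0.1
receptors = true_signal + rng.normal(0.0, 1.0, size=100_000)

for convergence in [1, 10, 100, 1000]:
    out = ganglion_outputs(receptors, convergence)
    print(f"convergence={convergence:5d}: {len(out):6d} outputs, "
          f"noise std={out.std():.3f}")
```

Each tenfold increase in convergence cuts the noise by a factor of about √10, making the dim stimulus easier to detect, but leaves only a tenth as many outputs to say where in the image it came from.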
Rods are much more sensitive to light than cones. In bright conditions the cones respond strongly, allowing fine detail to be observed. But in dim conditions the cone-only fovea is pretty unresponsive, and vision is dominated by the high-convergence periphery, so acuity is much lower, and a faint object may not be picked up by central vision at all, remaining visible only from the corner of your eye. Spooky!
Cones are also responsible for colour detection, as we’ll discuss next post, whereas rods do not distinguish colour — so night vision tends to be pretty monochromatic.
Exposure to light desensitises the photoreceptors, because the photosensitive pigments that do the detecting are bleached in the process. The pigments are continually replenished, but that takes time. So the overall sensitivity of the retina is not fixed; it depends on how much light the eye has seen lately. If you've been out in bright daylight and step into a dark room, your pigment stores will have been depleted by the brightness and you won't be able to see well. Whereas if you've been in the dark a long time, your pigment will be fully replenished and you'll have maximum sensitivity. This replenishment is known as dark adaptation: the eye has adapted to the darkness and is better able to see. Subsequent exposure to a lot of light will once again bleach the pigments and reduce sensitivity. The time course of adaptation comes from the interplay between the rate of bleaching and the rate of replenishment.
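To make that time course concrete, here's a toy first-order model. It's my own illustrative sketch: the single-compartment equation, the time constant `TAU`, and the bleaching rate `K` are assumptions, not fitted pigment kinetics.

```python
import numpy as np

# Toy first-order pigment kinetics: the unbleached fraction p is depleted
# by light and replenished over time.
#   dp/dt = (1 - p) / TAU  -  K * I(t) * p
TAU = 400.0   # replenishment time constant, seconds (arbitrary choice)
K = 0.05      # bleaching rate per unit light intensity (arbitrary choice)

def simulate(intensity, p0=1.0, t_end=1200.0, dt=1.0):
    """Euler-integrate the unbleached pigment fraction under a light
    schedule; `intensity` is a function returning the light level I(t)."""
    p, trace = p0, []
    for t in np.arange(0.0, t_end, dt):
        p += ((1.0 - p) / TAU - K * intensity(t) * p) * dt
        trace.append(p)
    return np.array(trace)

# Ten minutes of bright light, then darkness.
schedule = lambda t: 1.0 if t < 600.0 else 0.0
trace = simulate(schedule)
print(f"pigment after 10 min in bright light: {trace[599]:.2f}")   # heavily bleached
print(f"pigment after a further 10 min dark:  {trace[-1]:.2f}")    # partly recovered
```

Under bright light the pigment fraction falls quickly to a low equilibrium; in the dark it climbs back on the slower replenishment timescale. That asymmetry is exactly what you experience stepping from daylight into a dark room.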
As well as just aggregating the incoming signals, ganglion cells also pick out local spatial features, in particular brightness differences (edges) within the patch of retina whose photoreceptor cells converge onto them. This patch is known as the receptive field: the region of the incoming image to which the neuron is receptive. (This concept is not limited to ganglion cells, and we'll encounter it throughout the processes of visual perception.)
Most retinal ganglion cells exhibit a centre-surround organisation in their receptive fields, whereby the cell's response depends on the difference between the activity at the centre of the receptive field and that in the surrounding region. The mechanism for this is lateral inhibition: the signal evoked by one region of photoreceptors is inhibited by the signals from its neighbours. Ganglion cells may be off-centre, responding maximally to a dark centre and bright surround, or on-centre, responding maximally to a bright centre and dark surround.
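A standard computational model of a centre-surround receptive field is a difference of Gaussians: a narrow excitatory centre minus a broader inhibitory surround. The sketch below builds an on-centre kernel this way (the kernel size and the two widths are illustrative choices, not physiological measurements) and checks that it ignores uniform illumination but responds to an edge:

```python
import numpy as np

def difference_of_gaussians(size=21, sigma_centre=1.0, sigma_surround=3.0):
    """An on-centre receptive field: narrow excitatory centre minus broader
    inhibitory surround. Negate the result for an off-centre cell."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx**2 + yy**2
    centre = np.exp(-r2 / (2 * sigma_centre**2))
    surround = np.exp(-r2 / (2 * sigma_surround**2))
    # Normalise each lobe so uniform illumination evokes no net response.
    return centre / centre.sum() - surround / surround.sum()

rf = difference_of_gaussians()

uniform = np.ones((21, 21))       # featureless illumination
edge = np.zeros((21, 21))
edge[:, 10:] = 1.0                # dark on the left, light on the right

print(f"response to uniform field: {np.sum(rf * uniform):+.4f}")   # ~0
print(f"response to an edge:       {np.sum(rf * edge):+.4f}")      # clearly non-zero
```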
This kind of feature extraction isn't a physiological necessity; the neurons could be wired up in any number of other ways. It's a matter of perceptual utility. Edges are potentially important markers of stuff happening in the world, where one object begins and another ends. They represent things we might want to interact with, or run away from. Regions of uniform illumination evoke less response because, in some sense, they are less perceptually interesting.
Once again, we note that perception is an active, cognitive process, rather than a passive consumptive one. Even before we leave the eye we’re already sifting and selecting and making value judgements.