We might argue that our temporal perceptual system faces an adaptive trade-off between ‘speed of perceptual availability’ and ‘accuracy of temporal integration’.
The speed of perceptual availability describes how quickly a sensory feature occurring in the world becomes perceptually available as a conscious experience of that feature. Accuracy of temporal integration refers to how accurately a percept represents the relative times at which features occur in event time. For a percept to be accurate in this respect, the brain must temporally integrate the sensory features occurring at one time into one coherent percept, thus segregating them from features that occur at other times.
The standard brain time theory of time perception does well on the speed side of this trade-off. It states that a sensory feature is experienced the moment it is processed in the relevant neural mechanism. No delays are added to compensate for sensory features that take longer to process.
This speed comes at the cost of skimping on accuracy, leading to what I call the problem of desynchronisation.
The problem of desynchronisation describes how the different kinds of sensory information originating from a common event become desynchronised as they are processed by our various neural mechanisms. This desynchronisation takes place because:
1. The neural mechanisms responsible for processing sensory information are fragmented with respect to feature, modality, and timescale. As a consequence, sensory information is processed at different speeds and encoded in different ways, and,
2. Features in our environment constantly change, but the time at which features change does not match up with the time at which these specific features are processed.
The claim made by (1) is supported by evidence that processing speeds vary both across and within modalities. For example, auditory processing is faster than visual processing (Arnold et al., 2001).
The second claim (2) follows from (1), together with the fact that feature changes in our environment happen at their own pace, with no regard for how quickly we can process them.
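To make (1) and (2) concrete, here is a toy numerical sketch. All delay values and the world-change interval are illustrative assumptions, not empirical figures: features of one event become available at different times because their local processing delays differ, while the environment keeps changing underneath.

```python
# Toy illustration of desynchronisation under local perceptual endpoints.
# All delay values are illustrative assumptions, not empirical measurements.
delays_ms = {"sound": 50, "colour": 120, "toe_touch": 250}
world_change_interval_ms = 100  # assumed pace of feature change in the world

# Features of a single event (occurring at t = 0) finish local processing
# at different times, so they surface in experience at different moments:
perceived_at = {feature: delay for feature, delay in delays_ms.items()}

# By the time the slowest feature surfaces, the world has already changed:
changes_elapsed = (max(perceived_at.values())
                   - min(perceived_at.values())) // world_change_interval_ms

print(perceived_at)     # one event, three different perceptual times
print(changes_elapsed)  # world states that passed between first and last percept
```

On these toy numbers, two whole world-changes pass between the first and the last feature of the same event becoming available, which is exactly the mismatch the desynchronisation problem points to.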
Due to (1) and (2), we should expect desynchronisation between the times at which feature changes occur in our environment and the times at which their processing is finished. The phenomenon is aptly described by Hogendoorn (2022):
“[I]nformation from a single sensory event becomes available to perception not at any single moment, but over a range of moments. For the same reason, the most recent information available to perception at any given moment will have originated from different moments in the environment for different features, and therefore do not belong together to the same percept. This causes a temporal binding problem: as the brain processes the continuous stream of desynchronised sensory input, how does it infer what happened when?”
To make sense of temporal behaviour that requires percepts temporally bound across modalities and timescales, one must explain how this temporal binding is achieved. We can state the desynchronisation problem as follows:
‘How do we perceive events as temporally integrated percepts when sensory information is desynchronised?’
Since the brain time view is enslaved to the times of neural processing, it would make the problem of desynchronisation nearly impossible to solve. Remember that, according to the principle of temporal isomorphism and the principle of minimal neural delay, perceptions from different neural mechanisms do not wait for each other in order to be presented together in coherent percepts. But if sensory features become desynchronised in experience, how can we, as Hogendoorn writes, “infer what happened when”?
If desynchronisation is severe enough, the brain time thesis entails that we would perceive the external world as a mess of features arriving in experience at different times. To perceive coherent and temporally bound events, the differing neural processing delays and the changes in external features would have to be synchronised (by sheer luck or pre-established harmony). But they are not, and yet we do perceive events as coherent.
Let’s look at one possible solution that the brain time view might adopt to solve the problem of desynchronisation.
The brain time view holds that perception is realised when a sensory stimulus reaches its perceptual endpoint, that is, when it has been processed by its relevant neural mechanism. The perceptual endpoint is essentially a claim about which neural correlates are responsible for producing conscious experiences.
Here I suggest reformulating how we understand the idea of the perceptual endpoint. Instead of its usual meaning, where it refers to finished processing in a local sensory mechanism, we might take it to refer to a global perceptual endpoint.
This would mean that the processing of some sensory feature cannot be considered finished until some global processing mechanism has processed it. This pushes the perceptual endpoint further up the information processing hierarchy.
Let us set aside how to model such a global endpoint (if you are interested, see the Global Neuronal Workspace Theory; Dehaene et al., 2006).
Under this interpretation, the perceptual endpoint refers to a global event where the sensory information processed by different neural mechanisms is integrated to form one coherent representation that includes sensory information stemming from multiple modalities.
This global event can then function as the basis for temporal integration between sensory information that is otherwise processed locally and at different times. If the simple brain time theory adopts this idea of global perceptual endpoints, it has temporal integration built into the thesis of temporal isomorphism.
The thesis now states that sensory features are experienced at the time their global perceptual endpoint is reached, and this happens only when every sensory feature of some event has been processed so that they can be represented together.
Let us give an example of how this solves the problem of desynchronisation. A touch on the toe takes about 200 ms longer to reach the global integration mechanism than a sound. Meanwhile, the world changes every 100 ms. We could imagine that when a sound reaches the global integration mechanism, the mechanism “knows” to wait ~200 ms longer in case a touch on the toe is coming in. Only then does the global integration mechanism deliver its output. It holds the sound signal back for ~200 ms as an unconscious process and uses that sound signal rather than the newer one that has arrived in the meantime, thereby synchronising our perceptions with the relative changes in the environment.
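The waiting mechanism just described can be sketched as follows. This is a minimal toy model, not a claim about actual neural implementation: the delay values are illustrative (sound 50 ms, toe touch 250 ms, matching the ~200 ms gap above), and the mechanism simply broadcasts once the slowest expected feature has had time to arrive.

```python
# Sketch of a global integration mechanism with a global perceptual endpoint:
# fast signals are buffered until the slowest expected signal has arrived.
# Delay values and names are illustrative assumptions.
PROCESSING_DELAY_MS = {"sound": 50, "toe_touch": 250}

def globally_integrate(event_time_ms, features):
    """Return (broadcast_time_ms, percept) for the features of one event.

    Each feature reaches the global mechanism at event time plus its local
    delay; the broadcast happens only once the slowest feature has arrived,
    so all features of the event are bound into a single percept.
    """
    arrivals = {f: event_time_ms + PROCESSING_DELAY_MS[f] for f in features}
    broadcast_time_ms = max(arrivals.values())  # wait for the slowest signal
    return broadcast_time_ms, sorted(features)

# A sound and a toe touch from the same event (t = 0) are experienced
# together, but only once the touch arrives at t = 250 ms.
print(globally_integrate(0, ["sound", "toe_touch"]))
```

Note how the waiting that buys accurate binding carries a cost: a sound alone would have been available at 50 ms, but integration pushes every percept of this event back to 250 ms.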
With this change in place, the simple brain time theory can get the accuracy of temporal integration right. Hooray!
But all is not well: the cost of this solution is that it exacerbates the other side of the trade-off.
Shifting to global perceptual endpoints significantly reduces the speed of perceptual availability. Under this interpretation, perceptions are not made available the moment features are locally processed. The speed of perceptual availability decreases by the time it takes for the slowest piece of sensory information to be processed and play its part in the global broadcast.
Of course, we do not always have to integrate every feature of an event, but even in cases with only two modalities, waiting for the slower signal would increase the neural delay significantly. Moreover, there is no obvious way in which the system could know that it needs to wait longer in some cases and less in others. This means that our perceptual system might always have to wait for information from the slowest modality.
Shifting from local to global perceptual endpoints might solve the problem of desynchronisation, but it worsens the problem of neural delay, and it does so by smuggling delays that compensate for event time into the processing story.
Remember, the consequence of the brain time view is that every delay added to the neural processing of sensory features is added to the time at which we judge some event to take place in the world.
Endorsing global perceptual endpoints would thus lead us to ‘live’ much further in the past than if perceptions were determined by local perceptual endpoints.
The most apparent solution available to the brain time view can thus account for only one side of the trade-off at a time. This is due to its commitment to the thesis of temporal isomorphism: the subjective time of experience is determined by the time of the neural process realising the experience.
I take it that the brain time view’s failure to account for the trade-off is good enough reason to start pondering whether we might develop a theory that avoids commitment to this thesis. In the next post, I will take a first jab at how such a theory might look.