Two theories of the feeling of meaning
What causes a thing in one's life to feel meaningful or purposeful? Meaning seems like a sort of higher-order motivation: the driving force which creates motivation itself. I will present two mechanisms that I believe might create meaning: low-confidence reward prediction, and association. These theories seem to match our observations quite well, and they are well suited to being elaborated and tested at a lower (cellular) level of neuroscience. Each mechanism is a theory of the creation of meaning in its own right, though I am not sure whether they work together, or whether they are just different formulations of the same mechanism.
1) Reward prediction theory of meaning
Background
a) reinforcement learning
Reinforcement learning describes how animals learn from experience: how the positive and negative outcomes of exploratory actions are used by the brain to adapt behaviour so that future actions are more likely to yield positive outcomes.
Imagine an agent (e.g. an animal) that can take one of a number of actions at each time step; depending on the action chosen, and depending on external influence and chance, the state of the agent changes. Let `R_t` be the reward received at time `t` (the `R_t` are random variables), let `\mathbb{E}` denote the expectation value, and let `g` be a discount factor with `0<g<1` that makes future rewards less relevant for assessing the current state's value. In machine learning, the value `V` of the current state is typically written as `V=\mathbb{E}[\sum_{t=0}^{\infty} g^t R_{t+1}]`, which sums up expected future rewards such that rewards from state transitions far in the future are weighted less than rewards from imminent ones (via the discount factor `g`). The idea is that we can predict the next time step's reward quite well, but our estimate of `\mathbb{E}[R_t]` will be more flawed for state transitions that lie far in the future. This is in line with human behaviour: we prioritise actions that bring immediate rewards, likely because more can go wrong with long-term investments. Dopamine can be interpreted as a signal indicating the received reward compared to the reward the animal expected to receive beforehand: `DA \sim (R-\hat{R})` (`\sim` means proportional to, `R` is the reward, `\hat{R}` the expected reward).
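As a small numeric illustration, the discounted value and the dopamine-like prediction error can be computed for a short made-up reward sequence (all numbers here are arbitrary assumptions, not data from the text):

```python
import numpy as np

# Hypothetical reward sequence R_1, R_2, R_3, R_4 and discount factor g.
rewards = np.array([0.0, 0.0, 1.0, 0.5])
g = 0.9

# Discounted value: V = sum_t g^t * R_{t+1}. Here the rewards are fixed
# numbers, so the expectation reduces to a plain sum.
V = sum(g**t * r for t, r in enumerate(rewards))

# Dopamine as reward prediction error: DA ~ (R - R_hat),
# with made-up received and expected rewards.
R, R_hat = 1.0, 0.4
DA = R - R_hat
```

The later the reward arrives in the sequence, the more it is shrunk by `g**t`, which is the whole effect of the discount factor.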
b) classical conditioning
Classical conditioning is achieved by repeatedly presenting a stimulus B after a stimulus A. Stimulus B needs to elicit automatic / involuntary responses in the animal. Soon, the animal will elicit the same responses already after being presented with stimulus A alone.
Reward prediction
If the response to a stimulus is dopamine release, and stimuli are thought of as states (as in reinforcement learning), then classical conditioning has the effect of reward prediction. Say A repeatedly comes before B, and the brain's original reaction to B is to release dopamine. After some repetitions, dopamine will be released already at state A, and no dopamine will be released at state B, which comes afterwards. By this mechanism, our brain shifts reward signals to earlier times by predicting future rewards. This is useful because we can then pick the best action simply by choosing the one with the highest expected dopamine release. Long-term payoffs of an action are considered if and only if they have already been classically conditioned to elicit an immediate reward. In symbols, this mechanism can be represented as:
Before conditioning: A-->B-->C(R) (I will also write this as A-->B-->C-->R with sloppy notation)
after conditioning: A-->B(R)-->C
even later: A(R)-->B-->C
Now what does this look like for long prediction chains A-->B-->C-->.....-->R, where the reward is expected only after a long chain of uncertain state transitions? By the principle of the discount factor from reinforcement learning, we can expect that after conditioning A will be rewarding, but we can also expect that reward to be lower than R was before conditioning.
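The backward shift of the reward signal can be sketched with a minimal temporal-difference (TD(0)) simulation, one standard way to model this kind of learning; the states, rewards, learning rate and discount factor below are made up for illustration:

```python
# States A -> B -> C, with a reward delivered on entering C.
# Repeated episodes let the predicted value creep backward from C to A,
# mirroring how conditioning shifts the dopamine signal earlier.
g, alpha = 0.9, 0.1              # discount factor and learning rate (assumed)
V = {"A": 0.0, "B": 0.0, "C": 0.0}
episode = ["A", "B", "C"]
reward = {"A": 0.0, "B": 0.0, "C": 1.0}

for _ in range(500):
    for s, s_next in zip(episode, episode[1:]):
        # TD error: the dopamine-like signal -- received reward plus
        # discounted predicted future reward, minus the old prediction.
        delta = reward[s_next] + g * V[s_next] - V[s]
        V[s] += alpha * delta
```

After training, `V["B"]` approaches the full reward of 1, while `V["A"]` approaches only `g * V["B"]`: the earlier state does predict the reward, but discounted by `g` per step, exactly the pattern described above.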
To summarize:
1) We predict rewards, and dopamine is released as soon as the predicted value of the current state increases. For example, if food is immediately rewarding, then cooking predicts that reward, and therefore part of the dopamine could be released already during cooking, or maybe even during buying groceries.
2) Events far in the future weigh less in the estimate of the current state's value.
My theory for how meaning relates to reward prediction is the following:
'Unlikely or future rewards make meaning instead of rewards.'
Schematically, if A-->B-->C-->D-->R, then A causes a brain state filled with the feeling of meaning, while D causes a brain state filled with the feeling of reward. Long chains and complex networks of states which cause each other have a meaningful state at the beginning if they have some reward states at the end. This theory can explain pretty much everything I can come up with that causes a feeling of meaning:
Some examples:
learning a sport as a young child is meaningful because it predicts a number of potential rewards: you might become a professional or a teacher of the sport, you might live a longer life due to health advantages, you might find it easier to learn other sports, ....
Believing in a religious system is meaningful because the afterlife is far in the future and not 100% guaranteed to exist.
Social relationships with people are meaningful because the payoff is typically not immediately clear. Potentially long lasting connections are meaningful, while immediately paying off connections are rewarding, but not meaningful.
In all these examples I would argue that they cause meaning as opposed to dopamine release, because the reward predicted from the situations is far in the future and unsure. Here, I assume the feeling of meaning is caused by something other than dopamine. This may be false, but my point stands. In my opinion the theory works quite well in practice. It would be interesting, however, to find an evolutionary reason for a distinction between meaning and reward.
2) Association theory of meaning
This theory postulates that the things in our lives (things = events in the past and future, objects, people, skills, hobbies, etc.) have intrinsic meaning (which is not further explained by this theory) and inherited meaning, which is caused by the association of a meaningful thing A with another thing B. Inherited meaning is attached to thing B in this situation, proportional to the level of meaning of thing A and proportional to the strength of the association. This creates a network of things, and the process of meaning creation can be written as:
`a_i \leftarrow \sum_j w_{ij} \cdot a_j + b_i`, where `W=(w_{ij})` is a symmetric matrix (corresponding to the association strengths), `b_i` is the intrinsic meaning of thing `i`, and `a_i` is the level of total meaning of thing `i`. The arrow `\leftarrow` indicates that this is a step in a process, an update to the current vector `a`. The equation is similar to the activity update rule of artificial neural networks. One could call `a` after one step a first-order meaning vector, or one could let the process run for an infinite time (and introduce stabilisation terms so that it converges).
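A minimal numeric sketch of this update, with made-up things and association strengths (thing 0 = your friend, thing 1 = the sport):

```python
import numpy as np

b = np.array([1.0, 0.5])          # intrinsic meaning b_i (assumed values)
W = np.array([[0.0, 0.3],
              [0.3, 0.0]])        # symmetric association strengths w_ij

a = b.copy()                      # start from intrinsic meaning alone
a = W @ a + b                     # one step: a first-order meaning vector

# Iterating the update converges to the fixed point (I - W)^-1 b as long
# as the largest |eigenvalue| of W is below 1 -- this plays the role of
# the stabilisation mentioned in the text.
for _ in range(100):
    a = W @ a + b
```

After the first step, each thing's meaning is its intrinsic meaning plus a fraction (here 0.3) of the other thing's intrinsic meaning, which is exactly the inheritance described above.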
Some examples:
Your friend gives you intrinsic meaning, and doing sport gives you intrinsic meaning. Consequently, doing sport with your friend creates two-way inherited meaning. Now even if you do the sport alone, it will be more meaningful (the meaning inherited from your friend via the newly created association), and if you hang out with that friend (but don't do the sport at that time) it will still feel more meaningful, because it reminds you of the sport.
Having your friend and your brother get to know each other may create inherited meaning.
Learning about Hopfield networks, an idea which connects neuroscience and physics may create additional inherited meaning (if you like the subjects, that is!).
All of these examples have inheritance in both directions, but one-way inheritance is also possible. Meaning is felt not at the moment an association forms, but when engaging with one of the associated things.
'Associations create meaning' - why is that? I argue that the association of a thing A with a thing B makes it likely that A helps improve B in some way, and if B is meaningful, that makes A meaningful. For example, if you do sports with your friend, that friend may help you with the sport, or in the other direction, the sport may help your friendship. This is an opportunity to explain the association theory through the reward prediction theory: a strong association between A and B, where B is meaningful (i.e. is predicted to cause long-term rewards), implies a high likelihood that knowledge or skills in A cause a reward via the mediator B. This association effect would be an alternative reward prediction method to classical conditioning.
3) Conclusion
The literature has ideas which seem vaguely related, but I could not find anything concrete or comparable.
Steger (2012) defines: "Meaning is the web of connections, understandings, and interpretations that help us comprehend our experience (coherence) and formulate plans directing our energies to the achievement of our desired future (purpose). Meaning provides us with the sense that our lives matter (significance), that they make sense (coherence), and that they are more than the sum of our seconds, days, and years (significance).". The idea of coherence may have something in common with my association theory, although the quote may also be referring to semantic meaning in the sense of relational frame theory (which states that semantic meaning comes from associations between concepts). The definition of purpose as something relating to our (long-term) future is in line with my argument that meaning is a long-term reward prediction.
Heintzelman et al. (2014) argue that the feeling of meaning provides information about coherence: it is adaptive for an organism to strive for predictability (e.g. routines, a familiar environment). This can also be seen as an example of meaning through associations: a familiar environment is a context in which there are many associations between things, so the association theory explains the high sense of meaning in familiar environments.
I should also mention that both theories are suitable to be connected to the neuronal level. Hebbian learning is the very basis of our understanding of learning at the neural level. It states that "neurons that fire together wire together", meaning that two neurons strengthen their connection as a result of simultaneous firing. In its simplest form, the resulting equation is `\Delta w_{ij}=\eta x_i x_j`, meaning that the change of the connection strength `w_{ij}` from neuron `j` to neuron `i` is a constant `\eta` times the activity of neuron `i` times the activity of neuron `j`. This idea can be used to model reinforcement learning in a structurally valid way (Miconi 2017, "Biologically plausible learning in recurrent neural networks reproduces neural dynamics observed during cognitive tasks").
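For completeness, here is the Hebbian rule written for a small network at once, as an outer product over all neuron pairs (the learning rate and activities are arbitrary example values):

```python
import numpy as np

eta = 0.01                        # learning rate (assumed)
x = np.array([1.0, 0.0, 2.0])     # activities x_i of three neurons

W = np.zeros((3, 3))              # connection strengths w_ij (j -> i)
W += eta * np.outer(x, x)         # delta_w_ij = eta * x_i * x_j
```

Connections between co-active neurons (here 0 and 2) are strengthened, while any pair involving the silent neuron 1 is left unchanged -- "fire together, wire together" in matrix form.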