Great post and summary of the progression of thinking in this space. I went through a similar analysis myself when I wrote my book "An Engineer's Search for Meaning". I concluded the book with the following recommendations:
1. Our ultimate (and automatic) goal is to perform actions that expand and enrich life and consciousness in all its forms.
2. A good way to achieve this is to periodically check and realign our actions with the SixCEED Tendencies that the universe itself displays inherently (Coherence, Complexity, Continuity of Existence or Identity, Curiosity, Creativity, Consciousness, Evolution, Emergence and Diversity).
3. A good way to tune into these universal tendencies is to practice mindfulness at all times.
4. Put your highest trust in evidence and reason, but don't turn it into dogma or zealotry, because there are a lot of unknowns, uncertainties and nebulosity in reality. So one must always remain humble, willing to learn and improve.
Excellent and insightful analysis, thank you Jonah! You haven't mentioned Dave Snowden and his influential "Cynefin" approach, which to me seems to lead to something similar, but I'll leave you or others to do the comparison.
At the risk of being predictable to those who know my own tendencies, what I'd add to this (and I don't see it as taking away anything) is the collective dimension. From what I experience as well as what I read, decisions made collectively in the context of a well-functioning group tend to be of better quality than those taken by individuals alone — whatever the depth of their private analysis, reflection and meditation. And this is what I would bring into the conversation. Yes, by all means, work with your own individual "lived experience, emotion and intuition", and then bring that back to the trusted group for collective discernment. That's how Quaker concerns are supposed to work, however seldom it actually happens. And that's similar to what many people have expressed over the ages, from traditional and indigenous circle practices to several contemporary writers.
What I'm most keen on here is to bring to awareness the residual latent individualism carried over from the "modern" paradigm, and to address that, alongside the very helpful questioning you have set out above.
Thanks Simon, I think that’s a great point and I’d suggest it complements what I wrote. I guess I might question whether ‘lived experience, emotion and intuition’ has to be understood in an individualistic way. I’m thinking here of the way Heidegger thinks of the pre-propositional, practical structure of ‘being-in-the-world’ as being structured by ‘being-with’. Vervaeke also speaks of relationship and community in meaning-making, though you may be right that this deserves to be foregrounded more.
What a super point! Well yes... lived experience as shared: many practices like check-in, reflective listening, etc. help us to feel the commonality of lived experience; emotion, shared through empathy, can hop over to the collective realm; and then there's intuition. I guess this is the one where people have least experience of it being collective. Collective Presencing is one practice. Quaker Meeting is another. Maybe the "in" in "intuition" is what suggests it's within an individual? And when I experience a collective sense in that kind of mode, I'm not used to calling it intuition. Quakers call it "ministry". In Collective Presencing it's often called speaking *from* the middle (not just "to" the middle).
Thinking about these things brings me back to a frustrating part of my life where I was trying to make a sensible PhD on extreme uncertainty in wildlife conservation. It seemed to me after a while that it was the moral uncertainty that eclipsed all the others, huge as they were.
And it seems that every attempt to conceptualise uncertainty ends up saying 'OK but there's this even greater and deeper uncertainty outside all that.'
To my left brain - if you like - this is nonsense, pure and simple. There is no such thing as inherently unquantifiable uncertainty. You can't put a probability on something I have never even conceived? Sure I can. If I toss a coin, I can put p(heads), p(tails), p(edge), p(a bird flies off with it), p(something else, because I can't be bothered to keep listing things) and p(something I have never even conceived), and I can think of numbers for all of those, and for any 'even more outside' uncertainty you can conceptualise.
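The coin-toss enumeration can be sketched numerically. The probabilities below are invented purely for illustration; the point is only that catch-all categories let the listed outcomes exhaust the space, so the distribution still sums to one:

```python
# Hypothetical probabilities for one coin toss, including catch-all
# categories for everything not explicitly enumerated. Values are
# made up for illustration, not measured.
outcomes = {
    "heads": 0.494,
    "tails": 0.494,
    "edge": 0.01,
    "a bird flies off with it": 0.0005,
    "something else I can't be bothered to list": 0.001,
    "something I have never even conceived": 0.0005,
}

# Because the catch-alls absorb the remainder, the listed outcomes
# are exhaustive and the probabilities sum to 1.
total = sum(outcomes.values())
assert abs(total - 1.0) < 1e-9
```

Any 'even more outside' uncertainty just becomes another residual entry whose mass is carved out of the catch-alls, which is the left-brain claim being made here.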
This means that 'the left brain cannot appreciate the right brain's wisdom' also looks like nonsense (to the left brain). Give me evidence that I can reach a better result by handing the whole thing over to a wise intuitive thinker (or my own right brain) instead of listing options and probabilities, and - hey - I'll do that.
But then the uncertainty is: why is it my decision?
There's always this point where the analysis turns back on itself.
Looking at your summary of Ord and MacAskill's view gives me a strong dose of that 'trying to climb out of your own head' feeling.
"You could try, for example, to see how the value of working to reduce AI risk varies on different philosophical and ethical approaches, and then weight these values by how likely it is that each of those approaches is the right one. If the impact of working on AI risk still comes out higher than working on climate change, then you still have a clear practical guide to action."
Ok but is that the right way to think about it at all? The idea that there is 'a right one' seems like only the start. What if it's true that just trying to be this analytical about it deadens your soul to the extent you become an evil person? What if ...
'Model uncertainty' is often considered as applying only to quantitative models but any way of thinking about something - including narrative - can be seen as a model. So I don't know if there is really any deeper or more radical uncertainty than model uncertainty. And, when it comes to morality, the model uncertainty is just so vast.
I feel the wisest statement quoted here is Hilary Greaves's 'I think we're just in a very uncomfortable situation.' Damn straight.
I really like and empathise with this, it feels like something I might have said myself a few years ago. But what I’d say now is that you’ve only got half the picture (literally the first half of the article!)
From a left brain perspective, sure, the massive uncertainties are going to be ‘uncomfortable’, and you’ll be asking yourself ‘why is it my decision?’. But that’s because you haven’t fully followed the specific version of the left-brain, cognitive science advice outlined here, which is to - not always, but in specific contexts - hand over control to the right brain, which gives you not only the gut feelings that take you to action, but also a sense of holistic meaningfulness, which can be a comfort that outweighs and can even integrate the left-brain’s discomfort with uncertainty.
You don’t need to know that you’ll get a better result that way - in fact, the right-brain approach is premised on the impossibility of knowing this to the left-brain’s satisfaction.
But the value of the right brain approach in such contexts is not just an intellectual conclusion - it’s something that needs to be practised.
I was a little disappointed to see that your solution to the issues of Effective Altruism was mostly about how to create better decision models, rather than questioning the entire premise of whether such "rational" utilitarian goals are the proper goal of altruism in the first place. I say this as a die-hard utilitarian.
I would suggest that choosing altruism that is meaningful to the giver is actually the better strategy. First, this maximizes the emotional benefits of the altruistic act on the giver, thus reinforcing the altruistic impulse and making future acts more likely. This effect is multiplied when giving is local and community relationships are built. Writing a check that may save lives halfway around the world will not have this impact.
Meaningful giving can also sustain your attention for longer periods of time. When a project has deep personal meaning, you are willing to dive in and keep giving over long periods of time. Get personally involved in the charity. Once again, much bigger possible impact than cutting a check.
This approach also does a better job of distributing resources to all of the causes that need them. If we all chose the cause with the greatest utility, you would end up with a few causes getting all the attention. Causes that just improve quality of life, or only help a small number of people, would be neglected. If you have a loved one with a rare disease, you will dedicate much more effort to curing it than you would if you decided to fight climate change or buy mosquito nets instead.
Effective altruism is great for rich people looking to write checks. For everyone else, find a cause that speaks to your passions and get involved.
Thanks for the comment - perhaps I wasn't sufficiently clear about the way that Wise Altruism is very much about choosing altruistic goals that are meaningful to the giver. Wisdom is where rationality and meaning coincide. So I'd be happy to take your utilitarian argument about the effectiveness of meaningful giving as complementary to what I wrote rather than in tension with it, were it not for the fact that wisdom and meaningfulness are somewhat in tension with the die-hard utilitarianism behind that argument. I think the fact that it's possibly more effective from a utilitarian standpoint to give meaningfully can be a reason to adopt Wise Altruism as an approach, but only if that assessment of utilitarian effectiveness is itself made wisely, i.e. with an awareness of radical uncertainty and of the assumptions underlying that assessment.
The way it was described, it seemed more like wise altruism was advocating an intuitive decision-making approach when models were insufficient, but still accepting the basic premise of EA to try to maximize impact on global suffering. I was just at a conference with John Vervaeke over the last couple of days, so I'm sure he'd also agree that this was just a difference in focus and wording, not a fundamental disagreement.
I'd go so far as to suggest that the overall utilitarian benefit to humanity is actually greater when we all choose altruism that is personally meaningful instead of looking at potential impact.
In general, most of the repugnant outcomes from utilitarian thought experiments break down when you consider the actual emotional reactions they produce in real human societies. So, I'm a die-hard utilitarian in that I really try to focus on which systems produce societies that are measurably happier, not one that is eager to advocate every morally counter-intuitive act that could potentially be justified by a "greatest good" argument.