Morality: relative versus objective; is-ought problem

I see what you're saying, Neil, and you've shared some interesting thoughts. I'll share some thoughts of my own along a slightly different tack, because I'm not so sure that the is-ought problem can't be overcome with logic - of a sort - after all: by wrapping its solution up in a single premise. I've been reluctant to formalise the premises which I think lead to the conclusion of the fundamental objective moral principle(s), but it would be possible to do so, and, in doing so, we might be able to "evade" the is-ought problem as a "logical" problem simply through the use of a premise like this:

"If [the supporting premises] hold, then we ought to abide by [the fundamental moral principle]".

(Followed of course by the supporting premises, and the conclusion - that we ought to abide by the fundamental moral principle - by modus ponens).
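Schematically (using S for the conjunction of the supporting premises, M for the fundamental moral principle, and O(·) for "we ought to..." - labels of my own choosing, purely for illustration), the argument would run:

```latex
\begin{align*}
P_1&:\quad S \rightarrow O(M) && \text{(if the supporting premises hold, we ought to abide by } M \text{)}\\
P_2&:\quad S && \text{(the supporting premises hold)}\\
\therefore\ &\quad O(M) && \text{(by modus ponens)}
\end{align*}
```

On this framing, the deductive step itself is uncontroversial; all of the philosophical weight rests on the bridging premise P1.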

We would then support this "wrapped-up solution-premise" to the is-ought problem via the sort of intuitive reasoning-framing which we've each independently shared already in this thread.

So, when I wrote earlier that we have "not yet developed a logic" for converting an is to an ought, this need not be a fatal "logical" problem.

And that it is possible to convert an is to an ought in this way is, I think, necessitated by two facts: (1) that, clearly, we humans do believe in the bindingness of morality, and thus that "oughts" have a fundamental reality to them (they are not mere statements of opinion), and (2) that there does not appear to be any other realistic way of getting to these fundamentally real "oughts" other than through "is"es.

With respect to #2, there are various other possibilities that might be suggested, the prime candidate in our Western culture being divine command theory, but clearly this is inadequate as a solution. If morality is determined by mere decree (even if divine), then it is arbitrary, yet we know on a deep level that it is not. I think we could dispense with all of the other candidates in a similar way, and thus be left with a second reason to accept the "naturalistic" bridging of the is-ought gap: that some bridge is necessary, and this is the only plausible candidate.

With respect to Sam's central thesis - that science can determine the more specific moral rules based on the fundamental one that well-being ought to be promoted and harm minimised - I would be very cautious in accepting it outright. I think that Grorganic's contributions to this thread have been very useful in this respect, in the sense that he might be seen to have suggested that different cultures have different notions of what (specifically) constitutes "well-being" in the first place. If Sam is willing to incorporate the values and will of different cultures into the scientific equation, then this problem might be soluble; however, I think Grorganic's point that much harm can be done by imposing foreign, supposedly objective, moral rules onto a culture is a solid one, and should give us caution about the universalisability of every moral rule. Some of them might very much be culture-dependent. For the sake of brevity I won't illustrate my point with examples. I am not sure whether Sam addresses this anywhere in his book (because I still have not read it) but I would be surprised if he does. He is at times somewhat imperialist in his thinking.

Finally: would anybody object to a separation out of this "Veganism" thread of these latter posts which concern this discussion of moral grounding? It would in my eyes be nice to keep the veganism thread about veganism proper, and split out into a separate thread this more general moral discussion which really doesn't have much to do with veganism itself. @David Bailey might be kind enough to do it for us, otherwise @Alex might be kind enough to (again) give me temporary admin privs to do it myself.
 
Never mind - I seem to still have those privs, so I can perform the split myself, assuming there are no objections.

I'm all for it. It has been a good discussion and others likely do not know of it because it is in a thread on veganism.
 

Laird,

I haven't forgotten about this post. I have actually been going crazy reading about this topic, and I haven't yet come to a conclusion, but I feel I have made tremendous progress.

But first I want to say that your solution wouldn't be a logical one, since plausible reasoning is not logical reasoning. By injecting plausible reasoning into the proposed logical postulate, the argument fails to be a logical one, and the is-ought problem still holds. I feel that it is true that you cannot derive an ought from an is, but what I have come to find is that requiring a logical basis for a science of morality is unreasonable, for it holds a proposed science of morality to a higher standard than even physics. Further - and this is a key point I have come to realize - science cannot even establish an "is" without an "ought." What I mean is that science is not value-free: values go into the very process of science, including what evidence we value, what reasoning methods we value, what methodologies we value, what metaphysical beliefs we value, etc. This is actually a fundamental point of Popper's: observations are theory-laden. Essentially, without values, one cannot even come to establish an "is" about the world. Fundamentally, science has no is-ought problem. It may be true that there is no logical way to derive either from either, but the fact is that science is not founded on logic, and in practice, there can be no is-ought gap.

The other thing that I have been getting into is the field of virtue ethics, which I find highly interesting. It is more along the lines of how many religions, particularly Hinduism and Buddhism, go about morality, in that one has to train oneself in the virtues to "be a good person", rather than just follow consequentialist (utilitarian, act, or rule consequentialism) or deontological duties/rules - after all, not lying merely because there are rules against it, or because there are consequences to lying, does not make you an honest person.

Another area I have been working on I think you may find interesting. It has to do with the development of morality. In the developmental literature there is neuroscientific evidence of how our brains change and are involved in moral judgment as we mature. It appears that we have an innate aversion to harm in others; at an early age we have this aversion and can make a distinction between it and conventional rules, yet we may lack the ability to discriminate between harming a person and an object, and between intentional and accidental harm. As we become older, we see a greater innervation between affective and cognitive systems, which allows us to make finer distinctions and also use the ability to reason.

This, I think, further bridges the gap between our initial views. While it does seem that moral judgments are still based largely on affective systems, it also seems that as we mature we use, or can use, more reasoning and discriminative ability to make better judgments. Someone who may not be as "morally mature" may make mostly rash, emotion-based judgments with little thought, and their emotional systems themselves may be over-active. Yet in someone more mature, the emotional systems are more balanced and contribute to appropriate response, and the ability to discriminate and reason allows this system to make better judgments. It is important, in my opinion, to note that emotions play a rational role, in that they are necessary for appropriate response and to motivate behavior. It is either excessive and uncontrolled emotion or too little (or no) emotional reaction that leads to irrationality.

So with this, and considering virtue ethics, it seems that morality has an innate basis, and that over time we come to express more and more mature moral judgment; certainly our environment/culture shapes these judgments as well. But with training, we can come to make better moral judgments that use the emotions rationally and reduce the cognitive biases inherent in everyday moral judgments.

This, I think, may also address the problem of why moral philosophers don't behave any more morally than non-moral philosophers: without a practice or training of morality and ethics, and with the focus essentially on "puzzle solving" in ethical dilemmas rather than on other aspects of morality such as the virtues, you have merely academic knowledge of certain puzzles, with little to no real-world applicability. It also seems that there is great divide over moral views such as consequentialism (and many views within this), deontology, etc, and I cannot understand why there needs to be just one view that must be right. Perhaps this is from some sort of "logical" desire, but to me I see it to be reasonable that consequentialism (act or rule) can contribute, as well as virtue ethics. Why cannot one come to a reasonable conclusion weighing all these perspectives?

I am working to take this further, because the ability to train and make better moral judgments tells me that there is a way I can link this to the rest of my epistemology. I will let you know what I find.

By the way, perhaps we should move this to the proponents vs skeptics subforum so this will get more attention. I think it's lost over here in the other stuff subforum.
 
Neil, you ethics beast you. Much good stuff in your last post. Glad you're getting into this topic and finding it fruitful.

First, though, before responding to your latest, backing up a bit: I have now read Sam Harris's book, The Moral Landscape, in full, other than some endnotes. I am torn between writing and publishing a full and independent review on my personal website, and responding more contextually but also fully in this thread, and have kind of started on a half-way approach. I'm not sure which way I'll go, or whether I'll finish it. In the meantime, this is my provisional summary review, taken from my working introduction: 'This book's success is at the same time its failing: in taking a very broad view of morality, and rarely "getting into the nitty gritty", most (but not all) of the claims it makes with respect to moral theory, including the fundamental ones, through being uncontroversial and even self-evident, are very agreeable and thus satisfying; however, they are also so general as to be of limited utility, and thus simultaneously unsatisfying'.

I would take pains to emphasise the parenthesised comment though: "but not all". And I would also take pains to note that I am referring solely to Sam's views on morality: his views on metaphysics, including free will, and his views on religion, are mostly (but again, not always) objectionable to me.

Now, let's take your post point-by-point:

  • The argument I proposed is certainly (deductively) logical (i.e. at least valid if not sound), the only question is whether the key premise is cogent. You suggest that it is given force by mere plausible reasoning - but the premises in many other arguments are given force by nothing more persuasive, yet equally "obvious", so ... should this really be a problem?
  • I understand what you're saying about science's "is"es being based on (evidential) values which might be framed as "ought"s. It is only when you say that there is "no logical way to derive either from either" [the "either"s being oughts and ises] that I wonder whether perhaps you're going too far. Perhaps you are taking too strict and limited a view of logic, or perhaps we need a different word which simply is broader, such as "reasonability". Perhaps we can say that from the perspective of "reasonability", if not strict "logic", we can derive an ought from an is.
  • Re virtue ethics: I haven't studied it much but it sounds interesting.
  • Re your thoughts on bridging the gap between our initial views with the idea that "As we become older, we see a greater innervation between affective and cognitive systems, which allows us to make finer distinctions and also use the ability to reason": yes, totally agreed. "Moral maturity" is, like "plausible reasoning", another helpful term that you've introduced into this discussion, and potentially applicable in the context of virtue ethics too.
  • Re your comment that "It is important, in my opinion, to note that emotions play a rational role, in that they are necessary for appropriate response and to motivate behavior. It is either excessive and uncontrolled emotion or too little (or no) emotional reaction that leads to irrationality": yes, this seems very plausible. We are in a constant ocean of choices: if we do not have some "feeling" for which choices are right, before we even exercise our rational and critical judgement to refine them, then decision-making becomes very, very difficult; on the other hand, if our "feeling" for which choice is right is too powerful, then it might eclipse our rational and critical judgement, and, being thus unrefined, potentially lead to a biased and mistaken choice. Further, I'd suggest that the more we consistently exercise our judgement, the more we bring our emotions into line with our judgements (i.e. train our emotions), and the more assistance they can be towards our fulfilment of our self-determined principles. This brings into focus your idea of "practice" trumping "theory" with respect to morality.
  • Finally, I am very sympathetic to this: 'It also seems that there is great divide over moral views such as consequentialism (and many views within this), deontology, etc, and I cannot understand why there needs to be just one view that must be right. Perhaps this is from some sort of "logical" desire, but to me I see it to be reasonable that consequentialism (act or rule) can contribute, as well as virtue ethics. Why cannot one come to a reasonable conclusion weighing all these perspectives?' Well put. Despite my interest in ethics as a field, I have never committed to a single theoretical view; perhaps the closest I've come is to the (unique?) view that I've expressed in previous posts: that ethics is like a "tree of branching principles", although in reality this tree might be more like a "web". But I agree, to stick rigidly to one view might not be so helpful; the best approach is probably one of blending and synthesis. And often, one view can anyway be expressed or formulated in terms of another, even if not 100% accurately.

Apologies for the late response, I have been struggling with motivation lately.

Re moving this thread to the proponents vs skeptics subforum, I'm not so sure this topic is a good fit. It seems likely that proponents and skeptics (of psi / the paranormal) alike might reasonably fall on either side of the questions as to whether morality is objective or relative, and whether or not there is an is-ought gap - i.e. I am not sure what the debate in this thread would be about in terms of the focus of that subforum. I'm open, though, to the views of others. Since Alex has left me permanently with the originally-intended-to-be-temporary privilege of moving threads, I will use it on this thread if there is enough support for doing so. Anybody else want to offer their views?
 

Laird, when you have the review up please send me the link; I would be very interested to read it.

I see what you're saying about the specifics, although on the other hand, I think we also have to admit that the field would be relatively new, and would have to start simply, evolving and refining its terms and how things are measured. I am fairly sure that he discusses this in his book, and he gives the analogy with health, which I found helpful.

The concern I would really have comes from this immaturity of the science, and from the lessons of psychology. Early findings shouldn't be given much weight when it comes to making actual policy changes, since it would be all too easy to create large-scale social experiments. We don't need the equivalent of behaviorism, lobotomies on an industrial scale, and electroshock therapy, especially coming from a science of ethics! How embarrassing if we are too quick to implement large-scale changes - for example, eliminating retributive punishment in the legal system - only to find out that it unintentionally had large-scale negative effects, as a result of a science of ethics!

Additional concerns I have relate to the nitty gritty: given Harris' stance in other areas, how would spiritual/religious well-being be incorporated into overall well-being? Or meaning? This relates to my other major concern, which is that Harris' overall worldview undermines a science of morality. To say we have no free will is to eliminate (or greatly reduce) moral responsibility; moral responsibility is the basis of moral blame and praise, and these act to modify social behavior. The thing is, we have empirical evidence that this is the case, and Harris may wish to lightly dismiss this as he did in his book on free will, but that is just unscientific (he actually dismissed it and then supported his position with a personal anecdote. Yikes!).

For example, one study induced disbelief in free will and then measured levels of forgiveness, finding that the lower the belief in free will, the greater the forgiveness. This might initially seem good, but that would rest on a simplistic view of forgiveness. There is again evidence (especially related to marriages!) that being too forgiving is essentially making a statement that the behavior is permissible. On top of that, by being too forgiving and not sending the message that the act was wrong and bad, and that the person deserves punishment, the transgressor experiences less negative emotion such as guilt or regret. Negative emotions, especially strong ones, can increase reflection and counterfactual thinking, leading to changes in behavior. So eliminating moral responsibility via disbelief in free will eliminates moral blame and increases forgiveness, leading to less learning by transgressors, less modification of behavior, and an overall negative social impact.

There is other evidence that framing mental disorders in neuroscientific ways actually reduces compassion in treatment. It effectively dehumanizes the patients and they get worse treatment from doctors because of it. The problem is, neuroscience is making the claim that neurophysiological states are all that exist, and that mental or psychological states don't really exist and don't cause behavior. This is a troubling view, since we have evidence that framing the problem psychologically leads to more compassionate care.

So the current scientific worldview seems to undermine the science of morality that Harris wishes to propose. I'm totally with him on a lot of his points, but there are serious concerns, particularly with his odd disbelief in free will and his presenting it as "fact", as he likes to say.


Laird said:
The argument I proposed is certainly (deductively) logical (i.e. at least valid if not sound), the only question is whether the key premise is cogent. You suggest that it is given force by mere plausible reasoning - but the premises in many other arguments are given force by nothing more persuasive, yet equally "obvious", so ... should this really be a problem?

I'm with you on the last part, but I wouldn't say "mere" plausible reasoning. Most of science is based on plausible reasoning, not logic, and even within mathematics plausible reasoning is used. We use plausible reasoning to say that a premise is or is not reasonable, and the rest is based on logical formalism. It's usually the plausibility of some premise that is questionable, which is why such arguments are not purely logical.
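As an aside, one common way of making "plausible reasoning" precise is to treat degrees of plausibility as probabilities that get updated by evidence (in the spirit of Polya and Jaynes). A minimal, hypothetical sketch - the function name and all the numbers are my own invention, purely for illustration:

```python
def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Posterior probability of a premise after one piece of evidence,
    via Bayes' theorem: P(H|E) = P(E|H)P(H) / P(E)."""
    numerator = prior * likelihood_if_true
    denominator = numerator + (1 - prior) * likelihood_if_false
    return numerator / denominator

# Start agnostic about the premise (prior = 0.5) and observe evidence that
# is three times likelier if the premise is true than if it is false.
posterior = bayes_update(0.5, 0.6, 0.2)
print(round(posterior, 3))  # 0.75
```

The point of the sketch is only that support for a premise is graded and evidence-dependent, unlike a deductive step, which either goes through or doesn't.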

Laird said:
I understand what you're saying about science's "is"es being based on (evidential) values which might be framed as "ought"s. It is only when you say that there is "no logical way to derive either from either" [the "either"s being oughts and ises] that I wonder whether perhaps you're going too far. Perhaps you are taking too strict and limited a view of logic, or perhaps we need a different word which simply is broader, such as "reasonability". Perhaps we can say that from the perspective of "reasonability", if not strict "logic", we can derive an ought from an is.

I think that you're saying exactly what I am with plausible reasoning. I think that it is perfectly reasonable and rational to create normative statements from descriptive statements regarding our conscious experience. This is especially the case when science uses normative statements to even arrive at the descriptive statements in the first place. I don't get how it could be a one-way street - you can't use values to establish descriptive statements and then turn around and say descriptive statements cannot inform values.

If you want to put the nail in the coffin of this argument, there are two additional points:

1. If you cannot derive an ought from an is, then you certainly cannot derive an is from an ought, yet how would one propose to progress in science without the ought values that are used in evidence and theory evaluation? There is never any logical refutation when it comes to empirical research. To deny creating normative statements from descriptive statements would be to deny using normative statements to create descriptive statements, and therefore undermine science itself.

2. Hume's other problem was that of induction. Logically, induction has no basis: you cannot derive a universal or general statement from a statement of particulars, but science does this all the time. Why would there be such a stonewall on a science of morality based on Hume's is-ought problem when his problem of induction basically says there is no logical basis for science itself? If someone wishes to prop up the is-ought distinction as some way to say that science cannot talk about morality, then it would be fun to hear them defend science's violation of the logical problem of induction.

Laird said:
Re virtue ethics: I haven't studied it much but it sounds interesting.

Here is a relatively short paper that you may find interesting:

http://puffin.creighton.edu/phil/Stephens/History_of_Ethics/Nussbaum~Non-relative Virtues.pdf

Laird said:
Re your thoughts on bridging the gap between our initial views with the idea that "As we become older, we see a greater innervation between affective and cognitive systems, which allows us to make finer distinctions and also use the ability to reason": yes, totally agreed. "Moral maturity" is, like "plausible reasoning", another helpful term that you've introduced into this discussion, and potentially applicable in the context of virtue ethics too.

The more that I read, the more I see a strong developmental component and a trainability. There is also wisdom in Eastern traditions, with meditation as a tool to enhance the virtues. It is part of practice in Vedanta, and the increased virtues also allow meditation practice to deepen, which then further enhances the virtues, and so on. Many contemplative practices in Buddhism also seem highly relevant. It would seem science could study these methods to see what works best.

The other thing that should be recognized is how people tend to learn moral principles from stories and exemplars such as Jesus, Buddha, etc. Sure, there are the 10 Commandments, yet there are stories to support these, and Jesus serves as both inspiration and an example of how to behave. Simply reading some basic rules in a textbook, especially without practice, is not going to be very effective, which I think the research on ethicists not behaving any more ethically suggests.

Laird said:
Re your comment that "It is important, in my opinion, to note that emotions play a rational role, in that they are necessary for appropriate response and to motivate behavior. It is either excessive and uncontrolled emotion or too little (or no) emotional reaction that leads to irrationality": yes, this seems very plausible. We are in a constant ocean of choices: if we do not have some "feeling" for which choices are right, before we even exercise our rational and critical judgement to refine them, then decision-making becomes very, very difficult; on the other hand, if our "feeling" for which choice is right is too powerful, then it might eclipse our rational and critical judgement, and, being thus unrefined, potentially lead to a biased and mistaken choice. Further, I'd suggest that the more we consistently exercise our judgement, the more we bring our emotions into line with our judgements (i.e. train our emotions), and the more assistance they can be towards our fulfilment of our self-determined principles. This brings into focus your idea of "practice" trumping "theory" with respect to morality.

I have come to appreciate emotions even more, especially considering what I mentioned above about how negative emotions can lead to contemplation, counterfactual thinking, and behavior changes. There seems to be a common attitude, especially among male scientists, that the emotional component of moral judgment is evidence of its irrationality - that it is just a matter of emotional opinion. This is used against a science of morality, but it is based on pretty severe ignorance.

Laird said:
Finally, I am very sympathetic to this: 'It also seems that there is great divide over moral views such as consequentialism (and many views within this), deontology, etc, and I cannot understand why there needs to be just one view that must be right. Perhaps this is from some sort of "logical" desire, but to me I see it to be reasonable that consequentialism (act or rule) can contribute, as well as virtue ethics. Why cannot one come to a reasonable conclusion weighing all these perspectives?' Well put. Despite my interest in ethics as a field, I have never committed to a single theoretical view; perhaps the closest I've come is to the (unique?) view that I've expressed in previous posts: that ethics is like a "tree of branching principles", although in reality this tree might be more like a "web". But I agree, to stick rigidly to one view might not be so helpful; the best approach is probably one of blending and synthesis. And often, one view can anyway be expressed or formulated in terms of another, even if not 100% accurately.

It does seem that consequentialist considerations may be more important when it comes to economic policy or political considerations, especially when it is less clear what a "virtuous person" might decide. It would seem that a virtuous person would decide on what has better consequences (while respecting individual liberties)!

Thanks for taking the time to respond in the way you have. This has been very fruitful and I appreciate it.
 
Neil said:
Laird, when you have the review up please send me the link; I would be very interested to read it.

Here it is: http://creativeandcritical.net/reviews/books/the-moral-landscape-by-sam-harris. I took a generally different focus than your very good points below, and even ignored Sam's metaphysical anti-religious, anti-free-will beliefs, other than to note that I differ. Here's my review's conclusion:

"Summing up then: in terms of moral theory, and ignoring certain issues of metaphysics, as well as the book's misguided diversions, The Moral Landscape is a fine piece of rhetoric which argues its case well, and - again, aside from certain of its metaphysics - there is nothing much in it with which to disagree, however it doesn't have much meat on its bones, and, in practical terms, it does not address three serious questions: firstly, how much can science really determine values versus philosophy; secondly, how damaging could the general idea of science objectively determining values be to the autonomy of those, such as involuntary "mental health" patients, in already precarious situations; and, thirdly, can the conclusions of a hypothetical moral science be generally-accepted enough, or at least correctly interpreted and acted upon by the correct people, especially on certain key questions, to be of general social value?".

For brevity, I won't quote your critique of Sam's ideas, but it is well-taken - perhaps you could write your own review?

It seems like we're roughly on the same page with the is-ought problem, logic and plausible reasoning. I won't here cover the same ground covered in the thread you started on this issue.


Thank you, that was interesting. I do see a compatibility between virtue ethics and certain forms of consequentialism, if carefully defined.

I won't quote the remainder of your post - but it's all very agreeable and I appreciate your thoughts!
 