
Not even morality at all?

mknorman

New Member
There have been a couple of threads that have touched on the subjectivity or objectivity of morality, but I'm wondering if morality has even been properly identified.

The issue here is that most of us don't refrain from homicide, for example, strictly because of a moral law. That is, we aren't walking around thinking, "I'd really like to kill someone (or someone in particular)," and then refraining because we either remember a moral prohibition or run up against some internal moral check. I suspect that most of us so refrain for pragmatic reasons, if we are 'refraining' at all, and, what is more likely, that the urge just never comes up for most of us. The latter is to say, to some extent, that for the vast majority of people in developed nations, civility has replaced morality.

So, the question is, "Is this 'morality' really just a stand-in for the pragmatism of civility? And, if so, is it even important to ask if it is subjective or objective?"
 
GoodKat
It appears to me a deed has to pass through three moral/ethical barriers before it can be committed:

1. Instinctual morality - controlled by the emotions; the code present in all animals. If a deed cannot pass this barrier, it usually isn't even considered, though much depends on the emotional state of the person.
2. Personal standards - the barrier most claim to be objective, though it forms based on one's society anyway; it is simply the person considering what is right and wrong. It can be subverted when one is too emotionally charged to consider whether or not an action is just.
3. Pragmatic behavior - the weighing of risks, rewards, and consequences. It can also be circumvented when one is too impassioned to consider consequences.
 
Ozymandyus
If you wish. It greatly narrows the range of the discussion, as pragmatism is really a specific form of morality. Pragmatism gives you a code of conduct based on judging right or wrong by an action's consequences. It is a large part of my own understanding of what is right and wrong. It's a little narrower than speaking of codes of conduct inclusively, but it works for me. Most codes of conduct have pragmatism as their basis, knowingly or not, so it doesn't do the discussion much harm.

Civility likewise IS a form of morality, whether it is reasoned or not - it is a code of conduct that defines things as right or wrong. It certainly doesn't explain all moral decision making, and is too narrow to really discuss broadly, in my opinion. It clearly varies from culture to culture, and is something we can all agree is mostly trained into people by their surroundings. Civility is, as you mention, mostly grounded in pragmatism (though it has other roots and follows many of the same rules as evolution: mutation, drift, etc.).

Anyway, if we are accepting this view of morality as pragmatism, the real question of subjectivity vs. objectivity isn't answered and is still important. Pragmatism can give us multiple answers based on our frame of reference; there can be multiple pragmatically best options for a single moral decision (assuming we know all the consequences, which is a different story of course, and part of why we use other considerations in our moral decision making) - one based on the best consequences for ourselves (most subjective), another for our culture, another for the human race as a whole, or the human race across its entire future (the most objective). The degree of subjectivity in the action, and what is the right choice, is still a very important question.
 
mknorman
Ozymandyus said:
Anyway, if we are accepting this view of morality as pragmatism, the real question of subjectivity vs. objectivity isn't answered and is still important. Pragmatism can give us multiple answers based on our frame of reference; there can be multiple pragmatically best options for a single moral decision (assuming we know all the consequences, which is a different story of course, and part of why we use other considerations in our moral decision making) - one based on the best consequences for ourselves (most subjective), another for our culture, another for the human race as a whole, or the human race across its entire future (the most objective). The degree of subjectivity in the action, and what is the right choice, is still a very important question.

To be clear, I'm not saying that morality is pragmatism. If anything, I'm suggesting that morality might be irrelevant because pragmatism always intervenes before the decision ever gets a chance to become moral.

What you seem to be indicating above is that values are subjective (or pragmatic), not that morality is subjective (or pragmatic, respectively), though either would likely follow if morality flows from values. But, if we allow values to be subjective, or allow them to flow from a perspective, then we're in a real pickle. For example, both pure communism and pure capitalism could be either wholly good or wholly bad/evil, and the result would depend merely on the effectiveness of the indoctrination mechanism of the respective governments implementing each plan. Furthermore, their communism could actually be evil to me, a capitalist, and my capitalism could actually be evil to them, as communists.
Ozymandyus said:
...or the human race across its entire future (the most objective).

This is the one I find most interesting. What may be best for the entire species for all eternity may be, in fact, for us to remove all safety nets and actively fight/compete with each other ruthlessly, in order to apply the most selective pressure on the gene pool. Furthermore, the weight of all future generations, compared to our generation, makes the choice infinitely easy. 'The needs of the many...,' quoth Spock, and we are very few indeed.

I do think, however, that you are incorrect in considering a morality for all time as equivalent to the most objective. All we've done is enlarge the perspective of the valuating effort to include a larger span of time. One can still ask whether, for instance, we want to produce a race of warriors or of philosophers over the course of the next million years. (These are obviously not the only choices. Not that admitting this gets us any closer to a choice.)

All this is to say two things. First, the question only got interesting when we tried to define what we were talking about. That is, what really is the referent we intend to point to when we say 'morality.' Second, your very interesting post raised a lot of questions for me, and some of those questions seem to have immediate, even if disturbing, answers.
 
Ebon
Would I be at all wrong for suggesting that morality is pretty much subjective, save for that which GoodKat called "instinctual"?

Basically, morality in its broadest sense is little more than a code of right and wrong. Right and wrong, however, are not absolutes and are not obtained or shaped through any objective means, as we must be taught right from wrong as we develop, and ever after. Something truly objective cannot be shifted, changed, or denied. Morals can be, sometimes very easily so.

That's just my extremely narrow take on it.
 
mknorman
Ebon said:
Would I be at all wrong for suggesting that morality is pretty much subjective, save for that which GoodKat called "instinctual"?

First, it seems to me that what GoodKat might call instinctual morality is still subjective, in the sense that each of us will have a slightly different expression of the trait. Maybe this is playing fast and loose with the term 'subjective,' but I think that may be justified here. (Eye color may be called 'objective' if we have the categories 'blue', 'green', 'brown', etc., and can 'objectively' state which eye color you have. However, if we say that you have a 'blue' morality and I have a 'green' one, I think we would still say that morality is subjective, even if the enumeration of them were objective.)

Second, I probably doubt the existence of this instinctual morality. As an analogy, children raised with little or badly broken social interaction often develop into adults who, for instance, don't lack a sexuality but rather have an inappropriate one. That is, it is activated at inappropriate times and not subdued at appropriate times. Similarly, it may be that 'moral senses' develop, more than they arrive naturally and instinctively.
Ebon said:
Basically, morality in its broadest sense is little more than a code of right and wrong. Right and wrong, however, are not absolutes and are not obtained or shaped through any objective means, as we must be taught right from wrong as we develop, and ever after. Something truly objective cannot be shifted, changed, or denied. Morals can be, sometimes very easily so.

I don't think we gain anything by saying that 'morality is a code.' If we mean that 'a morality' is just 'a set of behavioral rules,' then we absolutely have not answered the question about what a moral action is. (Is it just 'an action allowed by the code'? If so, what new meaning have we given to morality by calling it a code?) It's here where all sorts of pointless discussions take off. Person A can say, "Well, by changing the code we can change what is good or bad." Person B can say, "We can't change the code, so good and bad are fixed." Both of these formulations look like propositions, but they are really tautologies. A is saying, "We can change the code, so we can change the code," and B is saying, "We can't change the code, so we can't change the code." The issues are untouched. Most absurdly, any given action can be made infinitely good, bad, evil, or indifferent by applying the appropriate code to it. ('Action C is infinitely good|bad|evil|indifferent,' for example.) Indeed, your closing, 'Morals can be, sometimes very easily so,' is an instance of this very phenomenon.

Here is where I, for now, have to be a moral skeptic. I think my original point tentatively stands, that we do not refrain, in practice, for moral reasons, but for biological or social reasons. In other words, morality is completely irrelevant in that it is not a useful tool for conduct. It really seems that humans restrain themselves out of pragmatism and fear of reproach. The very fact that "Would you do X if there was a guarantee you wouldn't get caught?" is an interesting question to most people is a tacit admission that fear is the main inhibitor of behavior. Here is where I say that nobody knows what they're talking about when they ask if morality is objective or subjective.
 
felixthecoach
We go back to the same problem. A sense of right and wrong comes from
1. culture
2. evolutionary past
3. individual variation


Culture is responsible for all kinds of variety in the study of morality. As relative as culture seems to be, morality will seem to be just as relative. However, there are some common expressions of human nature--probably expressive of evolutionary past--among most if not all cultures:

1. division of labor
2. reciprocal altruism (eye for eye, tooth for tooth)
3. similar facial expressions for similar emotions (e.g. fear, anger, disgust, happy, neutral, etc)
4. general avoidance of incest
5. religious interaction and belief.

It stands to reason that if these things affect most/all cultures, they will affect what the people in those cultures think is morally right and morally wrong.

I guess one thing we can recognize is that even if human nature wants to insist on common actions like dividing labor between the sexes, or using religious dogma to control the poor, we have to reexamine what we consider "morally right". In short, even if morality initially springs from our evolutionary past, morality HAS to be subjective in order to be open to new ideas and suggestions. That is, if we want to improve quality of life for everyone overall.


EDIT: I guess that means I agree mostly with what MK posted above, heh...
 
Ozymandyus
felixthecoach said:
We go back to the same problem. A sense of right and wrong comes from
1. culture
2. evolutionary past
3. individual variation
You are forgetting the role that perceived consequences play in shaping a sense of right and wrong. That seems to be a large part of the force behind what is culturally accepted, and is certainly the fuel for the evolutionary history. It is working behind our morality and also In Front, as a better understanding of consequences pulls our societal codes of conduct Forward.

The reason that I disagree with the purely relativist view of morality that would seem to follow from some of the assumptions above is that we are gaining a better understanding of our actions: both where they come from and what their consequences are. This understanding is a force behind a FORWARD progress for our morals that implies an objective morality.

As we become aware of the underlying assumptions that fuel our decisions, and how they get formed, we are more able to rise above them, and take them into consideration when trying to make decisions and when forming and altering our societal code of conduct. This gives us more material for our pragmatist ethics to work with and helps us make better decisions about how society should operate. Our moral code is BETTER than our forefathers' because we know more about human nature; we know more of the consequences and root causes of our actions, and that allows us to make objectively better pragmatic decisions.

Or at least that's how I see it.
 
felixthecoach
Ozymandyus said:
You are forgetting the role that perceived consequences play in shaping a sense of right and wrong.

I don't disagree with you. My explanation of culture might have been a little narrow. Maybe I should say "our immediate environment" rather than culture.
 
mknorman
Ozymandyus said:
The reason that I disagree with the purely relativist view of morality that would seem to follow from some of the assumptions above is that we are gaining a better understanding of our actions: both where they come from and what their consequences are. This understanding is a force behind a FORWARD progress for our morals that implies an objective morality.

As we become aware of the underlying assumptions that fuel our decisions, and how they get formed, we are more able to rise above them, and take them into consideration when trying to make decisions and when forming and altering our societal code of conduct. This gives us more material for our pragmatist ethics to work with and helps us make better decisions about how society should operate. Our moral code is BETTER than our forefathers' because we know more about human nature; we know more of the consequences and root causes of our actions, and that allows us to make objectively better pragmatic decisions.

Or at least that's how I see it.

To say that a 'FORWARD progress for our morals...implies an objective morality' is, I think, to argue in a circular fashion. How would we differentiate between moral change that progresses toward a target 'objective' morality and moral change that is merely random or even directed toward a subjective morality? To do so, we would have to know what that objective is, so that we can compare the changes we see to the thing to which morality aspires. If we then know what this 'objective' morality is, why not cut to the chase and make all the changes at once?

This is a teleological argument, in essence, in that it says that objective morality exists because of the purpose evident in moral becoming. In other words, the moral sense exists for the function of finding objective morality; therefore, objective morality exists.

Imagine that a researcher studying a troop of monkeys notes that these monkeys, at times, kill for revenge or for sport. Will a philosopher come along and say that these monkeys 'act immorally'? What if further research finds that this trend either increases or decreases over time? Will this tempt the philosopher of monkey morality to argue that there is an objective monkey morality?

Instead, what the researcher notes is that monkey A is more likely to die at the hands of monkey B because of past action C. In addition, the researcher might find within the troop a learned aversion to action C, or an aversion to the revenge killing, or an aversion to allowing A to escape, or absolute indifference. The point of all this is that the researcher is not saying that it is wrong for B to kill A. He simply notes it as a phenomenon. He does the same for the various learned aversions or indifference noted above.

Moral judgment could only enter when we value one outcome over the other. When we do this, we feel comfortable in calling our invented rules 'morality' and to say that they spring from a 'moral sense.' Note how silly we would feel if we allowed ourselves to be morally outraged at the revenge killings inside of a troop of monkeys.

That this is so, I hazard, betrays why we moralize and why some of us want there to be an 'objective morality.' Monkeys are hopelessly outmatched against us, and do not enter into the sphere of things that frighten us. We recognize immediately that any 'moral code' that they have is peculiar to their situation, their biology, and their interests. Humans are a different story. Humans frighten us and, I suspect, we sublimate this fear into 'morality.' It's an implicit argument from consequences, which says, "If there is no morality, then the awful harm that they can do me, which horrifies me, is not evil. This awful harm must be evil, therefore there is an objective morality."

It's probably not that morality is subjective or relative, but that it is a chimera.
 
Ozymandyus
First, you are misquoting/quote mining there... I said that our improved understanding of consequences (which is simply a fact), which is synonymous with an improvement in our pragmatic ability (our ability to make good decisions based on consequences), is an implication of objective morality.

As I said, the noticeable forward progress is based upon our objectively improved ability to predict consequences while working under the pragmatist code of conduct. Denying that we are now objectively better at prediction is silly. We have learned from our mistakes; that has been part of this whole human experiment. We haven't learned everything, nor made every mistake, but we are undoubtedly learning.

I'm not saying I can tell you every movement that has been a forward improvement in morality, but I am asserting that there has been a general shift in that direction because of our previous experiments in morality (we've learned from history what works, to some extent), because science has given us new tools to determine consequences, and because we know far more about what affects human behavior.

To me, it's a scientific process. We narrow in on a theory of morality, we see where it's wrong, and then we improve that theory. It's not perfectly scientific by any means, and there are people with intelligent-design ideas trying to put their stamp on the theory, but we are still generally getting a better theory out of the process. Every time we disprove a piece of the theory, it gets stronger and closer to what is really going on.

Sorry, I wrote this post hastily and without properly addressing your specific post, because I'm cooking... mmmm Masaman curry.
 
ebbixx
mknorman said:
It's probably not that morality is subjective or relative, but that it is a chimera.

Or at least an invention that appears to be unique to humanity. Excellent argument.

I would only add that, in trying to give the "objective morality" case a fair hearing, I kept running into this:

Unless I'm badly misreading Ozymandyus' case for an objective moral code, it seems to hinge on the notion that behaviors that benefit the long-term survival of humans are "objectively moral" acts, and those that reduce survival chances are "objectively immoral."

I have at least two issues with this, if it is what Ozy is arguing:

One, extinction happens sometimes regardless of the behavior of a species. In that perspective, our actions are largely meaningless, at least where survival is concerned.

Two, can we truly predict all actions that might or might not prove detrimental to long-term species survival? Or are we left trying to infer the nature and likely outcome of those actions?

In a very long-term sense, it may or may not turn out that intelligence, technology, tool making, and human civilization led to a situation where we made our own long-term survival untenable, due to any of a number of ecological catastrophes, or our inability to find a fix in time, or other possibilities that may rely as much on chance as they do on intentional acts. I know of no calculus that necessarily and accurately predicts those effects. It's at least possible that our ingenuity at technological fixes will ensure our survival for a much longer period of time than anyone has imagined.

Unless we can predict outcomes to a very high degree of accuracy, I find it hard to say which of our actions and behaviors will prove beneficial or detrimental to long-term species survival.

As I said, though, I may also have badly misread what Ozy is arguing here. I look forward to any clarifications he might offer.
 
Ozymandyus
ebbixx said:
I would only add that, in trying to give the "objective morality" case a fair hearing, I kept running into this:

Unless I'm badly misreading Ozymandyus' case for an objective moral code, it seems to hinge on the notion that behaviors that benefit the long-term survival of humans are "objectively moral" acts, and those that reduce survival chances are "objectively immoral."
Though I believe this is Part of the basis of what would be objectively moral, I don't think it's the whole basis. Such a system would look at human needs/desires, of which survival is one, and look at which decisions will have consequences that best fulfill those needs. For example, seeking pleasure and avoiding pain is a near-universal quality of humankind. (Though there are some exceptions, they often fall into a blurrily defined region where pain and pleasure meet in sex, or have a basis in psychological trauma: cutting, etc.)

We derive much of our code of conduct by seeking to maximize pleasure and minimize pain while not compromising our survival. A better understanding of human behavior, consequences, and other knowledge that we have gained across our history has improved our ability to fulfill these needs and encoded some of that knowledge in our codes of conduct. There are other things that science suggests are intrinsic human needs/desires, such as a feeling of control over our circumstances, that also play a role in what I think of as moral progress.
I have at least two issues with this, if it is what Ozy is arguing:

One, extinction happens sometimes regardless of the behavior of a species. In that perspective, our actions are largely meaningless, at least where survival is concerned.

Two, can we truly predict all actions that might or might not prove detrimental to long-term species survival? Or are we left trying to infer the nature and likely outcome of those actions?
Oh, certainly not - but we are getting BETTER at predicting and controlling outcomes. This is where science and understanding as an intrinsic human need come into play. This ability must be given great priority in a good code of conduct - that is, we should, and hopefully someday will, have a great deal of our conduct point us towards better understanding (more emphasis on the importance of education, on the goodness of being smart, etc.). I predict that our code of conduct will continue to head in this direction (eventually, after some backlash settles down), particularly in terms of letting science play a bigger role in determining our societal direction. If morality were just a drifting set of social norms, I could never make any such predictions.

Sorry, sometimes I get excited and write quickly; I should probably edit more for clarity. Oh well, I'm not writing a thesis statement or anything here.
In a very long-term sense, it may or may not turn out that intelligence, technology, tool making, and human civilization led to a situation where we made our own long-term survival untenable, due to any of a number of ecological catastrophes, or our inability to find a fix in time, or other possibilities that may rely as much on chance as they do on intentional acts. I know of no calculus that necessarily and accurately predicts those effects. It's at least possible that our ingenuity at technological fixes will ensure our survival for a much longer period of time than anyone has imagined.

Unless we can predict outcomes to a very high degree of accuracy, I find it hard to say which of our actions and behaviors will prove beneficial or detrimental to long-term species survival.
I agree again that we certainly do not have all the tools to make these decisions. But we are GETTING and improving those tools. We are getting better at making informed decisions; we have learned a lot from our past attempts at society... Our ancestors thought it was in their best interest to sacrifice virgins when there was a bad crop; now we can engineer our crops to survive the worst droughts. This isn't just a scientific advancement, it's a change in the way we act. We solve things with science, not prayer and ritual. It's a choice, a good choice. A moral choice. A permanent change in our code for the better (as soon as we get rid of some of the rest of these silly belief systems, which for the most part no longer lay claim to such decisions).

Anyway, if we can make it a few hundred years more, we will most likely have all the tools to ensure the survival of our species, or something very much like it, for several billion years into the future. Even a massive world event like a meteor would almost certainly not wipe us out right now. It would make life suck, but it would not wipe us out. Pretty much anything short of a world-ender has a very low chance, by my calculations, of completely wiping us out, even something serious like global warming or large meteors or whatever doomsday event we have in mind... we are only just starting to scrape the surface of what unleashed science can do.

Again I apologize for the length. I probably still didn't clarify my position, I apparently don't write very clearly... or is it think? I'll be looking into it. Thanks for the discussions, I really enjoy them.
 
mknorman
Ozymandyus said:
First, you are misquoting/quote mining there... I said that our improved understanding of consequences (which is simply a fact), which is synonymous with an improvement in our pragmatic ability (our ability to make good decisions based on consequences), is an implication of objective morality.

As I said, the noticeable forward progress is based upon our objectively improved ability to predict consequences while working under the pragmatist code of conduct. Denying that we are now objectively better at prediction is silly. We have learned from our mistakes; that has been part of this whole human experiment. We haven't learned everything, nor made every mistake, but we are undoubtedly learning.

In these paragraphs, it seems like 'good', 'improvement', and the like are being used to describe the statistical likelihood of things turning out as planned in light of our actions. It's important here to distinguish this from 'moral good' and 'moral improvement.'

And here comes the leap:
Ozymandyus said:
I'm not saying I can tell you every movement that has been a forward improvement in morality, but I am asserting that there has been a general shift in that direction because of our previous experiments in morality (we've learned from history what works, to some extent), because science has given us new tools to determine consequences, and because we know far more about what affects human behavior.

It's this leap from the pragmatic as 'effective in obtaining what was sought' to pragmatic as 'effective for obtaining moral goods' that I object to. We can't get an 'ought' from an 'is' so easily, and certainly not as the result of an equivocation.

What is confused here is the nature of the referent of scientific laws vs. the referent of moral laws. Scientific laws refer to the physical world, to observable reality. Moral laws refer to the relations between actions and values. It is these values that are at the core of the issue. When we talk of morality as being subjective or objective, we can mean one of at least two things. 1) Values are permanent, universal, and knowable, but the rules about how to attain them are not, or 2) values are fleeting, individual, or mysterious, so no fixed code can hope to tell us how to act in light of this.

Ozymandyus said:
To me, it's a scientific process. We narrow in on a theory of morality, we see where it's wrong, and then we improve that theory. It's not perfectly scientific by any means, and there are people with intelligent-design ideas trying to put their stamp on the theory, but we are still generally getting a better theory out of the process. Every time we disprove a piece of the theory, it gets stronger and closer to what is really going on.

Sorry, I wrote this post hastily and without properly addressing your specific post, because I'm cooking... mmmm Masaman curry.

The bold claim--snicker--above is precisely what, I argue, is going unproven. And I don't mean this in the sense that we are vacillating between differing scientifically derived 'moral theories' and we're not making progress toward an ultimate one. What I mean is that science doesn't tell us what the values are from which we would derive a morality. Science is not a valuating effort. It can't tell us if we should, for example, strive to be a race of philosophers or a race of warriors. It may be able to tell us if the outcome of either strategy would result in an increase in our number, or in an improved quality of life (as specified by some measure), or in our extinction. What it does not tell us is what we 'should' do. How, for example, would science inform us of whether or not it is beautiful or desirable to see the human race winnowed down by combat until only a handful of shining master warriors stand atop a pile of skulls meters deep? It may be a scientific or statistical certainty that my head will be in the pile rather than atop one of the warriors, but what makes me value or devalue that outcome is something other than (statistical) pragmatism. (Though a self-centered moral pragmatism may come into play, namely that I'd forgo the epic beauty of that fateful day in favor of a scheme that keeps my head on my shoulders.)

Again, I think the confusion is that 'a theory of morality' is not the same thing as 'a theory of outcomes.'

I think I've en passant shown that I have not misread, misquoted, nor quote-mined, but I'm certainly open to correction on that.

How was the curry?
 
Ozymandyus
mknorman said:
In these paragraphs, it seems like 'good', 'improvement', and the like are being used to describe the statistical likelihood of things turning out as planned in light of our actions. It's important here to distinguish this from 'moral good' and 'moral improvement.'

It's this leap from the pragmatic as 'effective in obtaining what was sought' to pragmatic as 'effective for obtaining moral goods' that I object to. We can't get an 'ought' from an 'is' so easily, and certainly not as the result of an equivocation.
Ah, but we are learning along the way to refine what we seek as well as our means of seeking it. It's not simply that we are becoming better at knowing the consequences of our actions; we are also becoming better at knowing the sources of our actions and human behavior patterns.

We are learning what it is to be human, what it is we need and want and why we want it. We are learning it scientifically. The movement from knowing what we need and want to fulfilling those needs and wants is the pragmatic code of conduct. Learning what it is we want and need is done somewhat scientifically, whether in the humanities and the behavioral sciences or through historical trial and error, evidence, and the reformulation of codes of conduct. We look at why past codes didn't work and learn more about how humans act in given circumstances and some of the basis of those reactions.
What is confused here is the nature of the referent of scientific laws vs. the referent of moral laws. Scientific laws refer to the physical world, to observable reality. Moral laws refer to the relations between actions and values. It is these values that are at the core of the issue. When we talk of morality as being subjective or objective, we can mean one of at least two things. 1) Values are permanent, universal, and knowable, but the rules about how to attain them are not, or 2) values are fleeting, individual, or mysterious, so no fixed code can hope to tell us how to act in light of this.

And I don't mean this in the sense that we are vacillating between differing scientifically derived 'moral theories' and we're not making progress toward an ultimate one. What I mean is that science doesn't tell us what the values are from which we would derive a morality. Science is not a valuating effort. It can't tell us if we should, for example, strive to be a race of philosophers or a race of warriors.

Again, I think the confusion is that 'a theory of morality' is not the same thing as 'a theory of outcomes.'
First of all, the curry was excellent, thank you!

I disagree that science cannot give us values. In fact, that really is all science does: numeric values for forces, probabilities for game theory, whatever. Okay, sure, I played with the double meaning of the word value there, and I know you mean it in another way. But the association of those two terms in language is very meaningful. Our values are really just hidden numerical weightings that guide our actions. Those weightings are informed by our society and culture, yes, but there is an UNDERLYING valuation system that is common to all humans. A scientific 'control' human, I propose, would have certain innate qualities. These are the objective fuel for moral development. Matching up the plasticity of this control human with the fulfillment of as many of its innate desires as possible, along with the future goals of humanity, would give us a map to a good culture and code of conduct.

I believe evidence for these underlying innate qualities is written all over our history, our behavioral sciences, and our humanities (literature, philosophy, etc.). Themes like feeling needed (loved) and feeling free repeat themselves in many different cultures. Sane, successful people often share certain similarities (someone caring for them, access to clean food and water, etc.), and insane ones often show similar patterns of abuse. Our methods of education come directly from better learning the way humans learn: the hierarchy of needs, the stages of psychosocial development, etc. have ALL informed our code of conduct! These are scientific ways of thinking about what we need as humans, and they have forever altered our attitudes towards children, at least until we find an even BETTER understanding. Our society is mapped onto this self-knowledge and informed by this knowledge in a very direct way.
 
arg-fallbackName="mknorman"/>
Ozymandyus said:
I disagree that science cannot give us values. In fact, that really is all science does: numeric values for forces, probabilities for game theory, whatever. Okay, sure, I played with the double meaning of the word value there, and I know you mean it in another way. But the association of those two terms in language is very meaningful. Our values are really just hidden numerical weightings that guide our actions. Those weightings are informed by our society and culture, yes, but there is an UNDERLYING valuation system that is common to all humans. A scientific 'control' human, I propose, would have certain innate qualities. These are the objective fuel for moral development. Matching up the plasticity of this control human with the fulfillment of as many of its innate desires as possible, along with the future goals of humanity, would give us a map to a good culture and code of conduct.
These two senses of 'value' are fundamentally different, and for one absolutely indisputable reason. Scientific values are for the whole universe. Human values are for each individual. The word 'value' in the philosophical sense is probably shorthand for 'valued by X as good.' For example, "mknorman values his survival as good," or "Ozymandyus gives the value '10 out of 10' to chicken curry in the flavor category." What is thus left doubly implicit in the philosophical use of the term 'value' is the agent of valuation. (Doubly so because 'good' is left implicit, and 'good for whom' after that.) Contrast this with physics, where we say, "F=MA." We do not mean that "F=MA for me," but rather "F=MA once and for all for the universe (except at relativistic speeds, and so on)." There may be ranges of validity for scientific truth, or there may not. But the range of validity for "I like curry" is always the individual who utters it.

This is a subtle point. It means that, in a very real sense, we can't even talk about a moral law in the sense that we talk about a physical law. The evidence for a physical law, and the 'values' derived, apply to the One Universe*. The phrase 'human values' is misleading in that it can't be about values common to humanity, because each valuer has his own copy, unique in that it refers to him or her self. Furthermore, there is no 'field' outside of these individuals for these values to operate in. Ozy and mk are not going to collide and find that Ozy's love of curry has been transferred to mk, for instance. Whatever we would infer from Ozy's and mk's values can only be in relation to Ozy and mk respectively, and not about their interaction. Our values don't interact. And, they can't even be wholly shared, because of the referent problem above. Even if Ozy and mk collide, and mk suddenly values curry, what has been 'transferred' is not Ozy's love of curry. If mk eats curry, will Ozy then enjoy it? There's even more confusion, because we say that, in a physical collision, some of object A's momentum is transferred to object B. In reality, this 'momentum' is a function of the field, the reference frame, and not either of the objects. Physical truths are about the relations between objects, whereas philosophical values are about the relation between a valuer and the thing valued.

So, it's still a type jump, because a moral law has to go from individual values to a collective morality. Science may be able to say, "Human beings have a tendency to desire equality." But what science can't say is, "When some human beings desire to rule, to be favored politically, and to have rights to injure others, this is immoral." Science can only report that, "When some become favored, the remainder resent it." It is sets of values--those belonging to the rulers and the ruled**, in this case--that inform their respective moralities. Those values are inherently subjective. The oppressed wish to be free. The oppressors wish the fruits of their tyranny. Science cannot talk about the morality of who should win in the struggle. It can only report the results of the struggle on each population. For example, how is science going to decide whether the tyrants should rule? If we appeal to the categorical imperative, that what is a law for me should be so only if it is a law for everyman, then that is outside of science. It is also outside of science to say, for example, that what provides the greatest benefit for the greatest number is the proper course.
Ozymandyus said:
I believe evidence for these underlying innate qualities is written all over our history, our behavioral sciences, and our humanities (literature, philosophy, etc.). Themes like feeling needed (loved) and feeling free repeat themselves in many different cultures. Sane, successful people often share certain similarities (someone caring for them, access to clean food and water, etc.), and insane ones often show similar patterns of abuse. Our methods of education come directly from better learning the way humans learn: the hierarchy of needs, the stages of psychosocial development, etc. have ALL informed our code of conduct! These are scientific ways of thinking about what we need as humans, and they have forever altered our attitudes towards children, at least until we find an even BETTER understanding. Our society is mapped onto this self-knowledge and informed by this knowledge in a very direct way.

Fair comment, but saying that some scientific discoveries have enabled a 'better morality' is not the same as saying that 'science provides values.' The insights are not moral in nature, they only inform morality. Bananas are not 'hunger' in nature, but they do satisfy hunger. Furthermore, saying that these qualities and desires are innate says nothing of whether or not they should be fulfilled. If there is a 'criminal mind' that has 'criminal needs,' are we prepared to say that it is moral for him/her to steal, rape, or swindle? If not, and we mark these people as aberrations, and say that it is right to punish them, that decision is not scientific in nature. Science may have discovered that we have a sense of justice, but it does not say that we 'should' be just. It can only say that many of us may feel that we should be just.

Let me ask a few succinct, illustrative questions:

Even supposing that science shows that we have a universal instinct to approve of justice, how can science make a moral claim that we are obligated to pursue justice?

Conversely, if science were to show that the vast majority of us in fact would be happier in an unjust world, how can science make a moral claim that we are obligated to neglect justice?

If science discovers two sets of individuals who have opposing values, how does science point the moral way to resolve the conflict? (Take the tyrants and the oppressed, above, as an example.)

If science discovers that we all have the same sets of values, but that those values are in conflict--everybody wants to be on top, for example--how can science give us a moral law about how to proceed?

Back to an earlier quote:
Ozymandyus said:
[W]eightings are informed by our society and culture, yes, but there is an UNDERLYING valuation system that is common to all humans. A scientific 'control' human, I propose, would have certain innate qualities. These are the objective fuel for moral development.

We can still stoke a subjective moral fire with this 'objective fuel', as hinted at in my most recent question. That is, even if the biological system that valuates is universal, the one thing it can't do is make you into me, or vice versa. What is also universal is individuality, and this is the bugaboo of attempts at a 'scientific' approach to morality.

In order to make a moral choice between the aims of separate persons with individual values, even if those values are identical in every way except for the referent of the 'value holder,' what is needed is an independent valuer. Since there is no independent valuer who can claim rights of arbitration--we're atheists, right?***--and, more specifically, since science is not that agent, there is no way that science can make a moral judgment about the outcome.

I have, I think, shown that there is no scientific basis for moral pronouncements. I think I've accidentally (!) shown that morality is an incoherent concept when applied collectively, because there is no privileged valuer present who can give an authoritative moral verdict where two interested individuals lock horns.

Look what you made me do!

*Let's not quibble over Universe vs. Multiverse. A law of the Multiverse would be about all the Universes that compose it.
**In each individual case with an individualized referent. For example, the 'shared values' of the rulers are shared insofar as each has a customized copy: "It is right for I, Bob|Larry|Brutus to be among the elect and so to lead," or "I, peasant Curly|Moe|Shep wish to be free."
***And, even if we aren't, what's the mechanism that would legitimize a creator's claim to moral relevance or authority?
 
arg-fallbackName="mknorman"/>
ebbixx said:
Or at least an invention that appears to be unique to humanity. Excellent argument.

Kind words. Thank you very much.
ebbixx said:
I have at least two issues with this, if it is what Ozy is arguing:

One, extinction happens sometimes regardless of the behavior of a species. In that perspective, our actions are largely meaningless, at least where survival is concerned.

Two, can we truly predict all actions that might or might not prove detrimental to long-term species survival? Or are we left trying to infer the nature and likely outcome of those actions?

In a very long-term sense, it may or may not turn out that intelligence, technology, tool making and human civilization led to a situation where we made our own long-term survival untenable, due to any of a number of ecological catastrophes, or our inability to find a fix in time, or other possibilities that may rely as much on chance as they do on intentional acts. I know of no calculus that necessarily and accurately predicts those effects. It's at least possible that our ingenuity at technological fixes will ensure our survival for a much longer period of time than anyone has imagined.

Unless we can predict outcomes to a very high degree of accuracy, I find it hard to say which of our actions and behaviors will prove beneficial or detrimental to long-term species survival.

Dude! I'm just a moral skeptic. You're a moral tragedian! That shit is dark!

/Backs away slowly from ebbixx, maintaining eye contact.
 
arg-fallbackName="WolfAU"/>
re mknorman: perhaps I missed your point, but to me you're largely arguing semantics. I define morality as an individual's 'moral code': a code of behaviour (or a standard) they believe acceptable, to which they hold themselves and/or others. This can include anything and everything that influences your behaviour.

Not shooting people because you don't want to go to jail is also largely a matter of semantics: in saying this, you acknowledge that jail is a punishment society as a whole believes you deserve for your actions (i.e. its moral code as a collective). Also, there are other forms of punishment which keep many people from doing things they would otherwise want to do, such as disapproval, social exile from friends/family, or harm to their dreams (i.e. something that would prevent me getting an ideal job, a nice girlfriend, etc.).

The reason we do things while big brother is watching is not particularly complex; the reason we don't do them when he isn't is still largely about holding yourself to some kind of standard (largely empathy, i.e. 'What would others think of my actions?', 'Does this action break the standard I hold others to?', etc.).

We often break the law when what the law says and what the masses say conflict (with the masses usually having more sway over us). However, if the law and public opinion both disagree with us, we usually conform where possible.
 
arg-fallbackName="ebbixx"/>
mknorman said:
Dude! I'm just a moral skeptic. You're a moral tragedian! That shit is dark!

/Backs away slowly from ebbixx, maintaining eye contact.

Interesting interpretation! I actually thought I'd cribbed a good deal of that from Stephen Jay Gould. ;)

And I'm not sure what it was you picked up on that necessarily touched on morals. My personal working hypothesis is that the universe is supremely indifferent to our existence, much less our success or failure as a species. I don't see moral failings of other species coming into play at least for their extinctions. And while it's tempting to moralize about our own role in the current cascade of extinctions, I'm not sure that really matters much either, in the grand scheme of things. This planet is far too tiny and isolated to have much of an impact on the universe, at least until we manage to build some much more impressive toys.

One thing I've found interesting in paying attention to scientific discoveries and the changes in "the known" from my childhood in the Sixties until now is just how much we have discovered that we are not as different or unique as animals as we once assumed ourselves to be. Corollary to that is the degree to which, after religious leaders, philosophers were among the last ones to acknowledge that shift in known (or extremely probable) facts and findings.

In brief, I'm just not sure morals are at all as valuable and meaningful as many of us might wish they were, "us" encompassing the full range of theists, deists, agnostics and atheists. It's very hard (emotionally) to give up on the notion that we are somehow special. But I'm not convinced that necessarily leads to a dark place as much as to a funny place.
 
arg-fallbackName="Ozymandyus"/>
mknorman said:
These two senses of 'value' are fundamentally different, and for one absolutely indisputable reason. Scientific values are for the whole universe. Human values are for each individual. The word 'value' in the philosophical sense is probably shorthand for 'valued by X as good.' For example, "mknorman values his survival as good," or "Ozymandyus gives the value '10 out of 10' to chicken curry in the flavor category." What is thus left doubly implicit in the philosophical use of the term 'value' is the agent of valuation. (Doubly so because 'good' is left implicit, and 'good for whom' after that.) Contrast this with physics, where we say, "F=MA." We do not mean that "F=MA for me," but rather "F=MA once and for all for the universe (except at relativistic speeds, and so on)." There may be ranges of validity for scientific truth, or there may not. But the range of validity for "I like curry" is always the individual who utters it.
I in no way think there is some kind of objective valuation of curry that science will find. Likewise, I think many of our current values are merely sauce... flavoring... neither I nor a scientific investigation of morals has any interest in telling people what flavors are best or what kind of sexual position to use, those are just aspects of the plasticity of human nature. Variety may indeed be the spice of life, as they say, and no code of conduct should completely take that variety away. Mmmm, delicious metaphors.

Anyway, the difference between what I will call a 'Taste' and a 'Value' is one of more than degree, just as you think the difference between scientific value and moral value is one of more than degree. I happen to think you are wrong on this point, but it's a problem with the definitions, not with your logic. Which is what I was trying to point out in the first place by drawing attention to the word value. You misrepresent the whole concept by bringing it down to the level of taste, when I am actually speaking about universal values that can be looked at scientifically.
The evidence for a physical law, and the 'values' derived, apply to the One Universe*. The phrase 'human values' is misleading in that it can't be about values common to humanity, because each valuer has his own copy, unique in that it refers to him or her self. Furthermore, there is no 'field' outside of these individuals for these values to operate in.
It IS about values common to humanity. Each individual has a copy of what he BELIEVES these values of humanity to be, but that has no bearing on ACTUAL universal human values, just as Ben Stein's scientific beliefs have no bearing on actual scientific fact. Each individual's beliefs can be influenced by his upbringing and culture, just as scientific beliefs can be. There is a 'field' outside of these individuals, or more precisely, inside: each and every one of them is human and has objectively the same tools for looking at and considering the universe and for interacting with and being affected by the universe. This commonality is no different than the commonalities that allow us to predict how forces affect objects.
Furthermore, saying that these qualities and desires are innate says nothing of whether or not they should be fulfilled. If there is a 'criminal mind' that has 'criminal needs,' are we prepared to say that it is moral for him/her to steal, rape, or swindle? If not, and we mark these people as aberrations, and say that it is right to punish them, that decision is not scientific in nature. Science may have discovered that we have a sense of justice, but it does not say that we 'should' be just. It can only say that many of us may feel that we should be just.
But it is a scientific decision. We look at the situation pragmatically and say that even if this is a small part of human nature (which it may indeed not be; it may be an effect of that person's circumstances, as science has determined for criminality across a broad range of cases), we should marginalize those human desires that conflict with other parts of human nature. This 'should' is a probability that can be dealt with scientifically.

How do we make this decision? What value do we assign to survival vs. comfort, etc.? That is something that is being tested right now in our society. We see people suing their doctors for allowing them to live with cerebral palsy; we look at and examine those cases closely, and that informs our decisions in the future. That information is valuable, and though we will not get a universal answer from a single case (just as we would not get a universal scientific law from a single case), we can draw inferences and make hypotheses that can give us better codes of conduct.

I'm just going to directly answer your other questions...
mknorman said:
Even supposing that science shows that we have a universal instinct to approve of justice, how can science make a moral claim that we are obligated to pursue justice?
It is a combination of our investigation into human nature, which yields results, and our application of the pragmatist code of conduct we were trying to work under in this discussion, which is a scientific way to fulfill those universal instincts. Our understanding of human behavior informs our goals, and our knowledge of consequences alters our code of conduct. Furthermore, all of society acts like a scientific experiment, with parts of it being constantly revised to adjust to perceived flaws: a progression in which each successive movement emerges as a solution to the contradictions with human nature inherent in the preceding one.
mknorman said:
Conversely, if science were to show that the vast majority of us in fact would be happier in an unjust world, how can science make a moral claim that we are obligated to neglect justice?

It would do the exact same thing if this were the truth! That is, it would look at the desire, see what consequences would best bring that desire into equilibrium with other desires (survival, etc.), and then pack that into a code of conduct. Of course, this is not the truth, and we both know it: every revolution has been fueled by resentment of injustices. This is one of the objective facts that informs us that humans as a group DO crave justice.
mknorman said:
If science discovers two sets of individuals who have opposing values, how does science point the moral way to resolve the conflict? (Take the tyrants and the oppressed, above, as an example.)

Science would look at each of their valuation systems, and what informed them, in an objective way. It would examine their arguments in detail, look for logical flaws, look through each of their histories, and arrive at what objective human traits are at work. In this case, I believe humans have a near-universal need to feel in control of their situation. This is both the cause of tyrants and the cause of the discontent of oppressed people. However, our pragmatic analysis of this situation dictates that we find an equilibrium where all people can feel in some control of their situation, which is exactly the society we have today.
mknorman said:
If science discovers that we all have the same sets of values, but that those values are in conflict--everybody wants to be on top, for example--how can science give us a moral law about how to proceed?

Well, I believe I offered a sort of solution above, but let's take values that could Truly be at odds with one another -- i.e. it seems humans have a near-universal desire to be sedentary and free of stress, yet we are most productive and innovative (qualities which are themselves highly valued) when we are in conflict. Well, perhaps our current society is a great working answer to that. Indeed, we find that many of our conflicting qualities do have answers that can please everyone to some degree. The code of conduct does not have to be some rigid thing that cannot let multiple valuations coexist within it. Furthermore, the code of conduct Must be amendable, because some of these valuations will always be made clearer as we gain a better understanding of human behavior.

Anyway, I hope that mostly answers those questions for you.
 