A follow-up to yesterday's (?) post.

I had an idea about how one could recognize the normal inference while avoiding the other one, if one so wants. I don't particularly feel that this is the right way to go, but suppose someone wants to do it.

The idea is based on similarity. The inference is about two sets of stuff, F and G. The idea is that some sets are somehow more 'natural' or important than others, and that this is based on similarity between their members. Clearly, the members of the set of "swans" have more in common than do the members of the set of "white things". Thus, one can infer that all swans are white, but not that all white things are swans.

Problems with this approach? Most likely. For instance, try to pick two sets of stuff that are both 'natural', or neither. Say, the set of women and the set of people with red hair. Do the members of the set "women" have more in common with each other than the members of the set "people with red hair"? Seems difficult to answer.

So, perhaps it isn't a good idea to require that the set be the most 'natural'. But there are some other possibilities. For instance, one could go with the interesting idea that the more the members of a set have in common, the stronger the inference with that set as the F. The idea of varying strengths of inference fits very well with it being an inductive inference to begin with. In the swans and white things scenario, the inference to "all swans are white" is much stronger than the inference to "all white things are swans". But perhaps in the women and red-haired people scenario, the inferences are of about equal strength.
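A rough sketch of how one might quantify this, assuming each member is represented as a set of its attributes and that 'having things in common' is measured by average pairwise Jaccard similarity (both choices are my own assumptions for illustration, as are the toy attribute sets):

```python
from itertools import combinations

def jaccard(a, b):
    # Jaccard similarity: shared attributes / all attributes
    return len(a & b) / len(a | b)

def naturalness(members):
    # Average pairwise similarity of a set's members; higher means the
    # members have more in common, so (on this proposal) an inference
    # with this set as the F is stronger.
    pairs = list(combinations(members, 2))
    return sum(jaccard(a, b) for a, b in pairs) / len(pairs)

# Made-up attribute sets, purely for illustration.
swans = [
    {"bird", "white", "long-neck", "waterfowl"},
    {"bird", "white", "long-neck", "feathered"},
    {"bird", "white", "waterfowl", "feathered"},
]
white_things = [
    {"white", "bird", "feathered"},
    {"white", "paper", "flat"},
    {"white", "snow", "cold"},
]

print(naturalness(swans))        # higher: the members share most attributes
print(naturalness(white_things)) # lower: the members share only "white"
```

On these toy numbers the swans score 0.6 and the white things 0.2, matching the intuition that "all swans are white" should come out as the stronger inference.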

There is also a theoretical niceness to this solution. The fact that the members of a particular set have more in common implies that if one picks an attribute of those things at random and generalizes from the members, then it is more likely that the generalization holds true. It even seems trivial.
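That claim can be checked with a small simulation (my own construction, again with made-up attribute sets): repeatedly pick a member and one of its attributes at random, and count how often that attribute holds of every member of the set.

```python
import random

random.seed(0)  # for reproducibility

def generalization_success_rate(members, trials=10_000):
    # Pick a random member and one of its attributes at random, then
    # count how often that attribute generalizes to *all* members.
    hits = 0
    for _ in range(trials):
        source = random.choice(members)
        attribute = random.choice(sorted(source))
        if all(attribute in m for m in members):
            hits += 1
    return hits / trials

# Made-up attribute sets, purely for illustration.
swans = [
    {"bird", "white", "long-neck", "waterfowl"},
    {"bird", "white", "long-neck", "feathered"},
    {"bird", "white", "waterfowl", "feathered"},
]
white_things = [
    {"white", "bird", "feathered"},
    {"white", "paper", "flat"},
    {"white", "snow", "cold"},
]

print(generalization_success_rate(swans))        # ~0.5: "bird" and "white" always hold
print(generalization_success_rate(white_things)) # ~0.33: only "white" always holds
```

The more 'natural' set has a higher success rate for random generalizations, which is exactly the triviality the paragraph above points at.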

Also, a friend of mine mentioned that this problem is quite similar to the Raven Paradox, and I agree.

The paradox

Hempel describes the paradox in terms of the hypothesis:

(1) All ravens are black.

In strict logical terms, via contraposition, this statement is equivalent to:

(2) Everything that is not black is not a raven.

It should be clear that in all circumstances where (2) is true, (1) is also true; and likewise, in all circumstances where (2) is false (i.e. if a world is imagined in which something that was not black, yet was a raven, existed), (1) is also false. This establishes logical equivalence.
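The equivalence can also be verified mechanically by enumerating every truth assignment; a trivial sketch:

```python
from itertools import product

def implies(p, q):
    # Material implication: p -> q is false only when p is true and q is false.
    return (not p) or q

# (1) raven -> black  vs.  (2) not black -> not raven
for raven, black in product([True, False], repeat=2):
    assert implies(raven, black) == implies(not black, not raven)

print("(1) and (2) agree on every truth assignment")
```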

Given a general statement such as all ravens are black, a form of the same statement that refers to a specific observable instance of the general class would typically be considered to constitute evidence for that general statement. For example,

(3) Nevermore, my pet raven, is black.

is evidence supporting the hypothesis that all ravens are black.

The paradox arises when this same process is applied to statement (2). On sighting a green apple, one can observe:

(4) This green (and thus not black) thing is an apple (and thus not a raven).

By the same reasoning, this statement is evidence that (2) everything that is not black is not a raven. But since (as above) this statement is logically equivalent to (1) all ravens are black, it follows that the sight of a green apple is evidence supporting the notion that all ravens are black. This conclusion seems paradoxical, because it implies that information has been gained about ravens by looking at an apple.

Both of these are odd things about inductive inferences. I rather dislike using the word "paradox" in this loose sense, meaning something like a puzzle.
