## Posts tagged ‘beliefs’

### Beliefs as secondary truth bearers in a pluralistic proposition theory

It is common to speak of true beliefs. Think, for example, of the JTB analysis of knowledge: JTB, that is, justified true belief. One could see “true belief” as shorthand for “a belief in a true proposition”, and this seems to be what is meant. The theory is commonly called the JTB analysis of knowledge, but when the three necessary and sufficient conditions are written down, one does not write “S has a true belief” but “p is true”.

But perhaps it is a good idea to allow some or all beliefs to be true or false while still maintaining that propositions are the primary truth bearers. A reason to resist this is again parsimony, similar to the case of allowing sentences to be true as well. Suppose, nevertheless, that it is a good idea.

### What are the truth-conditions for beliefs?

First we may note that there seems to be no problem with ambiguity, as there is with sentences as truth bearers. Perhaps there are ambiguous beliefs; we will suppose that there are none. We may then introduce these simple truth-conditions for beliefs:

A belief is true iff the proposition believed in is true.

A belief is false iff the proposition believed in is false.
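The two conditions can be sketched in a few lines of code. This is a purely illustrative model, with class names of my own choosing, not anything from the post: propositions carry the primary truth value, and a belief merely inherits it.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Proposition:
    content: str
    true: bool  # the proposition's primary truth value

@dataclass(frozen=True)
class Belief:
    proposition: Proposition

    def is_true(self) -> bool:
        # A belief is true iff the proposition believed in is true.
        return self.proposition.true

    def is_false(self) -> bool:
        # A belief is false iff the proposition believed in is false.
        return not self.proposition.true

earth_round = Proposition("The Earth is round", True)
assert Belief(earth_round).is_true()
```

Note that on this sketch a belief never has a truth value of its own; it is derived, which matches the idea of beliefs as secondary truth bearers.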

### Re: Beliefs and probabilities 2

Opening post.

More objections and thoughts.

### The infinite objection

I have discovered another objection to the thesis: infinite regress. The most defensible version of the thesis is

(T4) S believes that p materially implies that S believes that the probability of p is >0.5 given the evidence available to S.

Suppose Sarah believes p. That, together with (T4), implies that Sarah believes another, different proposition. That, together with (T4), implies that she believes yet another, different proposition, ad infinitum. So (T4), together with any actual belief, implies that the person believes an infinite number of propositions. Is that really true? I think not.

Moreover, each person believes that each of his believed propositions has a probability of >0.5 of being true, and this applies also to the probability-of-propositions propositions. So it follows in the above example that Sarah believes that the probability of “the probability of p is >0.5 given the evidence available to Sarah” is >0.5 given the evidence available to Sarah, and so on. Those are rather weird beliefs to be holding an infinite number of.
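The regress can be made vivid with a short sketch. This is illustrative only; `t4_step` is a name I have made up. Iterating the (T4) operation on a single believed proposition keeps producing new, syntactically distinct propositions:

```python
def t4_step(p: str, subject: str = "Sarah") -> str:
    # Given a believed proposition p, (T4) yields a further believed proposition.
    return (f"the probability of ({p}) is >0.5 "
            f"given the evidence available to {subject}")

p = "it will rain tomorrow"
beliefs = [p]
for _ in range(3):  # ad infinitum in principle; three steps suffice to show it
    beliefs.append(t4_step(beliefs[-1]))

# Each application yields a new, distinct proposition: no fixed point is reached.
assert len(set(beliefs)) == len(beliefs)
```

Since each step strictly lengthens the proposition, the process never cycles back, which is exactly why (T4) plus one actual belief appears to yield infinitely many.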

### The lack of evidence and the web of belief

Supposing that I cannot find any knock-down argument against the thesis, it does not follow that it is a good idea to believe the thesis. (That would be an argument from ignorance.) There are infinitely many propositions (and hence theses) against which no good argument can be found, and few if any of these are a good idea to believe. I suppose that one can fit one’s other beliefs to this thesis so that there is no inconsistency or blatant incoherence: one could hold some odd beliefs about how abduction (i.e. inference to the best explanation) works, together with some account of what beliefs are and how many of them we hold. But is there any actual reason to make this change to one’s web of belief if the result is not better in any way than the traditionally held beliefs about these things? No. I advocate instead conservatism: do not change your mind merely upon discovering that some large change in your web of belief would result in a consistent and coherent belief set that is just as good as the one you have.

### Re: Beliefs and probabilities 1

Opening post.

My response:

I think some clarification is in order. The thesis in question is this:

(T1) If S believes that p, then S believes that the probability of p is >0.5.

It seems reasonable in many contexts. E.g. I believe that the Earth is round, and I believe that the probability that the Earth is round is >0.5. One might ponder what kind of probability this is: prior probability? Clearly not. So we ought to add a clause specifying which evidence the probability is, intuitively, calculated on:

(T2) If S believes that p, then S believes that the probability of p is >0.5 given the evidence available to S.

This amendment seems to fix the unclarity about evidence and probabilities.

Now, as this is a categorical proposition, we can look for counter-examples. But first some further clarification is in order, for the thesis is still ambiguous: what meaning is to be ascribed to the “if…then…” part? Is it a material or a logical implication? We can disambiguate like this:

(T3) S believes that p logically implies that S believes that the probability of p is >0.5 given the evidence available to S.

(T4) S believes that p materially implies that S believes that the probability of p is >0.5 given the evidence available to S.

Now, obviously (T3) is much stronger than (T4), and so it is more probably false. The difference is that counter-examples to (T3) only need to be logically possible to work, whereas counter-examples to (T4) need to be actual.1 So let’s first look for counter-examples to the stronger claim, noting that if we can find one that works against (T4), then it also works against (T3), since actuality implies possibility.
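The asymmetry between (T3) and (T4) can be pictured with a toy model of my own devising. Treat each truth assignment to the two belief-claims as a possible world, with one of them actual: a material implication like (T4) needs to hold only at the actual world, while a logical implication like (T3) must hold at every possible world, so a single merely possible counter-example refutes it.

```python
from itertools import product

# Each world assigns truth values to two claims:
#   (S believes that p, S believes that the probability of p is >0.5 given S's evidence)
worlds = list(product([False, True], repeat=2))
actual = (True, True)  # in the actual world both claims happen to hold

def material_ok(world):
    # "believes p -> believes prob > 0.5" as a material conditional at one world
    believes_p, believes_prob = world
    return (not believes_p) or believes_prob

t4_holds = material_ok(actual)                   # (T4): only the actual world matters
t3_holds = all(material_ok(w) for w in worlds)   # (T3): every possible world matters

# The world (True, False) is a merely possible counter-example:
# it refutes (T3) while leaving (T4) untouched.
assert t4_holds and not t3_holds
```

This is only a picture of the modal asymmetry, of course, not an argument that such a world is genuinely possible; that is what the voluntarism discussion below is for.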

### Belief voluntarism

One might find a counter-example in the vicinity of Pascal’s Wager. Suppose that someone manages to believe some proposition, p, because he thinks believing it is a good idea on account of the consequences of believing p, irrespective of what the probability of p is given his evidence. The theory that one can start believing something purely by choice is called voluntarism about belief. It seems false when we think about it. Suppose, for instance, that you want to believe that you don’t exist, or that the Earth is flat. Can you? No. The voluntarist may respond that you just didn’t try hard enough, but that seems an ad hoc reply. Suppose, then, that it is false that if one tries to believe something, one will succeed. Then voluntarism is no threat to (T4), that is, the weak version of the thesis.

But is voluntarism logically impossible? It seems not, for I can find no contradiction in a possible world where it is true. Perhaps someone else can. Suppose that it is indeed logically possible. Then it follows that the strong thesis is false, that is, (T3) is false, because (T3) implies that it is impossible that (S believes that p and it is not the case that S believes that the probability of p is >0.5 given his evidence). I ask you to find a contradiction in this counter-example without begging the question.

Supposing the above objection to (T3) is successful, we should ask ourselves: can some version of the stronger thesis perhaps still be saved? Let’s consider another version of it:

(T5) S believes that p is defined as S believes that the probability of p is >0.5 given the evidence available to S.

Now it is a definition. It’s not clear how it avoids the previous objection, but let’s suppose that it somehow does. Another objection can be made against (T5): it is circular. Note that “believes” is used to define the very notion it is trying to define. I take this circularity to be vicious. Given this objection, I conclude that the strong version of the thesis has been adequately refuted.

### Inference to the best explanation

Now let’s return to the weak version, (T4). Is it more defensible? Perhaps. However, consider a case where we know of only two theories of some phenomenon. Suppose some person thinks about the phenomenon and these two theories. He happens to conclude, for some reason, that theory one is better than theory two, and on this basis he infers to the best explanation without having any belief about the probability of the theory being true given his evidence. He may even believe that the probability of the theory is <0.5. Are there any actual cases of this description? I would say yes. We sometimes infer to the best explanation in cases where we cannot find another theory that explains the data, and still we accept the theory without believing it to be more probable than 0.5.

If there are counter-examples of the above type, then the weak thesis is also false, and I cannot find any amendment to the thesis that would save it.

1 Or physically possible, perhaps. Physically possible and actual are logically equivalent given a regularity theory of the laws of physics.

### Worldviews and mistaken beliefs, a paradox

Here’s a little paradox that I’ve come across while thinking. It’s about worldviews and knowing that at least one thing I currently believe to be true is actually false.

### The argument

1. For all x, if x is a belief in my worldview, then I hold that belief. [Premise]

2. If, for all x, if x is a belief in my worldview, then I hold that belief, then I don’t believe that my worldview contains a mistaken belief. [Premise]

3. Thus, I don’t believe that my worldview contains a mistaken belief. [From 1, 2, MP]

4. There is at least one proposition such that I believe it and it is false. [Premise]

5. If there is at least one proposition such that I believe it and it is false, then my worldview contains a mistaken belief. [Premise]

6. Thus, my worldview contains a mistaken belief. [From 4, 5, MP]

7. Thus, my worldview contains a mistaken belief and I don’t believe that my worldview contains a mistaken belief. [3, 6, Conj.][1]

This conclusion seems paradoxical to me. It’s not a contradiction as it stands; it’s just fishy. If, as a result of this argument, I form the belief that my worldview contains a mistaken belief, then I hold two contradictory beliefs. It seems that I have to reject a premise, but I don’t find any of them weak. In fact, three of them are logical tautologies, and the remaining one is supported by induction. I’ll discuss the premises below.
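The footnote’s claim that the argument is valid in propositional logic can be checked mechanically with a truth table. Abbreviating with letters of my own choosing: A for premise 1, B for “I don’t believe that my worldview contains a mistaken belief”, C for premise 4, and D for “my worldview contains a mistaken belief”, the premises are A, A→B, C, C→D, and the conclusion is D∧B.

```python
from itertools import product

def implies(x: bool, y: bool) -> bool:
    # material conditional: false only when x is true and y is false
    return (not x) or y

# Validity: no assignment makes all four premises true and the conclusion false.
valid = all(
    (d and b)                                        # conclusion, step 7
    for a, b, c, d in product([False, True], repeat=4)
    if a and implies(a, b) and c and implies(c, d)   # premises 1, 2, 4, 5
)
assert valid
```

So the discomfort cannot be dissolved by faulting the inference; as the post says, only a premise can be rejected.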

### Premise one

This one may seem a little unnecessary, but I had no luck formulating the argument without it. I think it is a logical tautology, that is, true by definition. Let me first define worldview as I use it here.

Worldview =df The (complete) set of beliefs a person has.

Then, given the above definition, it is clear that if a belief is part of a person’s worldview, then that person holds that belief.

When I wrote this article and when I was thinking of the paradox, I noticed that it is easy to make a category error and call a worldview false. But that doesn’t make sense. ‘True’ and ‘false’ are meaningful in relation to propositions and not to beliefs. We may instead say that a worldview is mistaken which just means that the worldview contains at least one mistaken belief. A mistaken belief is a belief in a false proposition.

Common language may be broader in the use of ‘true’ and ‘false’ but here I will restrict myself for the sake of clarity of thought.

### Premise two

This one is similar to the first, as it seems unnecessary and is a logical tautology.

One could argue for it with a reductio. Assume that I believe that my worldview contains a mistaken belief. If I believe that, then I don’t hold the belief that is mistaken. (Since if I did, I would have two contrary beliefs.) If I don’t hold that belief, then it isn’t part of my worldview. But from the assumption we can deduce that it is part of my worldview. Contradiction; thus the assumption is false.

### Premise three

This one is not a tautology, for a change, but I think it is uncontroversial. I have a large number of beliefs, as all grown-ups do, and in the past it has repeatedly happened that a belief I held turned out to be false. Similar behavior has been observed in other humans. By induction we have good reason to believe that some of my current beliefs are false. The trouble is that I don’t know which of them they are!

### Premise four

This is another tautology. By definition my worldview is the set of beliefs that I have, and if I hold a mistaken belief then it follows that my worldview contains at least one mistaken belief.

### Solution?

I don’t know. I haven’t found one, if there is one.

[1] The argument is valid in propositional logic but some of the propositions are formulated in predicate logic for extra clarity.