Tim Hendrix wrote:
    Well, I am aware that Carrier writes that. Here is the example I had in mind: Carrier defines h as the hypothesis that Jesus existed (OHJ, p. 30). Then ~h would (formally) be the logical negation of h; however, in OHJ "~h" is defined as the list of propositions in OHJ, p. 53. From a formal point of view, that is false, as the list of propositions is not logically equivalent to the negation of h. Carrier is aware of this abuse of notation and mentions it in OHJ.

JohanRonnblom wrote:
    This is getting repetitive. I already showed here that this is not false, because Carrier does not need mythicism [according to Carrier's hypothesis] to be logically equivalent to the negation of historicity [according to Carrier's hypothesis]. He only needs it to be numerically equivalent. He has clearly stated the assumptions he makes that make this true. So there is no formal error. You responded to me saying you agreed. And now you are back claiming he made a formal error.

I am a bit lost for words here. An example of a formal error is when you use a mathematical symbol or rule differently from its definition. For instance, if I say that 2 + 2 = 5, that is a formal error, because that is not how "+" is defined for integers.
The negation of a proposition means something particular in Boolean logic. When Carrier writes "~h" but defines it as something other than the negation of h, that is a formal error according to my above description of what a formal error is. You can claim something is not a formal error if, in subsequent computations, it has no effect. For instance, you could say that 2 + 2 = 5 as part of some computation where it would not matter, for some goal or pragmatic consideration, whether we used 4 or 5 --- however, you would be completely alone in that definition of what it means to be formally correct.
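The formal point can be sketched numerically. All numbers below are made up for illustration: the true negation ~h always satisfies P(h) + P(~h) = 1, but if "~h" is silently replaced by a specific list of propositions m, probability mass can go unaccounted for.

```python
# Hedged sketch with made-up numbers: "~h" must be the logical negation
# of h, so P(h) + P(~h) = 1 by definition. If "~h" is instead a specific
# list of propositions m, some probability mass may be lost.
p_h = 0.5                # illustrative prior that h ("Jesus existed") is true
p_not_h = 1.0 - p_h      # the true negation: P(h) + P(~h) = 1 always holds

p_m_given_not_h = 0.8    # assumption (made up): the listed propositions m
                         # cover only 80% of the ways h could be false
p_m = p_not_h * p_m_given_not_h

assert p_h + p_not_h == 1.0   # holds by the definition of negation
assert p_h + p_m < 1.0        # m does not exhaust ~h: the formal gap
print(f"unaccounted probability mass: {1.0 - p_h - p_m:.2f}")
```

Whether the gap matters numerically depends on further assumptions, which is exactly why the substitution has to be flagged rather than treated as formally correct.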
As a further point, I suggest you are affected by confirmation bias if you are seriously arguing that it is formally correct to re-define the rules of Boolean logic so as to maintain that OHJ is free of formal inaccuracies.
Tim Hendrix wrote:
    The use of a particular theory of historicity therefore (plausibly) affects all terms in the computation, as I think it is self-evidently true that elements of the 5-point hypothesis make some of the evidence easier to explain. For instance, it is easier to explain the gospels (stories of a man on earth) if you assume that early Christian communities believed in such a man on earth than if you do not make such an assumption. The use of a particular hypothesis (rather than a specific one) must be carefully accounted for when the prior is estimated, and therefore any issues with the prior potentially bias the entire computation.

JohanRonnblom wrote:
    Bias the computation? How? Yes, obviously if your hypothesis is "Some people believed that Jesus was a man", that explains stories of a man called Jesus better than if your hypothesis is "Bread is a planet made of jellyfish".

I have explained how it biases the computation several times, here and elsewhere. Please notice you are again changing my argument into something ridiculous: I claim (or rather, propose) that the assumption "Later Christian communities came to believe (or teach) Jesus was a historical person" better explains stories about a historical Jesus than the bare assumption that "originally Jesus existed". You can disagree with that suggestion, and I am rather convinced I can't change your mind one bit; however, I am still free to point out the consequences of such an assumption being true.
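The point about the prior can be made concrete with a small, entirely illustrative calculation (none of these numbers are Carrier's): an elaborated hypothesis explains the evidence better, but must pay for its extra assumptions in the prior, and mixing the two inflates the result.

```python
# Illustrative numbers only (not Carrier's): an elaborated hypothesis
# (the bare claim AND extra assumptions, like elements of the 5-point
# theory) explains the evidence better, but has a lower prior.
p_bare = 0.5                 # made-up prior of the bare claim alone
p_elab = 0.5 * 0.5           # prior of the bare claim AND extra assumptions
lik_elab = 0.4               # made-up likelihood of the gospels given elaborated h
p_alt, lik_alt = 0.5, 0.1    # a made-up rival hypothesis for normalization

def posterior(prior, lik):
    return prior * lik / (prior * lik + p_alt * lik_alt)

mismatched = posterior(p_bare, lik_elab)  # bare prior + elaborated likelihood
consistent = posterior(p_elab, lik_elab)  # matching prior and likelihood
assert mismatched > consistent            # the mismatch inflates the posterior
print(round(mismatched, 3), round(consistent, 3))
```

The sketch shows only the structural danger: if the likelihood terms are estimated under the richer hypothesis while the prior is estimated for the bare one, every downstream term is biased in the same direction.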
Tim Hendrix wrote:
    I am happy you bring up a concrete example we can discuss. The problem with your example is that you are not using Carrier's estimates of the probabilities, but rather consider another, unrelated, example. If we use the actual computation Carrier uses, and your way of defining "bias" as additive, then it does invert the conclusion. I think this graph is correct, but be aware I have not checked it very well:

JohanRonnblom wrote:
    You could at least check it well enough to see that neither Bob nor Sue appears in Carrier's estimates of the probabilities or in his computation. I'm using Carrier's formula in the same way he uses it. If I use Carrier's numbers, of course, I will get Carrier's results. We can't use his numbers because we do not know what the correct estimates of the probabilities are. So I use a thought experiment of a hard case where we know the correct estimates, which allows us to see the effect of error and bias.

What do you mean that Bob or Sue does not appear in his estimates?
You are using a formula structured like Carrier's, but with completely different numbers. If you want to examine the effect of systematic bias on Carrier's computations, then at the very least the case of no bias has to agree with Carrier's numbers!
Your example is like studying the effect of having unusually heavy cars pass over a bridge by starting out assuming each car weighs 5 kg and concluding that a 5% increase in the cars' weights has no effect -- if you allow yourself to replace Carrier's numbers with something completely different to prove a point about small changes in Carrier's numbers then, well, I don't know what to say.
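The methodological requirement is simple to state in code: in a sensitivity analysis, the zero-perturbation case must reproduce the original computation. A minimal sketch with placeholder numbers (these are NOT Carrier's published values):

```python
# Sketch of an additive-bias sensitivity analysis. The ratios below are
# placeholders, NOT Carrier's published values.
prior_odds = 0.5                      # placeholder prior odds for historicity
likelihood_ratios = [0.5, 2.0, 0.8]   # placeholder P(e_i|h)/P(e_i|~h) terms

def posterior_odds(bias=0.0):
    # Additive bias: shift every estimated ratio by the same systematic amount.
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr + bias
    return odds

# Requirement: zero bias must reproduce the unperturbed computation.
assert posterior_odds(0.0) == prior_odds * 0.5 * 2.0 * 0.8
# Because the terms multiply, a modest systematic bias can flip the verdict
# from odds below 1 (favoring ~h) to odds above 1 (favoring h):
print(posterior_odds(0.0), posterior_odds(0.3))
```

Only once the baseline matches the computation under study does perturbing it tell you anything about that computation's robustness.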
JohanRonnblom wrote:
    But I think I can see what you're doing: you're simply assuming that every argument Carrier uses is always wrong, therefore the more arguments that are brought into the discussion, the more wrong Carrier is, and the more certain we can be that the opposite of what Carrier argues must be correct.

You claim not to have read what I am actually doing, and then you proceed to argue from your strawman. I agree: if my claims were anything like that, they would have no validity; however, I have never said any such thing. Studying bias simply means studying what happens, in terms of the numerical stability of the computation, if we are systematically too optimistic or too pessimistic.
JohanRonnblom wrote:
    In this example they are guessing that the probability that a newborn baby is female is described by a binomial model. This is a good enough guess for most purposes, but it is not precisely true. In reality, it is much more complicated than that. Indeed, it is in the modelling that the guesswork usually happens. It's no different for Carrier.

I do not claim that statistics (like all science) is not based on assumptions. I claim those assumptions are of a very different nature. The example should make that evident: rather than guessing the terms in the likelihood, the model introduces a parameter that describes the probability of the two outcomes being modeled; i.e., for the likelihood there is no guesswork aside from the initial assumption that such a parameter exists and an exchangeability assumption. That is, by its very nature, different from what Carrier is doing. If you wish to say two examples of applied math make assumptions of the same validity just because they both make assumptions, I don't think you understand what applied math is.
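The contrast can be illustrated with the classic birth-sex example itself. The counts below are the often-cited Paris birth figures from Laplace; the uniform prior is an explicit, checkable assumption. Once the binomial model is adopted, the parameter is pinned down by data, not by guessing likelihood terms.

```python
# Sketch: in a binomial model the only unknown is one parameter theta
# (here, the probability that a newborn is female); it is estimated from
# data, not guessed. Counts are the often-cited Paris figures from Laplace.
girls, boys = 241_945, 251_527

# Uniform Beta(1,1) prior (an explicit assumption) -> Beta posterior by
# conjugacy: Beta(1 + girls, 1 + boys).
a, b = 1 + girls, 1 + boys
post_mean = a / (a + b)                               # posterior mean of theta
post_sd = (a * b / ((a + b) ** 2 * (a + b + 1))) ** 0.5

print(f"theta ~ {post_mean:.4f} +/- {post_sd:.4f}")   # tightly pinned by data
```

The posterior standard deviation is under a tenth of a percent: the model assumption can be wrong, but it is a single testable assumption, not a chain of subjective likelihood estimates.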
Tim Hendrix wrote:
    On to Carrier. Carrier claims that the highest and lowest estimates of the probability of the Gospels agree (i.e. the ratio is 1 in both cases). That is what I don't think is fair.

JohanRonnblom wrote:
    Then make an argument for that. Someone might say that kittens are a valid argument. It is clearly not Carrier's job to make everyone else's argument for them. He is simply stating which probabilities he personally finds to be within reason.

How about the person who makes the rather remarkable claim that the upper and lower bounds exactly agree provides the argument? I did, by the way, provide reasons, namely that people such as you, or scholars like Ehrman, assign a wider range to these values. This is again like looking at a tree and saying: I think the ratio of the tree's weight to its height is 0.1. And the upper limit of that value is 0.1, and the lower limit is 0.1. The person who makes such a claim surely bears the burden of proof, even if I cannot say exactly what those limits are.
Tim Hendrix wrote:
    I have at all times recognized that these statements are about ratios; simply see my very own quote above where I state exactly that. What Carrier assumes is that the ratio P(Gospels|h) / P(Gospels|~h) = 1 in both the optimistic and pessimistic scenario. Mathematically, that is equal to the assumption P(Gospels|h) = P(Gospels|~h) in both the optimistic and pessimistic scenario.

JohanRonnblom wrote:
    What this means is simply that Carrier has not found any argument that he believes holds any water for why the Gospels would either prove or disprove historicity. Now, if you think there is any argument that he has either overlooked, or that he is treating wrongly, then it is very easy for you to put in some different numbers, and Carrier is inviting you to do exactly that. But, really, you need to bring some argument, and it had better be one that Carrier hasn't already treated (or a rebuttal to his counter-arguments, etc).

See my above answer: your assertion here is equivalent to saying that if you can provide no arguments for why the tree's height/weight ratio should be above 0.1, or below 0.1, I am justified in claiming the upper and lower limits coincide. Meanwhile, experts believe that ratio takes a wide range of values, as do you yourself. You are simply shifting the burden of proof onto me to prove Carrier's subjective estimate wrong, even while you yourself say Carrier's subjective estimate is likely wrong(!).
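Why the coinciding bounds matter can be sketched directly (all interval endpoints below are made up): if a term's optimistic and pessimistic estimates are both fixed at exactly 1, that term contributes no width to the final interval, no matter how much experts actually disagree about it.

```python
# Propagating interval bounds through an odds-form computation.
# All interval endpoints below are made up for illustration.
def odds_interval(ratio_intervals, prior_odds=1.0):
    lo = hi = prior_odds
    for r_lo, r_hi in ratio_intervals:
        lo *= r_lo
        hi *= r_hi
    return lo, hi

# Gospels term pinned to exactly 1 in both scenarios, one other term uncertain:
pinned = odds_interval([(1.0, 1.0), (0.5, 2.0)])
# Admitting even modest disagreement about the Gospels term as well:
spread = odds_interval([(0.5, 2.0), (0.5, 2.0)])

assert pinned == (0.5, 2.0)
assert spread == (0.25, 4.0)   # the admitted uncertainty widens the bounds
```

So a point estimate of 1 for the ratio is a substantive claim: it asserts not just "I find no argument either way" but "there is zero uncertainty about this term in both scenarios".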
Tim Hendrix wrote:
    (...) Your simulations do not use his probabilities, as you write yourself; mine do. If we do use Carrier's probabilities, your conclusion simply does not hold, as you can check yourself. Once again I stress this is a basic point of error analysis.

JohanRonnblom wrote:
    It's not error analysis at all, but just bogus numbers thrown into a hat. If you apply that 'method' to any subject, you will find that the more reasons we actually have to believe that something is true, the more easily we are just victims of bias. If we have 1000 independent samples of DNA from the crime scene, analyzed by 1000 different labs, well, then in your model that makes the conclusion more uncertain than if we have only 1 sample, because any 'bias' will inflate the error.

Firstly, it boggles the mind that you can so flatly accuse me of using "bogus numbers" when I use Carrier's exact numbers and you do not.
Secondly, you are simply asserting that your DNA example supports your conclusion, with absolutely no reasons given. To substantiate your claim about the DNA example, you would first have to provide a relevant statistical analysis of the DNA example, define bias in that context, and show that the effect you speak of holds. I hope you will try to do this, as I am sure you will learn relevant differences between how statistics is applied to DNA and how Carrier applies it.
One such difference is that the frequency of the alleles used in DNA evidence is something that is estimated reliably and with quantifiable errors. In other words, this alone puts us outside the realm of simply guessing the various factors, as in OHJ. Now, you can claim that statistical analysis is still based on some assumptions: true, but again, those assumptions can be examined experimentally; they do not take the form of one long sequence of "My subjective estimate of the probability of X is ...".
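The contrast can be made concrete (the counts below are invented, but the method is standard): an allele frequency comes from a sample, so the estimation error of that frequency is itself quantifiable.

```python
import math

# Invented counts for illustration: an allele observed 120 times among
# 1000 sampled chromosomes. The frequency estimate carries a computable
# standard error -- unlike a bare subjective probability assignment.
count, n = 120, 1000
p_hat = count / n
se = math.sqrt(p_hat * (1.0 - p_hat) / n)        # standard error of p_hat
lo, hi = p_hat - 1.96 * se, p_hat + 1.96 * se    # approximate 95% interval

print(f"allele frequency {p_hat:.3f}, 95% CI ({lo:.3f}, {hi:.3f})")
```

The interval here is a property of the sampling procedure, checkable against more data; nothing analogous exists for a chain of unanchored subjective estimates.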