What makes a theory convincing
The supposition is that there is rational warrant for the judgment that current theories are not truthlike. The flaw with this kind of sweeping generalization is precisely that it disregards the fresh, strong evidence there is for current theories: it renders current evidence totally irrelevant to the issue of their probability of being true.
Surely this is unwarranted, not only because it disregards potentially important differences in the quality and quantity of the evidence there is for current theories, differences that would justify treating current theories as better supported by the available evidence than past theories were by the then-available evidence, but also because it makes a mockery of looking for evidence for scientific theories at all.
If I know that X is more likely than Y and that this relation cannot change by doing Z, there is no point in doing Z. If we think of the pessimistic argument not as inductive but as a warrant-remover argument, and if we also think that the fate of past theories should have a bearing on what we are warranted in accepting now, we should think of its structure differently.
It has been argued by Psillos (chapter 5) that we should think of the pessimistic argument as a kind of reductio. If we view the historical challenge this way, premise B of argument P is critical. It is meant to capture radical discontinuity in theory-change, stated in the material mode (Psillos). Unless there are past successful theories which are warrantedly deemed not to be truthlike, premise B cannot be sustained and the warrant-removing reductio of A fails.
If C can be substantiated, success cannot be used to warrant the claim that current theories are true. C can be substantiated only by examining past successful theories and their fate; the history of science is thereby essentially engaged. The realist response has come to be known as the divide et impera strategy for refuting the pessimistic argument. The focus of this strategy was on rebutting the claim that the truth of current theories implies that past theories cannot be deemed truthlike.
To defend realism, realists needed to be selective in their commitments. This selectivity was developed by Kitcher and independently by Psillos. One way to be selective is to draw a distinction between the working posits of a theory, viz. those that fuel its empirical successes, and its idle posits (Kitcher; Psillos). The underlying thought is that the empirical successes of a theory do not indiscriminately support all theoretical claims of the theory; rather, the empirical support is differentially distributed among the various claims of the theory according to the contribution they make to the generation of the successes.
It is worth noting that, methodologically, the divide et impera strategy recommended that the historical challenge to realism can only be met by looking at the actual successes of past successful theories and by showing that those parts of past theories that fuelled their successes were retained in subsequent theories.
Against this point it has been argued that the realist strategy proceeds in two steps. The first is to make the claim of continuity or convergence plausible, viz. that the constituents of past theories which fuelled their successes are retained in current theories. But this first step does not establish that the convergence is to the truth. For this claim to be made plausible a second argument is needed, viz. an inference to the best explanation of the convergence. So there is, after all, entitlement to move from convergence to truthlikeness, insofar as truthlikeness is the best explanation of this convergence.
He says: "the apparent convergence of truth and the sources of success in past theories is easily explained by the simple fact that both kinds of retrospective judgements have a common source in our present beliefs about nature."
It has been claimed by Psillos that the foregoing objection is misguided. The problem is this. There are the theories scientists currently endorse and there are the theories that had been endorsed in the past. Some but not all of them were empirically successful perhaps for long periods of time.
They were empirically successful irrespective of the fact that, subsequently, they came to be replaced by others. This replacement was a contingent matter that had to do with the fact that the world did not fully co-operate with the then extant theories: some of their predictions failed; or the theories became overly ad hoc or complicated in their attempt to accommodate anomalies, or what have you.
The replacement of theories by others does not cancel out the fact that the replaced theories were empirically successful. Even if scientists had somehow failed to come up with new theories, the old theories would not have ceased to be successful. So success is one thing, replacement is another.
Hence, it is one thing to inquire into what features of some past theories accounted for their success and quite another to ask whether these features were such that they were retained in subsequent theories of the same domain. These are two independent issues and they can be dealt with both conceptually and historically independently.
One should start with some past theories and—bracketing the question of their replacement—try to identify, on independent grounds, the sources of their empirical success; that is, to identify those theoretical constituents of the theories that fuelled their successes.
When a past theory has been, as it were, anatomised, we can then ask the independent question of whether there is any sense in which the sources of success that the anatomy has identified are present in our current theories.

A refinement of the divide et impera move against PI has been suggested by Peter Vickers. He argues that the onus of proof lies with the antirealist: the antirealist has to reconstruct the derivation of a prediction, identify the assumptions that merit realist commitment and then show that at least one of them is not truthlike by our current lights.
But then, Vickers adds, all the realist needs to show is that the specific assumptions identified by the antirealist do not merit realist commitment. It should be noted that this is exactly the strategy recommended by Psillos, who aimed to show, using specific cases, that various assumptions, such as that heat is a material substance in the case of the caloric theory of heat, do not merit realist commitment, because there are weaker assumptions that fuel the derivation of the successful predictions.
Vickers generalizes this strategy by arguing as follows: take a hypothesis H that is taken to be employed in the derivation of a prediction P but does not merit realist commitment; the realist need only show that the derivational work is in fact done by some weaker assumption which does merit such commitment.

An instance of the divide et impera strategy is structural realism. This view has been associated with John Worrall, who revived the relationist account of theory-change that emerged in the beginning of the twentieth century. In opposition to scientific realism, structural realism restricts the cognitive content of scientific theories to their mathematical structure together with their empirical consequences.
But, in opposition to instrumentalism, structural realism suggests that the mathematical structure of a theory represents the structure of the world (real relations between things). Against PI, structural realism contends that there is continuity in theory-change, but this continuity is at the level of mathematical structure.
Structural realism was independently developed by Grover Maxwell in an attempt to show that the Ramsey-sentence approach to theories need not lead to instrumentalism.
Ramsey-sentences go back to a seminal idea by Frank Ramsey. The key idea was that a Ramsey-sentence satisfies both conditions (i) and (ii). A key problem with Ramsey-sentence realism is this: though a Ramsey-sentence of a theory may be empirically inadequate, and hence false, if it is empirically adequate (if, that is, the structure of the observable phenomena is embedded in one of its models), then it is bound to be true.
More recently, David Papineau has argued that if we identify the theory with its Ramsey-sentence, it can be argued that past theories are approximately true if there are entities which satisfy, or nearly satisfy, their Ramsey-sentences. The advantage of this move, according to Papineau, is that the issue of referential failure is bypassed when assessing theories for approximate truth, since the Ramsey sentence replaces the theoretical terms with existentially bound variables.
But, as Papineau admits, the force of the historical challenge to realism is not thereby thwarted, for it may well be the case that the Ramsey-sentences of most past theories are not satisfied, not even nearly so.

In the more recent literature, there has been considerable debate as to how exactly we should understand PI. PI can … be described as a two-step worry. First, there is an assertion to the effect that the history of science contains an impressive graveyard of theories that were previously believed [to be true], but subsequently judged to be false … Second, there is an induction on the basis of this assertion, whose conclusion is that current theories are likely future occupants of the same graveyard.
Yet it is plausible to think that, qua an inductive argument, history-based pessimism is bound to fail. The key point here is that the sampling of theories which constitutes the inductive evidence is neither random nor otherwise representative of theories in general. It has been argued that, seen as an inductive argument, PI is fallacious: it commits the base-rate fallacy (cf. Lewis). If in the past there have been many more false theories than true ones, if, in other words, truth has been rare, it cannot be concluded that there is no connection between success and truth.
Take S to stand for success and not-S for failure; analogously, take T to stand for the truth of a theory and not-T for its falsity. Assume that there is a very high true-positive rate P(S|T) (successful and true theories) and a small false-positive rate P(S|not-T) (successful but false theories). We may then ask: how likely is it that a theory is true, given that it is successful? By Bayes' theorem, P(T|S) = P(S|T)P(T) / [P(S|T)P(T) + P(S|not-T)P(not-T)], so if the base rate P(T) is very low, P(T|S) can be low even though the true-positive rate is high and the false-positive rate is low.
But this does not show that there is no connection between success and truth. It is still the case that the false positives are low and the true positives are high. The low probability is due to the fact that truth is rare, or that falsity is much more frequent.
Similarly, the probability that a theory is false given that it is successful, i.e. P(not-T|S), may be high. As Peter Lewis put it: at a given time in the past, it may well be that false theories vastly outnumber true theories. In that case, even if only a small proportion of false theories are successful, and even if a large proportion of true theories are successful, the successful false theories may outnumber the successful true theories.
So the fact that successful false theories outnumber successful true theories at some time does nothing to undermine the reliability of success as a test for truth at that time, let alone at other times.
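The base-rate point can be made concrete with a small calculation. This is an illustrative sketch: the particular rates below (5% base rate, 90% true-positive rate, 10% false-positive rate) are invented numbers, not figures from Lewis or anyone else in the debate.

```python
# Base-rate fallacy, illustrated: even when success is a reliable "test"
# (true positives high, false positives low), a low base rate of true
# theories makes P(true | successful) small.

def posterior_true_given_success(p_true, p_success_given_true, p_success_given_false):
    """Bayes' theorem: P(T|S) = P(S|T)P(T) / [P(S|T)P(T) + P(S|~T)P(~T)]."""
    numerator = p_success_given_true * p_true
    denominator = numerator + p_success_given_false * (1.0 - p_true)
    return numerator / denominator

# Illustrative assumptions: truth is rare among past theories (5%),
# true theories are almost always successful (90%),
# false theories are rarely successful (10%).
p = posterior_true_given_success(0.05, 0.9, 0.1)
print(round(p, 3))  # 0.321: most successful theories are false
```

Note that success still raises the probability of truth from the 5% base rate to about 32%, which is Lewis's point: a majority of false theories among the successful ones does not show that success is an unreliable test for truth.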
Seen in this light, PI does not discredit the reliability of success as a test for truth of a theory; it merely points to the fact that truth is scarce among past theories. Challenging the inductive credentials of PI has acquired a life of its own.
A standard objection is that the inductive basis of PI is unrepresentative: theories are diverse enough over time, structure and content that we cannot take a few of them, not picked randomly, as representative of all, and project the characteristics shared by those picked onto theories in general.
In particular, the list that Laudan produced is not a random sample of theories. All of the theories on it predate the twentieth century, and all were chosen solely on the basis that they had had some successes, irrespective of how robust those successes were.
Things would be different if we had a random sampling of theories. Mizrahi drew such a random sample of 40 theories, which were then divided into three groups: accepted theories, abandoned theories and debated theories. Mizrahi notes that these randomly selected data cannot justify an inductively drawn conclusion that most successful theories are false; on the contrary, an optimistic induction would be more warranted. Mizrahi has since come back to the issue of random sampling and has attempted to show that the empirical evidence is against PI:
If the history of science were a graveyard of dead theories and abandoned posits, then random samples of scientific theories and theoretical posits would contain significantly more dead theories and abandoned posits than live theories and accepted posits.
It is not the case that random samples of scientific theories and theoretical posits contain significantly more dead theories and abandoned posits than live theories and accepted posits.
Therefore, it is not the case that the history of science is a graveyard of dead theories and abandoned posits. A similar argument has been defended by Park. This view has been adopted by Michael Devitt too, though restricted to entities.
In a similar fashion but focusing on current theories, Doppelt claims that realists should confine their commitment to the approximate truth of current best theories, where best theories are those that are both most successful and well established.
The asymmetry between current best theories and past ones is such that the success of current theories is of a different kind than the success of past theories. The difference, Doppelt assumes, is so big that the success of current theories can only be explained by assuming that they are approximately true, whereas the explanation of the success of past theories does not require this commitment.
If this is right, there is sufficient qualitative distance between past theories and current best ones to block the pessimistic induction (Doppelt). This singular degree of empirical confirmation amounts to raising the standards of empirical success to a level unreachable by past theories.
The standards of empirical success change more slowly than theories; hence it is not very likely that current standards of empirical success will change any time soon. It has been argued, however, that Doppelt cannot explain the novel predictive success of past theories without conceding that they had truthlike constituents (cf. Alai).

A different strategy turns on the point that the history of science does not offer a representative sample of the totality of theories, and hence should not be used to feed the historical pessimism of PI.
In order to substantiate this, Fahrbach suggested, based on extensive bibliometric data, that over the last three centuries the number of papers published by scientists, as well as the number of scientists themselves, has grown exponentially, with a doubling rate of 15–20 years.
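A back-of-the-envelope calculation shows why a doubling rate of this kind matters. The sketch below takes the 20-year doubling time from the passage above; the 60-year window is an arbitrary illustrative choice, and the formula idealizes scientific output as purely exponential throughout.

```python
# If cumulative scientific output doubles every d years, then the share of
# all output ever produced that falls within the last n years is 1 - 2^(-n/d).

def share_of_recent_work(doubling_time_years, window_years):
    """Fraction of total cumulative output produced in the last `window_years`."""
    return 1.0 - 2.0 ** (-window_years / doubling_time_years)

# With a 20-year doubling time, the last 60 years (three doublings)
# contain 1 - 2^-3 = 87.5% of all scientific work ever done.
print(share_of_recent_work(20, 60))  # 0.875
```

On this idealization, the theories Laudan lists, all predating the twentieth century, belong to a tiny early fraction of the total record, which is the sense in which they cannot be a representative sample.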
As such, the sample is totally unrepresentative of theories in total, and hence the pessimistic conclusion does not follow.

Within falsificationism, however, some contradictions have arisen, and from these three versions of falsificationism have been formulated.
Falsificationism is built around ideas that are supported by evidence and consistent with current knowledge. Even the most widely accepted theories in the sciences must be able to be falsified by nature. In that way, theories can by definition never be proven to be completely right, and therefore never stand untouched by opposing arguments.
It is the nature of theories that this essay is mostly concerned with. People will rely on one way of knowing more than the others because it provides stronger justification for them. In some cases people may use a way of knowing that other people would not consider, fitting the problem to the tool.
Many people prefer to rely on logic and reason to determine truth. Concrete facts legitimize information and provide justification for ideas. Especially in an area of knowledge like science, people rely heavily on logic and demand the presentation of evidence to support claims.
That is, is there even a discernible difference between logical and illogical inductions? This may be very difficult to answer. Whether the claims presented are justified is tested through a systematic approach known as the scientific method. The two distinct methods contained within the scientific method are the inductive and deductive processes. How do people assume something to be true although they have not seen, measured, or tested the very idea themselves?
How do the sciences use theories in order to convince populations everywhere? In order to begin, I will discuss key words featured in the title. This is also consistent with falsificationism, which considers all scientific knowledge to be falsifiable. For a scientific claim to be disproved, it must be testable, and it must therefore be based on what we can perceive around us.
While Popperian hypothetico-deductivism seemingly solves the issues with induction, it also has some flaws, which I will elucidate. Confirming evidence should not count except when it is the result of a genuine test of the theory; and this means that it can be presented as a serious but unsuccessful attempt to falsify the theory. Some genuinely testable theories, when found to be false, are still upheld by their admirers, for example by introducing some ad hoc auxiliary assumption, or by re-interpreting the theory ad hoc in such a way that it escapes refutation.
However, such a rescue either destroys or at least lowers the theory's scientific status. There have also been large increases in the accuracy and use of technology, ensuring that there is more empirical evidence on which theories can be based.
What is it about theories in the human sciences and natural sciences that makes them convincing?
Once you've listed how some theories have come into existence, look at who's doing the thinking. Big famous people? The common man or woman? People from certain regions or jobs or backgrounds? Look for similarities and differences. I don't know if it's dangerous to talk about this, but to me the question is not-so-subtly hinting at the non-sciences and their theories.
I feel like a part of this title is asking how these science-y theories can be more convincing than the artsy theories. That's one way to answer the question, but I don't know if you have to look into that at all. Anyways, now answer the question.
Why were certain theories convincing? Look at theories that we have discredited today in addition to theories that we still hold. You might say, "Experiments! Repeatable, precise experiments in this natural science make these theories convincing." I know that I wouldn't have any experience with social sciences other than history and a little bit of economics from my courses in IB. Pick the one you're most comfortable with, but ask about others if you want to branch out.
At every step be sure to connect your argument with examples from your life. This is just one way to approach the question. In figuring out how you want to dissect the title and bring it back together, you can come up with different angles because the question is pretty open-ended.
I'm done with TOK, but my friend just recently asked me to help her out. I would like to add that, if during this essay you have not covered the scientific method, you are in danger of having gone rather wrong. The important bit isn't so much analysing theories themselves -- the language, structure, presentation etc. I suggest looking at the ways of knowing involved in the scientific method for natural sciences and the reasons why we find these to be reliable.
Then look at ways of knowing involved in human sciences and look at how the scientific method differs. Is the scientific method exactly the same? Is the nature of the data the same? Personally I would say that the scientific method unites both sciences and that you need to address the reasons why we find the scientific method reliable: what is it about it as in the process that makes it convincing?
Then perhaps discuss how the scientific method is applied to different forms of knowledge in the human as opposed to the natural sciences, which might make a theory a little less convincing.

Just giving this a bump, I'm stuck for ideas, brainstorming atm, anyone got anything else to think about that's related?

I'd like to add that you would do better to define what a 'scientific method' is.
Bear in mind that the traditional idea of observation, induction towards hypothesis, experiment towards confirmation or disproof, etc is a central idea, but not what every scientist uses. Rather, these are components of a scientific method which may be deployed, with some modifications, towards the ideals of firstly achieving validity and reliability, and then utility or facility.
The point about human sciences like economics and psychology 'science-y things found in Group 3' is that they too attempt to employ the scientific method. But their inherent difficulty is the non-reproducibility of human behaviour which has to be overcome by statistical methods.
A 'hard science' theory can often be examined with just one case; a 'human science' theory has to be checked to see if it's generally true for most humans. This happens in some areas of plant and animal biology too, except that it might be considered unethical to do things to humans that we might do to a tapeworm or a plant.

Also, I found somewhere else that it actually refers to how theories fight opposition or resistance.
Well, a theory can only "fight opposition or resistance" if it carries 'weight', i.e. if it is convincing. Thus, the two 'definitions' are really the same thing. A theory which is 'false' cannot be preferred to a theory which is 'true' unless it better fits the definition of truth.
In a nutshell, a convincing theory is likely to be true. However, I don't think this is the road you want to go down. What you want to do is to answer the knowledge issue. The knowledge issue hidden in the question is likely to be: 'What are the values and limitations of using the scientific method in the natural sciences and social sciences in obtaining knowledge?'