Occam's Razor Defended



Occam's Razor states that we ought not to multiply entities unnecessarily; or, in more conventional terms, that we ought not to make unnecessary assumptions. William of Ockham (ca. 1287–1347) formulated this principle in response to the tendency among Neoplatonist philosophers to posit the existence of things or entities without any direct evidence or rational necessity (cf. Iamblichus' divine emanations and intermediaries). Occam's Razor can also be stated thus: the simplest solution consistent with the facts ought to be preferred to more complex ones.

From a critical rationalist perspective, this is a sound scientific principle: the greater the number of assumptions, the more likely it is that at least one of them will be wrong. If your hypothesis rests on fewer assumptions, it is statistically less likely to turn out to be false. And if your hypothesis contains assumptions that are currently unfalsifiable (though they might in principle be either true or false), then it is statistically less likely to be true, and there is a greater chance that some fact currently unknown will falsify it in the future. A theory that relies mostly on empirical observation and only one assumption is less likely to be wrong than a rival theory based on the same empirical evidence but burdened with many assumptions. Each unnecessary assumption is a potential weak point. Thus, although the simpler hypothesis and the more complex one may both be possible, it is more rational to place your wager on the simpler one, since it is the more probable of the two.
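To make the probabilistic intuition concrete, here is a minimal sketch (my own illustration, not from the original post). It assumes a hypothesis rests on n independent auxiliary assumptions, each correct with the same probability p; under those assumptions the chance that the whole set holds is p^n, which shrinks quickly as n grows.

```python
# Minimal sketch of the probabilistic intuition behind Occam's Razor.
# Illustrative assumption: each of n auxiliary assumptions is independent
# and correct with the same probability p.

def chance_all_assumptions_hold(p: float, n: int) -> float:
    """Probability that all n independent assumptions are correct: p ** n."""
    return p ** n

if __name__ == "__main__":
    for n in (1, 3, 10):
        print(f"{n} assumption(s): {chance_all_assumptions_hold(0.9, n):.3f}")
    # 1 assumption(s): 0.900
    # 3 assumption(s): 0.729
    # 10 assumption(s): 0.349
```

Real assumptions are neither equally probable nor independent, so the numbers are only illustrative; the qualitative point is that every added assumption multiplies in another chance of error.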

Karl Popper liked to think of the epistemological value of simplicity in terms of generality. A simple theory that can be stated in basic terms (e.g. an object of greater mass attracts an object of lesser mass) will usually be more generally applicable than a more complex theory. And the more generally applicable a theory is, the more opportunities there are for it to be falsified; the theory thereby becomes more testable.

"Above all, our theory explains why simplicity is so highly desirable. To understand this there is no need for us to assume 'a principle of economy of thought' or anything of the kind. Simple statements, if knowledge is our object, are to be prized more highly than less simple ones because they tell us more; because their empirical content is greater; and because they are better testable."--Karl Popper (The Logic of Scientific Discovery)

For instance, string theory makes a number of assumptions that are potentially falsifiable: that particles are really one-dimensional strings, that the vibrational state of a string gives a particle its properties, that there are 10 dimensions (i.e. dimensions beyond the 3-dimensional space and 1-dimensional time that we can observe), that a hypothetical particle known as the graviton mediates the force of gravity, etc. Supposing that some other, simpler theory were able to account for the phenomena that scientists observe, it would be more rational to embrace that theory over string theory. String theory is only taken seriously by scientists because it accounts for a lot of things that no other theory can currently account for. Nevertheless, string theory is probably incorrect because of the number of assumptions it must make. If someone came up with a simpler theory with only one or two assumptions apart from the empirical data, that theory would be logically preferable to string theory. String theory may very well turn out to be true, but it doesn't make a whole lot of sense to assume that it is true unless you have no simpler explanation. And that is part of Occam's Razor: it holds that you ought to embrace the simplest theory consistent with the data, so a more complex theory is preferable only if the simpler theories are not as consistent with the data.


One of the flaws of Occam's Razor is that it gives people a false license to dismiss facts and exclude anomalous data that do not fit their more simplistic paradigms or theories. If a theory is absolutely correct, then it should account for all of the data observed and gathered. If it cannot -- and people are too intellectually lazy to do the work of building the more complex models that would explain the full breadth of empirical data -- then Occam's Razor becomes a lazy intellect's Ginsu knife, an excuse not to do the work. In those cases, Occam's Razor is being misused, and it happens all the time, because some scientists refuse to do the extra work to account for anomalies.

Precisely... in those instances, Occam's Razor is being misused. There is no flaw in Occam's Razor itself, since it comes with the built-in caveat that you can multiply assumptions when it is necessary to make the theory consistent with the data. I would argue that this is why people who appeal to Occam's Razor in arguments for creationism are mistaken: the creationist account is simpler, but it is not more consistent with the data, so it doesn't really pass the test of Occam's Razor.

Yes, but it occurs more often than just that. There is anomalous data with respect to human origins and lost civilizations (lazy anthropologists). Anomalous data regarding archaeological finds (lazy archaeologists). Anomalous data in physics--e.g., the cutting/exclusion of æther from physics frameworks (lazy physicists). Plus, much more. I'm speaking more specifically of some scientists and the paradigms that they peddle. I see holes. They make excuses and lean on Occam's Razor to dismiss evidence and facts. I've challenged institutional professors and they prefer the lazy route. Nobody wants to go against the consensus paradigm, even if it's full of holes.

Well, I would assume that you mean anomalous data like those presented in the book Forbidden Archeology by Michael Cremo. I would argue that those apparent anomalies need to be critically analyzed to see whether they are genuine, and then you need to look for other, similar anomalies. There's good reason to doubt the authenticity of most archeological anomalies. After the time of Darwin, there were a lot of for-profit expeditions, much of it motivated by the desire for fame or money, so many bunk discoveries were touted as significant. But the more reliable discoveries and data, which are better documented and give us no particular reason for doubt, are generally what scientists prioritize. Anomalous data are marginalized until there is enough evidence to validate them. So I think modern archeology and anthropology are generally reliable, but then all knowledge is only conjectural, never absolute, so you have to keep that in mind. There's a lot we don't know, will never know, and simply can't know.

As for physics, that's just a big can of worms. For a lot of it, there's simply no theory yet that makes sense of the data. I mean, all of quantum mechanics and theoretical physics is about reconciling pretty big anomalies, so I don't think there's too great a tendency to ignore them.
