An essay by Peter Woit, a theoretical physicist, on the scientific status of string theory.
An excerpt from the essay:
Last month's conference in Princeton included two remarkable talks by prominent physicists, both of whom invoked philosophy in a manner unprecedented for this kind of scientific gathering. On the first day, Paul Steinhardt attacked the current practice of inflationary cosmology as able to accommodate any experimental result, and so, on philosophical grounds, no longer science. He included a video clip of Richard Feynman characterizing this sort of thing as cargo cult science. On the final day, David Gross interpreted Steinhardt's talk as implicitly applying to string theory, then went on to invoke a philosopher's new book to defend string theory, arguing that string theorists needed to read the book in order to learn how to defend what they do as science.
The book in question was Richard Dawid's String Theory and the Scientific Method, which comes with blurbs from Gross and string theorist John Schwarz on the cover. Dawid is a physicist turned philosopher, and he makes the claim that string theory shows that conventional ideas about theory confirmation need to be revised to accommodate new scientific practice and the increasing significance of non-empirical theory confirmation. The issues of this kind raised by string theory are complex, so much so that I once decided to write a whole book on the topic. A decade later I think the arguments of that book still hold up well, with its point of view about string theory now much more widespread among working physicists. One thing I wasn't aware of back then was the literature in philosophy of science about progressive vs. degenerating research programs, which now seems to me quite relevant to the question of how to think about evaluating string theory.
I've written a bit about the Dawid book and earlier work of his, although as for any serious book there's of course much more to say, even if I lack the time or energy for it. Recently an interview with Dawid appeared, entitled "String theory and post-empiricism," which summarizes his views and makes some claims about string theory critics which deserve a response, so that will be the topic here. In the interview he says:
I think that those critics make two mistakes. First, they implicitly presume that there is an unchanging conception of theory confirmation that can serve as an eternal criterion for sound scientific reasoning. If this were the case, showing that a certain group violates that criterion would per se refute that group's line of reasoning. But we have no god-given principles of theory confirmation. The principles we have are themselves a product of the scientific process. They vary from context to context and they change with time based on scientific progress. This means that, in order to criticize a strategy of theory assessment, it's not enough to point out that the strategy doesn't agree with a particular more traditional notion.
Richard Dawid, a philosopher with a doctorate in physics, who takes an opposing view, is interviewed here.
The fake papers had been submitted to conferences on computer science and engineering whose proceedings were published in specialised, subscription-only publications, Springer said.
"We are in the process of taking down the papers as quickly as possible," the German-based publisher said in a statement.
"This means that they will be removed, not retracted, since they are all nonsense."
That is, at one point, Winston's reply to O'Brien, and is probably what Orwell believed.
In June, John Crowley gave a talk at MoMA on The Future as Parable. From about 8 minutes in up to about 20 minutes in he talked about Orwell's 1984.
A short excerpt from what he said:
Here is another. In 1946, as he was conceiving 1984, George Orwell reviewed the writings of an American political philosopher and futurologist named James Burnham, whose work had made a deep impression on him. Burnham began his political life a Trotskyite and went on to become an editor of the National Review. In 1940, he published The Managerial Revolution, which foresaw the coming of a new order in human political and economic organization. Capitalism would soon disappear, but socialism wouldn't replace it. Instead, Burnham said, a managerial class of bureaucrats and technocrats and administrators was evolving that would replace both the old-fashioned business owner/entrepreneur and electoral politics. Private property would disappear but wouldn't be replaced by common ownership; the managers would make all decisions, distribute all wealth, retain all power. The rest of humanity would subsist as dependents, happily enough, controlled by propaganda. Meanwhile the clusters of small states, democratic or tyrannical or whatever, would vanish, to be replaced by a few huge combines: America, Europe plus western Asia, the Pacific East, the Soviet sphere. These would be continuously at war, though never able to dominate all the others. A kind of stasis would probably eventuate and last from then on, or at least for a very long time.
In Burnham's vision, as Orwell describes it, the only engine of history is the struggle for power: All historical changes finally boil down to the replacement of one ruling class by another. Talk about utopia or the classless society is bullshit (humbug, Orwell calls it). It is clear that Burnham is fascinated by the spectacle of power, says Orwell. There is a note of unmistakable relish over the cruelty and wickedness of the processes that are being discussed.
The question Burnham ought to ask but never does, Orwell says, is why this lust for power became the ruling human passion just at the time when the rule of the many by the few, which might once have been necessary to survival and the expansion of human culture, has become unnecessary. Orwell predicts, astutely, that the Russian régime will either democratise itself, or it will perish. The huge, invincible, everlasting slave empire of which Burnham appears to dream will not be established, or, if established, will not endure, because slavery is no longer a stable basis for human society.
If that's his reasoned opinion about Burnham's dream, then why did he write a book warning of its possibility? In 1984, dystopianism has arisen whole, expanded and crystallised, and conquered the world in just the forty years since the end of the Nazi empire, and it seems set to last, a boot stamping on a human face, forever. But it won't and can't. The only possibility was that Orwell was building a Burnham world precisely in order to contradict him by going further than even Burnham could. 1984 is not a warning, much less a prediction, but a parable. It doesn't mean what it seems at first to mean, just as the parables of Jesus don't mean what they seem at first to mean.
The article referenced in the OP says:
There is another important aspect of the Planck length. Relativity predicts that distances as measured by an observer in a fast-moving reference frame shrink: the so-called Lorentz contraction. But the Planck length is special: it is the only length that can be derived from the constants c, G, and h without adding some arbitrary constant, so it must have the same value in all reference frames; it cannot change according to a Lorentz contraction. This implies that relativity theory does not apply at this size scale. We need some new scientific explanation for this phenomenon, and stochastic space-time might provide it. The idea that the Planck length cannot be shortened by the Lorentz contraction suggests that it is a fundamental quantum, or unit, of length. As a result, volumes with dimensions smaller than the Planck length arguably don't exist. The Planck length, then, is a highly likely candidate for the size of a space-time grain, the smallest possible piece of space-time.
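The claim that c, G, and h single out one length can be checked directly by dimensional analysis. A minimal sketch, using rounded values of the constants and the reduced Planck constant (the conventional choice; the excerpt's "h" leaves the convention open):

```python
import math

# Rounded values of the three universal constants (SI units)
c = 2.998e8        # speed of light, m/s
G = 6.674e-11      # Newton's gravitational constant, m^3 kg^-1 s^-2
hbar = 1.055e-34   # reduced Planck constant, J s

# Dimensional analysis yields exactly one length: l_P = sqrt(hbar*G/c^3)
planck_length = math.sqrt(hbar * G / c**3)
print(f"Planck length: {planck_length:.2e} m")  # about 1.6e-35 m
```

No other combination of powers of these three constants has units of length, which is the sense in which the Planck length is "the only length" derivable from them.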
A description of the 2011 paper says:
Some theories suggest that the quantum nature of space should manifest itself at the Planck scale: the minuscule 10^-35 m, where a millimetre is 10^-3 m.
However, Integral's observations are about 10 000 times more accurate than any previous and show that any quantum graininess must be at a level of 10^-48 m or smaller.
"This is a very important result in fundamental physics and will rule out some string theories and loop quantum gravity theories," says Dr Laurent.
... asymptotically deterministic.
Yet Hoefer's paper, linked to from Pigliucci's blog, explicitly claims that we don't even know whether the question of the universe being deterministic is decidable; yet Rosenberg claims that he knows it is asymptotically deterministic at the macro level. And we are not just talking about quantum events with no large effects in the macro world: it may be impossible to determine whether chaotic systems are deterministic or stochastic:
The usual idealizing assumptions are made: no friction, perfectly elastic collisions, no outside influences. The ball's trajectory is determined by its initial position and direction of motion. If we imagine a slightly different initial direction, the trajectory will at first be only slightly different. And collisions with the straight walls will not tend to increase very rapidly the difference between trajectories. But collisions with the convex object will have the effect of amplifying the differences. After several collisions with the convex body or bodies, trajectories that started out very close to one another will have become wildly different: SDIC (sensitive dependence on initial conditions - Jim).
In the example of the billiard table, we know that we are starting out with a Newtonian deterministic system; that is how the idealized example is defined. But chaotic dynamical systems come in a great variety of types: discrete and continuous, 2-dimensional, 3-dimensional and higher, particle-based and fluid-flow-based, and so on. Mathematically, we may suppose all of these systems share SDIC. But generally they will also display properties such as unpredictability, non-computability, Kolmogorov-random behavior, and so on, at least when looked at in the right way, or at the right level of detail. This leads to the following epistemic difficulty: if, in nature, we find a type of system that displays some or all of these latter properties, how can we decide which of the following two hypotheses is true?
1. The system is governed by genuinely stochastic, indeterministic laws (or by no laws at all), i.e., its apparent randomness is in fact real randomness.
2. The system is governed by underlying deterministic laws, but is chaotic.
In other words, once one appreciates the varieties of chaotic dynamical systems that exist, mathematically speaking, it starts to look difficult, maybe impossible, for us to ever decide whether apparently random behavior in nature arises from genuine stochasticity, or rather from deterministic chaos. Patrick Suppes (1993, 1996) argues, on the basis of theorems proven by Ornstein (1974 and later), that "There are processes which can equally well be analyzed as deterministic systems of classical mechanics or as indeterministic semi-Markov processes, no matter how many observations are made." And he concludes that "Deterministic metaphysicians can comfortably hold to their view knowing they cannot be empirically refuted, but so can indeterministic ones as well." (Suppes (1993), p. 254)
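SDIC is easy to exhibit in a toy system. The sketch below uses the logistic map, a standard chaotic example (a stand-in for the billiard table of the quotation, not a model of it): two trajectories that start one part in a million apart end up macroscopically different.

```python
# Sensitive dependence on initial conditions in the logistic map
# x -> r*x*(1-x) at r = 4, a standard chaotic regime.

def trajectory(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = trajectory(0.400000)
b = trajectory(0.400001)   # initial conditions differ by only 1e-6

early_gap = abs(a[1] - b[1])                            # still tiny
late_gap = max(abs(x - y) for x, y in zip(a[30:], b[30:]))
print(early_gap, late_gap)  # the late gap grows to order 1
```

The perturbation roughly doubles at each step, so after a few dozen iterations it saturates at the size of the system itself, which is exactly why finite-precision observations cannot distinguish such a deterministic orbit from a genuinely random sequence.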
I don't see how Rosenberg can be on board with this and then claim that nature is asymptotically deterministic.
And, research indicates that animal brains have built-in chaotic subsystems - i.e. we can't know whether they are deterministic or not. For instance, this excerpt from Towards a scientific concept of free will as a biological trait: spontaneous actions and decision-making in invertebrates (http://rspb.royalsocietypublishing.org/content/early/2010/12/14/rspb.2010.2325.full):
A corresponding conclusion can be drawn from two earlier studies, which independently found that the temporal structure of the variability in spontaneous turning manoeuvres both in tethered and in free-flying fruitflies could not be explained by random system noise [63,64]. Instead, a nonlinear signature was found, suggesting that fly brains operate at criticality, meaning that they are mathematically unstable, which, in turn, implies an evolved mechanism rendering brains highly susceptible to the smallest differences in initial conditions (i.e. SDIC - Jim) and amplifying them exponentially. Put differently, fly brains have evolved to generate unpredictable turning manoeuvres. The default state also of flies is to behave variably. Ongoing studies are trying to localize the brain circuits giving rise to this nonlinear signature.
Results from studies in walking flies indicate that at least some component of variability in walking activity is under the control of a circuit in the so-called ellipsoid body, deep in the central brain. The authors tested the temporal structure in spontaneous bouts of activity in flies walking back and forth individually in small tubes and found that the power law in their data disappeared if a subset of neurons in the ellipsoid body was experimentally silenced. Analogous experiments have recently been taken up independently by another group and the results are currently being evaluated. The neurons of the ellipsoid body of the fly also exhibit spontaneous activity in live imaging experiments, suggesting a default-mode network also might exist in insects.
Even what is often presented to students as the simplest behaviour, the spinal stretch reflex in vertebrates, contains adaptive variability. Via the cortico-spinal tract, the motor cortex injects variability into this reflex arc, making it variable enough for operant self-learning. Jonathan Wolpaw and colleagues can train mice, rats, monkeys and humans to produce reflex magnitudes either larger or smaller than a previously determined baseline precisely because much of the deviations from this baseline are not noise but variability deliberately injected into the reflex. Thus, while invertebrates lead the way in the biological study of behavioural variability, the principles discovered there can be found in vertebrates as well.
One of the common observations of behavioural variability in all animals seems to be that it is not entirely random, yet unpredictable. The principle thought to underlie this observation is nonlinearity. Nonlinear systems are characterized by sensitive dependence on initial conditions. This means such systems can amplify tiny disturbances such that the states of two initially almost identical nonlinear systems can diverge exponentially from each other. Because of this nonlinearity, it does not matter (and it is currently unknown) whether the tiny disturbances are objectively random as in quantum randomness or whether they can be attributed to system, or thermal, noise. What can be said is that principled, quantum randomness is always some part of the phenomenon, whether it is necessary or not, simply because quantum fluctuations do occur. Other than that it must be a non-zero contribution, there is currently insufficient data to quantify the contribution of such quantum randomness. In effect, such nonlinearity may be imagined as an amplification system in the brain that can either increase or decrease the variability in behaviour by exploiting small, random fluctuations as a source for generating large-scale variability. A general account of such amplification effects had already been formulated as early as in the 1930s. Interestingly, a neuronal amplification process was recently observed directly in the barrel cortex of rodents, opening up the intriguing perspective of a physiological mechanism dedicated to generating neural (and by consequence behavioural) variability.
I can only read the first paragraph of the review without getting a subscription to TNR which I'm not interested in. That paragraph just contains a bunch of ridiculous looking questions. So I found another review of the book, and it looks like those questions and answers are directly from the book:
It's a seemingly simple notion, and one that many scientists and scientific-minded people would claim already to hew to, but it has surprisingly fraught implications. Rosenberg lays them out very early in Chapter 1, in a series of questions and answers. Is there a God? No. What is the nature of reality? What physics says it is. What is the purpose of the universe? There is none. Similarly, there's no meaning to life; you and I are here because of dumb luck, and there's no soul.
While Massimo Pigliucci hasn't yet reviewed this book, he has referred to it numerous times. For example:
I got so sick of the smug attitudes that Rosenberg, Coyne, Harris and others derive from their acceptance of determinism, obviously without having looked much into the issue, that I delved into the topic a bit more in depth myself. As a result, I've become agnostic about determinism, and I highly recommend the same position to anyone seriously interested in these topics (as opposed to anyone using his bad understanding of physics and philosophy to score rhetorical points).
A good starting point from which to get a grip on the nuances and complexities of discussions concerning determinism is a very nicely written article by Carl Hoefer in the Stanford Encyclopedia of Philosophy, as well as several of the primary sources cited there, particularly John Earman's Primer on Determinism.
I've seen some good reviews of Rosenberg's book; but most of the reviews I've read have panned it.
Dworkin gave the Einstein Lectures at the University of Bern on December 12, 13, and 14. Videos of his lectures (about 1 hour and 20 minutes each) can be found here:
Here is an excerpt from the abstract of the lectures:
A religious attitude involves moral and cosmic convictions beyond simply a belief in god: that people have an innate, inescapable responsibility to make something valuable of their lives and that the natural universe is gloriously, mysteriously wonderful. Religious people accept such convictions as matters of faith rather than evidence and as personality-defining creeds that play a pervasive role in their lives.
In these lectures I argue that a belief in god is not only not essential to the religious attitude but is actually irrelevant to that attitude. The existence or non-existence of a god does not even bear on the question of people's intrinsic ethical responsibility or their glorification of the universe. I do not argue either for or against the existence of a god, but only that a god's existence can make no difference to the truth of religious values. If a god exists, perhaps he can send people to Heaven or Hell. But he cannot create right answers to moral questions or instill the universe with a glory it would not otherwise have.
How, then, can we defend a religious attitude if we cannot rely on a god? In the first lecture I offer a godless argument that moral and ethical values are objectively real: They do not depend on god, but neither are they just subjective or relative to cultures. They are objective and universal. In the second lecture I concentrate on Einstein's own religion: his bewitchment by the universe. What kind of beauty might the vast universe be thought to hold? What analogy to more familiar sources of beauty is most suggestive? I propose that the beauty basic physicists really hope to find is the beauty of a powerful, profound mathematical proof. Godly religions insist that, though god explains everything, his own existence need not be explained, because he necessarily exists. Religious atheists like Einstein have, I believe, a parallel faith: that when a unifying theory of everything is found it will be not only simple but, in the way of mathematics, inevitable. They dream of a new kind of necessity: cosmic necessity.
a little bit more ...
Your post does not justify either issue as pertinent to the consideration of human limitations.
I'm going to consider only the potential limitations on human knowledge. My first consideration in this would be the historical development of the human brain. It developed through evolution via the natural selection of accidental mutations. Specifically, the human brain was selected for because it solved problems in the environment better than brains based on other accidental changes. We have to assume that our brains are a minimal-cost model that can work well with our bodies and solve environmental problems a little bit better than the next-best model. Since evolution did not require that our brain have an unlimited capacity to solve problems, but only a slightly better capacity to solve the problems that were encountered in its environment, I believe that there is an a fortiori case that the human brain has limitations.
Is there any evidence for such limitations? Yes. For example, the human brain has a tendency to categorize knowledge. We divide up problems into subject matter categories, entities into life and non-life categories, life into plant and animal, and then subdivide these categories. To a certain extent, we see the world in terms of these categories; they have a certain determinative effect on our knowledge. Is there a physical constraint on the categories that we choose? An excerpt from Philosophy in the Flesh by George Lakoff and Mark Johnson (Basic Books, pp. 18-19):
To take a concrete example, each human eye has 100 million light-sensing cones, but only about 1 million fibers leading to the brain. Each incoming image therefore must be reduced in complexity by a factor of 100. That is, information in each fiber constitutes a "categorization" of the information from about 100 cells. Neural categorization of this sort exists throughout the brain, up through the highest levels of categories that we can be aware of. When we see trees, we see them as trees, not just as individual objects distinct from one another. The same with rocks, houses, windows, doors, and so on.
A small percentage of our categories have been formed by conscious acts of categorization, but most are formed automatically and unconsciously as a result of functioning in the world. Though we learn new categories regularly, we cannot make massive changes in our category systems through conscious acts of recategorization (though, through experience in the world, our categories are subject to unconscious reshaping and partial change). We do not, and cannot, have full conscious control over how we categorize. Even when we think we are forming new categories, our unconscious categories enter into our choice of possible conscious categories.
Most important, it is not just that our bodies and brains determine that we will categorize; they also determine what kinds of categories we will have and what their structures will be. Think of the properties of the human body that contribute to the peculiarities of our conceptual system. We have eyes and ears, arms and legs that work in certain very definite ways and not in others. We have a visual system, with topographic maps and orientation-sensitive cells, that provides structures for our ability to conceptualize spatial relations. Our abilities to move in the ways we do and to track the motion of other things give motion a major role in our conceptual system. The fact that we have muscles and use them to apply force in certain ways leads to the structure of our system of causal concepts. What is important is not just that we have bodies and that thought is somehow embodied. What is important is that the peculiar nature of our bodies shapes our very possibilities for conceptualization and categorization.
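The 100-to-1 compression Lakoff and Johnson describe can be pictured with a toy sketch. The numbers are scaled down and the averaging rule is a hypothetical stand-in (real retinal processing is far more structured); the point is only that each output value "categorizes" a block of inputs:

```python
import random

random.seed(0)
n_cones, block = 10_000, 100   # scaled-down stand-ins for ~1e8 cones, 100:1
cones = [random.random() for _ in range(n_cones)]  # simulated cone signals

# Each "fiber" carries one summary value per block of ~100 cone signals;
# here the summary is a simple average, a crude stand-in for categorization.
fibers = [sum(cones[i:i + block]) / block for i in range(0, n_cones, block)]

print(len(cones), len(fibers))  # 10000 100: a factor-of-100 reduction
```

Whatever the real summarizing rule is, the structural fact remains: information that distinguishes individual cones within a block never reaches the brain, which is the sense in which the categorization is physically constrained.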
Aquinas' argument is (in part):
If that by which it is put in motion be itself put in motion, then this also must needs be put in motion by another, and that by another again.
But this cannot go on to infinity, because then there would be no first mover, and, consequently, no other mover
Therefore it is necessary to arrive at a first mover, put in motion by no other
As to your question:
Read my post #10:
This is not to claim that Aquinas' arguments are right. They have been rather famously refuted - for instance, by Kant. Dawkins could have just cited Kant.
Your interest is quite beside the point. You asked why Dawkins was attacked. In this instance, his claims were attacked because he claimed:
But, as I stated in my post, Summa Theologica goes on to derive God's attributes based on the previously given existence arguments. Again, I'm not arguing for the correctness of Aquinas' arguments; just that to claim they are not even there is, to say the least, extreme sloppiness, and, as such, subject to attack.