
Jim__'s Journal

July 11, 2014

String theory and post-empiricism

An essay by Peter Woit, a theoretical physicist, on the scientific status of string theory.

An excerpt from the essay:

...

Last month’s conference in Princeton included two remarkable talks by prominent physicists, both of whom invoked philosophy in a manner unprecedented for this kind of scientific gathering. On the first day, Paul Steinhardt attacked the current practice of inflationary cosmology as able to accommodate any experimental result, so, on philosophical grounds, no longer science [2]. He included a video clip of Richard Feynman characterizing this sort of thing as “cargo cult physics.” On the final day, David Gross interpreted Steinhardt’s talk as implicitly applying to string theory, then went on to invoke a philosopher’s new book to defend string theory, arguing that string theorists needed to read the book in order to learn how to defend what they do as science [3].

The book in question was Richard Dawid’s String Theory and the Scientific Method [4], which comes with blurbs from Gross and string theorist John Schwarz on the cover. Dawid is a physicist turned philosopher, and he makes the claim that string theory shows that conventional ideas about theory confirmation need to be revised to accommodate new scientific practice and the increasing significance of “non-empirical theory confirmation.” The issues of this kind raised by string theory are complex, so much so that I once decided to write a whole book on the topic [5]. A decade later I think the arguments of that book still hold up well, with its point of view about string theory now much more widespread among working physicists. One thing I wasn’t aware of back then was the literature in philosophy of science about “progressive” vs. “degenerating” research programs, which now seems to me quite relevant to the question of how to think about evaluating string theory.

I’ve written a bit about the Dawid book and earlier work of his [6], although as for any serious book there’s of course much more to say, even if I lack the time or energy for it. Recently an interview with Dawid appeared, entitled “String theory and post-empiricism,” which summarizes his views and makes some claims about string theory critics which deserve a response, so that will be the topic here. In the interview he says:

I think that those critics make two mistakes. First, they implicitly presume that there is an unchanging conception of theory confirmation that can serve as an eternal criterion for sound scientific reasoning. If this were the case, showing that a certain group violates that criterion would per se refute that group’s line of reasoning. But we have no god-given principles of theory confirmation. The principles we have are themselves a product of the scientific process. They vary from context to context and they change with time based on scientific progress. This means that, in order to criticize a strategy of theory assessment, it’s not enough to point out that the strategy doesn’t agree with a particular more traditional notion.
...


more ...



Richard Dawid, a philosopher with a doctorate in physics who takes an opposing view, is interviewed here.



February 28, 2014

Science publisher fooled by gibberish papers

From phys.org:

Publisher of science journals Springer said Thursday it would scrap 16 papers from its archives after they were revealed to be computer-generated gibberish.

The fake papers had been submitted to conferences on computer science and engineering whose proceedings were published in specialised, subscription-only publications, Springer said.

"We are in the process of taking down the papers as quickly as possible," the German-based publisher said in a statement.

"This means that they will be removed, not retracted, since they are all nonsense."

more ...


October 28, 2013

"It is impossible to found a civilization on fear and hatred and cruelty. It would never endure."

That is, at one point, Winston's reply to O'Brien, and is probably what Orwell believed.

In June, John Crowley gave a talk at MoMA on The Future as Parable. From about the 8-minute mark to about the 20-minute mark, he talked about Orwell's 1984.

A short excerpt from what he said:


...

Here is another. In 1946, as he was conceiving 1984, George Orwell reviewed the writings of an American political philosopher and futurologist named James Burnham, whose work had made a deep impression on him. Burnham began his political life a Trotskyite and went on to become an editor of the National Review. In 1940, he published The Managerial Revolution, which foresaw the coming of a new order in human political and economic organization. Capitalism would soon disappear, but socialism wouldn’t replace it. Instead, Burnham said, a managerial class of bureaucrats and technocrats and administrators was evolving that would replace both the old-fashioned business owner/entrepreneur and electoral politics. Private property would disappear but wouldn’t be replaced by common ownership; the managers would make all decisions, distribute all wealth, retain all power. The rest of humanity would subsist as dependents, happily enough, controlled by propaganda. Meanwhile the clusters of small states, democratic or tyrannical or whatever, would vanish, to be replaced by a few huge combines — America, Europe plus western Asia, the Pacific East, the Soviet sphere. These would be continuously at war, though never able to dominate all the others. A kind of stasis would probably eventuate and last from then on, or at least for a very long time.

In Burnham’s vision, as Orwell describes it, the only engine of history is the struggle for power: “All historical changes finally boil down to the replacement of one ruling class by another.” Talk about utopia or “the classless society” is bullshit (“humbug,” Orwell calls it). “It is clear that Burnham is fascinated by the spectacle of power,” says Orwell. “There is a note of unmistakable relish over the cruelty and wickedness of the processes that are being discussed.”

...

The question Burnham ought to ask but never does, Orwell says, is why this lust for power became the ruling human passion just at the time when the rule of the many by the few, which might once have been necessary to survival and the expansion of human culture, has become unnecessary. Orwell predicts, astutely, that “the Russian régime will either democratise itself, or it will perish. The huge, invincible, everlasting slave empire of which Burnham appears to dream will not be established, or, if established, will not endure, because slavery is no longer a stable basis for human society.”

If that’s his reasoned opinion about Burnham’s dream, then why did he write a book warning of its possibility? In 1984, dystopianism has arisen whole — “expanded” and “crystallising” — and conquered the world in just the forty years since the end of the Nazi empire, and apparently it seems set to last, a boot stamping on a human face, forever. But it won’t and can’t. The only possibility was that Orwell was building a Burnham world precisely in order to contradict him by going further than even Burnham could. 1984 is not a warning, much less a prediction, but a parable. It doesn’t mean what it seems at first to mean, just as the parables of Jesus don’t mean what they seem at first to mean.

...

June 8, 2013

A paper from 2011 disagrees that the size of quantum graininess can be the Planck length.

The article referenced in the OP says:

The Planck length turns out to be a very short distance: about 10⁻³⁵ meters. It is a hundred million trillion times smaller than the diameter of a proton—too small to measure and, arguably, too small to ever be measured.

...

There is another important aspect of the Planck length. Relativity predicts that distances as measured by an observer in a fast-moving reference frame shrink—the so-called Lorentz contraction. But the Planck length is special—it’s the only length that can be derived from the constants c, G, and h without adding some arbitrary constant—so it may retain the same value in all reference frames, not subject to any Lorentz contraction. This implies that relativity theory does not apply at this size scale. We need some new scientific explanation for this phenomenon, and stochastic space-time might provide it. The idea that the Planck length cannot be shortened by the Lorentz contraction suggests that it is a fundamental quantum, or unit, of length. As a result, volumes with dimensions smaller than the Planck length arguably don’t exist. The Planck length, then, is a highly likely candidate for the size of a space-time “grain,” the smallest possible piece of space-time.
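As a quick sanity check on the quoted figures (my own back-of-the-envelope sketch, not part of the article), the Planck length can be computed directly from the three constants the excerpt mentions:

```python
# Sketch (mine, not the article's): the Planck length l_P = sqrt(hbar*G/c^3),
# built only from the constants c, G, and h the excerpt refers to.
import math

hbar = 1.054571817e-34  # reduced Planck constant, J*s
G = 6.67430e-11         # gravitational constant, m^3 kg^-1 s^-2
c = 2.99792458e8        # speed of light, m/s

l_planck = math.sqrt(hbar * G / c**3)
print(f"Planck length: {l_planck:.3e} m")          # ~1.616e-35 m

# The article's "hundred million trillion times smaller than a proton":
proton_diameter = 1.7e-15                          # m, approximate
print(f"proton / Planck ratio: {proton_diameter / l_planck:.1e}")  # ~1e20

# A naive Lorentz contraction at v = 0.99c would shrink any ordinary length
# by a factor gamma; the article's point is that l_P arguably cannot shrink.
gamma = 1 / math.sqrt(1 - 0.99**2)
print(f"naively contracted l_P: {l_planck / gamma:.3e} m")
```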



A description of the 2011 paper says:

Einstein’s General Theory of Relativity describes the properties of gravity and assumes that space is a smooth, continuous fabric. Yet quantum theory suggests that space should be grainy at the smallest scales, like sand on a beach.

...

Some theories suggest that the quantum nature of space should manifest itself at the ‘Planck scale’: the minuscule 10⁻³⁵ of a metre, where a millimetre is 10⁻³ m.

However, Integral’s observations are about 10 000 times more accurate than any previous and show that any quantum graininess must be at a level of 10⁻⁴⁸ m or smaller.

“This is a very important result in fundamental physics and will rule out some string theories and quantum loop gravity theories,” says Dr Laurent.


more ...
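To put that bound in perspective (my arithmetic, not ESA's):

```python
# Scale check (mine, not from the ESA description): how far below the
# Planck scale the Integral bound pushes any space-time graininess.
planck_scale = 1e-35    # m, the grain size suggested by some theories
integral_bound = 1e-48  # m, Integral's upper limit on graininess
print(f"{planck_scale / integral_bound:.0e}")  # 1e+13
# Any grain would have to be ~13 orders of magnitude smaller than the
# Planck length -- which is why the result puts pressure on those theories.
```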


February 12, 2012

Surprising that Rosenberg claims to be on board with all that; then claims the macro world is ...

... asymptotically deterministic.

Yet Hoefer's paper, linked from Pigliucci's blog, explicitly claims that we don't even know whether the question of the universe's determinism is decidable, while Rosenberg claims to know that it is asymptotically deterministic at the macro level. And we are not just talking about quantum events with no large effects in the macro world; it may be impossible to determine whether chaotic systems are deterministic or stochastic:


...

The usual idealizing assumptions are made: no friction, perfectly elastic collisions, no outside influences. The ball's trajectory is determined by its initial position and direction of motion. If we imagine a slightly different initial direction, the trajectory will at first be only slightly different. And collisions with the straight walls will not tend to increase very rapidly the difference between trajectories. But collisions with the convex object will have the effect of amplifying the differences. After several collisions with the convex body or bodies, trajectories that started out very close to one another will have become wildly different—SDIC (sensitive dependence on initial conditions - Jim).

In the example of the billiard table, we know that we are starting out with a Newtonian deterministic system—that is how the idealized example is defined. But chaotic dynamical systems come in a great variety of types: discrete and continuous, 2-dimensional, 3-dimensional and higher, particle-based and fluid-flow-based, and so on. Mathematically, we may suppose all of these systems share SDIC. But generally they will also display properties such as unpredictability, non-computability, Kolmogorov-random behavior, and so on—at least when looked at in the right way, or at the right level of detail. This leads to the following epistemic difficulty: if, in nature, we find a type of system that displays some or all of these latter properties, how can we decide which of the following two hypotheses is true?

1. The system is governed by genuinely stochastic, indeterministic laws (or by no laws at all), i.e., its apparent randomness is in fact real randomness.

2. The system is governed by underlying deterministic laws, but is chaotic.

In other words, once one appreciates the varieties of chaotic dynamical systems that exist, mathematically speaking, it starts to look difficult—maybe impossible—for us to ever decide whether apparently random behavior in nature arises from genuine stochasticity, or rather from deterministic chaos. Patrick Suppes (1993, 1996) argues, on the basis of theorems proven by Ornstein (1974 and later) that “There are processes which can equally well be analyzed as deterministic systems of classical mechanics or as indeterministic semi-Markov processes, no matter how many observations are made.” And he concludes that “Deterministic metaphysicians can comfortably hold to their view knowing they cannot be empirically refuted, but so can indeterministic ones as well.” (Suppes (1993), p. 254)

...


I don't see how Rosenberg can be on board with this and then claim that nature is asymptotically deterministic.
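To make SDIC concrete, here is a minimal toy sketch (mine, not from Hoefer's article) using the logistic map, a standard chaotic system; two trajectories that start within 10⁻¹⁰ of each other become effectively independent within a few dozen steps:

```python
# Toy demonstration of SDIC with the logistic map x -> r*x*(1-x), r = 4
# (the fully chaotic regime). Just an illustration, not Hoefer's example.
def logistic(x, r=4.0):
    return r * x * (1.0 - x)

x_a, x_b = 0.4, 0.4 + 1e-10   # two nearly identical initial conditions
for n in range(1, 61):
    x_a, x_b = logistic(x_a), logistic(x_b)
    if n % 10 == 0:
        print(f"step {n:2d}: |x_a - x_b| = {abs(x_a - x_b):.3e}")
# The gap grows roughly exponentially (Lyapunov exponent ln 2 for r = 4)
# until it saturates at the size of the attractor; past that point the two
# trajectories are uncorrelated, and the output is statistically hard to
# distinguish from a suitably chosen stochastic process -- Suppes' point.
```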

And research indicates that animal brains have built-in chaotic subsystems - i.e., we can't know whether they are deterministic or not. For instance, this excerpt from Towards a scientific concept of free will as a biological trait: spontaneous actions and decision-making in invertebrates (http://rspb.royalsocietypublishing.org/content/early/2010/12/14/rspb.2010.2325.full):


...

A corresponding conclusion can be drawn from two earlier studies, which independently found that the temporal structure of the variability in spontaneous turning manoeuvres both in tethered and in free-flying fruitflies could not be explained by random system noise [63,64]. Instead, a nonlinear signature was found, suggesting that fly brains operate at criticality, meaning that they are mathematically unstable, which, in turn, implies an evolved mechanism rendering brains highly susceptible to the smallest differences in initial conditions (i.e. SDIC - Jim) and amplifying them exponentially [63]. Put differently, fly brains have evolved to generate unpredictable turning manoeuvres. The default state also of flies is to behave variably. Ongoing studies are trying to localize the brain circuits giving rise to this nonlinear signature.

Results from studies in walking flies indicate that at least some component of variability in walking activity is under the control of a circuit in the so-called ellipsoid body, deep in the central brain [65]. The authors tested the temporal structure in spontaneous bouts of activity in flies walking back and forth individually in small tubes and found that the power law in their data disappeared if a subset of neurons in the ellipsoid body was experimentally silenced. Analogous experiments have recently been taken up independently by another group and the results are currently being evaluated [66]. The neurons of the ellipsoid body of the fly also exhibit spontaneous activity in live imaging experiments [67], suggesting a default-mode network also might exist in insects.

Even what is often presented to students as ‘the simplest behaviour’, the spinal stretch reflex in vertebrates, contains adaptive variability. Via the cortico-spinal tract, the motor cortex injects variability into this reflex arc, making it variable enough for operant self-learning [68–72]. Jonathan Wolpaw and colleagues can train mice, rats, monkeys and humans to produce reflex magnitudes either larger or smaller than a previously determined baseline precisely because much of the deviations from this baseline are not noise but variability deliberately injected into the reflex. Thus, while invertebrates lead the way in the biological study of behavioural variability, the principles discovered there can be found in vertebrates as well.

One of the common observations of behavioural variability in all animals seems to be that it is not entirely random, yet unpredictable. The principle thought to underlie this observation is nonlinearity. Nonlinear systems are characterized by sensitive dependence on initial conditions. This means such systems can amplify tiny disturbances such that the states of two initially almost identical nonlinear systems can diverge exponentially from each other. Because of this nonlinearity, it does not matter (and it is currently unknown) whether the ‘tiny disturbances’ are objectively random as in quantum randomness or whether they can be attributed to system, or thermal noise. What can be said is that principled, quantum randomness is always some part of the phenomenon, whether it is necessary or not, simply because quantum fluctuations do occur. Other than that it must be a non-zero contribution, there is currently insufficient data to quantify the contribution of such quantum randomness. In effect, such nonlinearity may be imagined as an amplification system in the brain that can either increase or decrease the variability in behaviour by exploiting small, random fluctuations as a source for generating large-scale variability. A general account of such amplification effects had already been formulated as early as in the 1930s [73]. Interestingly, a neuronal amplification process was recently observed directly in the barrel cortex of rodents, opening up the intriguing perspective of a physiological mechanism dedicated to generating neural (and by consequence behavioural) variability [74].

...

February 10, 2012

I can't get to the review that you're referencing.

I can only read the first paragraph of the review without a subscription to TNR, which I'm not interested in getting. That paragraph just contains a bunch of ridiculous-looking questions. So I found another review of the book, and it looks like those questions and answers come directly from the book:

...

It’s a seemingly simple notion, and one that many scientists and scientific-minded people would claim already to hew to, but it has surprisingly fraught implications. Rosenberg lays them out very early in Chapter 1, in a series of questions and answers. “Is there a God? No.’’ “What is the nature of reality? What physics says it is.’’ “What is the purpose of the universe? There is none.’’ Similarly, there’s no meaning to life; you and I are here because of dumb luck, and there’s no soul.

...



While Massimo Pigliucci hasn't yet reviewed this book, he has referred to it numerous times. For example:

Lately I hear the word “determinism” being thrown around like a trump card for all sorts of arguments, most obviously the recent discussions of free will that we have had on this blog. Moreover, as I already mentioned in passing, I am reading a new book by Alex Rosenberg that feels a lot like Dawkins on steroids (if you can imagine that), a huge portion of which is based on the assumption — which the author thinks he can derive from established and certainly unchangeable physics — of, you guessed it, determinism!

I got so sick of the smug attitudes that Rosenberg, Coyne, Harris and others derive from their acceptance of determinism — obviously without having looked much into the issue — that I delved into the topic a bit more in depth myself. As a result, I’ve become agnostic about determinism, and I highly recommend the same position to anyone seriously interested in these topics (as opposed to anyone using his bad understanding of physics and philosophy to score rhetorical points).

A good starting point from which to get a grip on the nuances and complexities of discussions concerning determinism is a very nicely written article by Carl Hoefer in the Stanford Encyclopedia of Philosophy, as well as several of the primary sources cited there, particularly John Earman's Primer on Determinism.

...


I've seen some good reviews of Rosenberg's book; but most of the reviews I've read have panned it.

January 15, 2012

Ronald Dworkin: Religion without God.

Dworkin gave the Einstein Lectures at the University of Bern on December 12, 13, and 14. Videos of his lectures (about 1 hour and 20 minutes each) can be found here:

Here is an excerpt from the abstract of the lectures:


"For most people religion means a belief in a god. But Albert Einstein said that he was both an atheist and a deeply religious man. Millions of ordinary people seem to have the same thought: they say that though they don’t believe in a god they do believe in something “bigger than us.” In these lectures I argue that these claims are not linguistic contradictions, as they are often taken to be, but fundamental insights into what a religion really is.

A religious attitude involves moral and cosmic convictions beyond simply a belief in god: that people have an innate, inescapable responsibility to make something valuable of their lives and that the natural universe is gloriously, mysteriously wonderful. Religious people accept such convictions as matters of faith rather than evidence and as personality-defining creeds that play a pervasive role in their lives.

In these lectures I argue that a belief in god is not only not essential to the religious attitude but is actually irrelevant to that attitude. The existence or non-existence of a god does not even bear on the question of people’s intrinsic ethical responsibility or their glorification of the universe. I do not argue either for or against the existence of a god, but only that a god’s existence can make no difference to the truth of religious values. If a god exists, perhaps he can send people to Heaven or Hell. But he cannot create right answers to moral questions or instill the universe with a glory it would not otherwise have.

How, then, can we defend a religious attitude if we cannot rely on a god? In the first lecture I offer a godless argument that moral and ethical values are objectively real: They do not depend on god, but neither are they just subjective or relative to cultures. They are objective and universal. In the second lecture I concentrate on Einstein’s own religion: his bewitchment by the universe. What kind of beauty might the vast universe be thought to hold – what analogy to more familiar sources of beauty is most suggestive? I propose that the beauty basic physicists really hope to find is the beauty of a powerful, profound mathematical proof. Godly religions insist that though god explains everything his own existence need not be explained because he necessarily exists. Religious atheists like Einstein have, I believe, a parallel faith: that when a unifying theory of everything is found it will be not only simple but, in the way of mathematics, inevitable. They dream of a new kind of necessity: cosmic necessity.

a little bit more ...

December 30, 2011

In your attempt to discuss human limitations you raise 2 spurious issues: fear and religion.

Your post does not justify either issue as pertinent to the consideration of human limitations.

I'm going to consider only the potential limitations on human knowledge. My first consideration is the historical development of the human brain. It developed through evolution via the natural selection of accidental mutations. Specifically, the human brain was selected for because it solved problems in the environment better than brains based on other accidental changes. We have to assume that our brains are a minimal-cost model that can work well with our bodies and solve environmental problems a little bit better than the next-best model. Since evolution did not require that our brain have an unlimited capacity to solve problems, but only a slightly better capacity to solve the problems that were encountered in its environment, I believe that there is an a fortiori case that the human brain has limitations.

Is there any evidence for such limitations? Yes. For example, the human brain has a tendency to categorize knowledge. We divide up problems into subject-matter categories, entities into life and non-life categories, life into plant and animal, and then subdivide these categories. To a certain extent, we see the world in terms of these categories; they have a certain determinative effect on our knowledge. Is there a physical constraint on the categories that we choose? An excerpt from Philosophy in the Flesh by George Lakoff and Mark Johnson (Basic Books, pp. 18-19):

The first and most important thing to realize about categorization is that it is an inescapable consequence of our biological makeup. We are neural beings. Our brains each have 100 billion neurons and 100 trillion synaptic connections. It is common in the brain for information to be passed from one dense ensemble of neurons to another via a relatively sparse set of connections. Whenever this happens, the pattern of activation distributed over the first set of neurons is too great to be represented in a one-to-one manner in the sparse set of connections. Therefore, the sparse set of connections necessarily groups together certain input patterns in mapping them across to the output ensemble. Whenever a neural ensemble provides the same output with different inputs, there is a neural categorization.

To take a concrete example, each human eye has 100 million light-sensing cones, but only about 1 million fibers leading to the brain. Each incoming image therefore must be reduced in complexity by a factor of 100. That is, information in each fiber constitutes a "categorization" of the information from about 100 cells. Neural categorization of this sort exists throughout the brain, up through the highest levels of categories that we can be aware of. (A toy sketch of this many-to-one mapping follows this excerpt - Jim.) When we see trees, we see them as trees, not just as individual objects distinct from one another. The same with rocks, houses, windows, doors, and so on.

A small percentage of our categories have been formed by conscious acts of categorization, but most are formed automatically and unconsciously as a result of functioning in the world. Though we learn new categories regularly, we cannot make massive changes in our category systems through conscious acts of recategorization (though, through experience in the world, our categories are subject to unconscious reshaping and partial change). We do not, and cannot, have full conscious control over how we categorize. Even when we think we are forming new categories, our unconscious categories enter into our choice of possible conscious categories.

Most important, it is not just that our bodies and brains determine that we will categorize; they also determine what kinds of categories we will have and what their structures will be. Think of the properties of the human body that contribute to the peculiarities of our conceptual system. We have eyes and ears, arms and legs that work in certain very definite ways and not in others. We have a visual system, with topographic maps and orientation-sensitive cells, that provides structures for our ability to conceptualize spatial relations. Our abilities to move in the ways we do and to track the motion of other things give motion a major role in our conceptual system. The fact that we have muscles and use them to apply force in certain ways leads to the structure of our system of causal concepts. What is important is not just that we have bodies and that thought is somehow embodied. What is important is that the peculiar nature of our bodies shapes our very possibilities for conceptualization and categorization.
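Here is the toy sketch I promised above (mine, not Lakoff and Johnson's) of the many-to-one mapping in the cones-to-fibers example: once many inputs are collapsed into one output, genuinely different input patterns necessarily land in the same "category":

```python
# Toy illustration (mine, not Lakoff & Johnson's) of many-to-one "neural
# categorization": collapsing ~100 inputs into 1 output, the way a retinal
# fiber carries the combined signal of ~100 cones.
import random

def pooled_output(inputs):
    # One output value for a whole ensemble of input activations.
    return round(sum(inputs) / len(inputs), 6)

random.seed(0)
pattern_a = [random.random() for _ in range(100)]
pattern_b = list(reversed(pattern_a))  # a genuinely different input pattern

# Different inputs, identical output: by the book's definition, the two
# patterns have been grouped into the same category.
print(pooled_output(pattern_a) == pooled_output(pattern_b))  # True
```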

December 20, 2011

That is the conclusion of the argument.

Aquinas' argument is (in part):

Now whatever is in motion is put in motion by another
...
If that by which it is put in motion be itself put in motion, then this also must needs be put in motion by another, and that by another again.
...
But this cannot go on to infinity, because then there would be no first mover, and, consequently, no other mover
...
Therefore it is necessary to arrive at a first mover, put in motion by no other


As to your question:

Are you saying it's a warranted conclusion, and if so, where is your evidence?


Read my post #10:

... The whole argument leads to the conclusion that there is a need for an unmoved mover. You can call the argument invalid, but you can't claim that the conclusion is an entirely unwarranted assumption.

...

This is not to claim that Aquinas' arguments are right. They have been rather famously refuted - for instance, by Kant. Dawkins could have just cited Kant.


As to:

As far as the second, no idea, have no interest in bullshit fields of study such as theology (or parapsychology like a BS posted in another post in this thread).


Your interest is quite beside the point. You asked why Dawkins was attacked. In this instance, his claims were attacked because he claimed:

Even if we allow the dubious luxury of arbitrarily conjuring up a terminator to an infinite regress and giving it a name, simply because we need one, there is absolutely no reason to endow that terminator with any of the properties normally ascribed to God: omnipotence, omniscience, goodness, creativity of design, to say nothing of such human attributes as listening to prayers, forgiving sins and reading innermost thoughts.


But, as I stated in my post, Summa Theologica goes on to derive God's attributes based on the previously given existence arguments. Again, I'm not arguing for the correctness of Aquinas' arguments; just that to claim they are not even there is, to say the least, extreme sloppiness, and, as such, subject to attack.
