Monday, June 19, 2006

What does "gay" mean?

Last night, on Radio 4's Feedback programme, I discovered that in addition to meaning "homosexual", the word "gay" is currently being used to mean "rubbish". The reason this featured on the programme was to discuss whether it was acceptable to use the word like this when it might be considered offensive to homosexuals.

At one stage, "gay" had an innocent meaning: "cheery: bright and pleasant; promoting a feeling of cheer". It then became a word meaning "homosexual". I don't believe anybody was consulted about this redefinition. I agree that if people don't understand the way in which a word is being used, it needs to be explained - jargon is always a barrier to communication - but words are no more owned by the homosexual community - or any other minority - than they are by anybody else.

One of the people interviewed wondered whether words describing other sections of the community had become terms of abuse, and how other groups might react to this. Well, two such words came to mind very quickly. The first is "liberal", which is now generally used to refer to somebody politically to the left of me - though this is low-key enough that few people would regard it as a term of abuse. The second, "fundamentalist", is now almost universally regarded as a term of abuse. It originally meant somebody who aligned themselves with a publication called "The Fundamentals: A Testimony to the Truth", but it is now used to mean "somebody so far politically or religiously to the right of me that there is little point in debating with them".

I would have thought that critical thinking is of greater importance than the way in which language is used. People need to understand that choice of words evokes a particular reaction. Why do sections of the homosexual community object to using the word "gay" to mean "rubbish"? Is it because they want to maintain the association with the meaning "cheery: bright and pleasant"? So is their objection to the new use of the word any less arbitrary than the way in which they choose to use it? Why, for that matter, did "The Brights" choose that particular adjective to describe themselves, rather than (say) "The Dims", "The Electrodes" or "The Porches"?

As Humpty Dumpty said: "When I use a word, it means just what I choose it to mean - neither more nor less."

Should faith be blind?

I have posted before about the different ways in which the word "faith" is used, quoting Francis Schaeffer on this issue at length.

People continue to use a kind of "postmodern" understanding of what faith means to try and say that Christians ought to regard their beliefs as something fundamentally irrational. Here is part of a letter published in Nature:
Your Editorial about the promotion of ID in schools and universities (Nature 434, 1053; 2005) asks us to persuade our students that science and faith do not compete, but for Christians this should always have been clear. In the Bible (John 20: 25−29), Thomas doubts that the man speaking to him is the resurrected Christ until Jesus reveals his wounds. Thomas then believes, but Jesus says: "Blessed are those who have not seen and yet have believed".

The Bible throughout teaches that faith is more valuable when expressed in the absence of evidence. For a Christian, when science is allowed to be neutral on the subject of God, science can only bolster faith.
This section of John's gospel is often used in support of this claim, but those who take these two verses out of context almost completely reverse the sense of what was originally written.

Look at the next two verses:
Jesus did many other miraculous signs in the presence of his disciples, which are not recorded in this book. But these are written that you may believe that Jesus is the Christ, the Son of God, and that by believing you may have life in his name.
Thomas saw and believed, and Jesus says that those who haven't seen and yet have still believed will be blessed. The question is: how are we who haven't seen supposed to know that Jesus is the Christ, the Son of God? This would have been an understandable question long before the postmodern era. Is this supposed to be blind faith? An irrational leap in the dark?

No, says John. Jesus did loads of other things, but the whole point of me writing these things down is so that people who haven't seen can still believe. My eyewitness testimony is reliable, he says.

So faith isn't supposed to be blind - lacking in evidence. Faith is supposed to be based on knowledge that has been supplied by eyewitnesses.

Furthermore, elsewhere in the Bible, writers make it clear that regardless of whether or not a person accepts the validity of these eyewitness accounts, there is still enough evidence for them to know that there is a God. In the Old Testament, it says that "the heavens declare the glory of God"; in the New Testament, Paul the apostle writes in Romans 1 that even though God is invisible, the nature of the universe tells us what he is like. When the Bible talks about faith, it is certainly confidence in something that isn't seen - but that doesn't mean that the sense in the Bible is that faith is a leap in the dark.

The idea of religious belief being something fundamentally irrational derives not from the nature of Christianity, but from the way in which existentialism and postmodernism have invaded all areas of public discourse over the last 200 years.

Thursday, June 15, 2006

Junk DNA and the anti-design hypothesis

The charge against ID is that assuming something has purpose doesn't lead to useful science.

The countercharge (from me) is that assuming something doesn't have purpose doesn't lead to useful science.

I've started a new post because these ideas are very interesting, and deserve more visibility than they would get in comments of an existing post.

Looking at this in the context of non-coding DNA...
To demonstrate [that an anti-teleological approach restricted science], you'd need to show that this approach had caused people to reject valid evidence that certain DNA had function. Otherwise, all that you'd be demonstrating was that lack of function was indeed a valid null hypothesis, which people would happily move away from in the face of evidence.
I don't think so. To demonstrate this, you have to show that having assumed that something had little or no purpose, people turned their faces away from it to areas that looked more promising. Had they assumed that it had purpose, they might have continued their investigations and discovered the purpose at an earlier stage.

I was trying to remember back to Biology of Cells Part I (1986-7). I don't recall ever being taught much about non-coding DNA, other than the fact that there was lots of it - and I'm pretty sure that the phrase "junk DNA" goes back in my mind well over a decade.

I also looked in Alberts et al. ("Molecular Biology of the Cell", 1983) to find out what they said, which was:
Most Chromosomal DNA Does Not Code for Essential Proteins

...Population biologists have estimated just how much of the DNA of higher organisms actually codes for essential proteins (or is involved in the regulation of genes coding for such proteins). In outline the argument runs as follows: Mutation is an accidental process in which randomly selected nucleotides in the DNA sequence are altered at a low but finite rate. Since most such mutations will be deleterious to the organism when they occur in an essential DNA sequence, there is a limit to the number of essential genes that can be stably maintained. It has been estimated from the observed mutation rate that no more than about 1% of the mammalian genome can be involved in regulating or coding for essential proteins....

What does the rest of the DNA do in higher eucaryotic genomes? We have already suggested that some of it may have a purely structural function in the organization of chromatin. In later sections of this chapter we shall discuss more recent evidence on the nature of the noncoding DNA and some other hypotheses about its function. Whatever the answer(s), the data shown in Figure 8-33 make it clear that it is not a great handicap for a higher cell to carry along a great deal of extra DNA, which suggests that there has been little pressure to minimize DNA content to include only the essential regions.
Note "essential" - non-coding DNA is considered "non-essential". And the index, at least, gave no indication as to where any later discussion of other hypotheses could be found. There was little other mention of non-coding DNA throughout the book

Even more significant is Goodenough ("Genetics", 3rd Edition, 1984). Again there is no reference to "junk DNA", but there are two short sections that relate to non-coding DNA. The first comes in the context of considering the chromosome of a virus:
...in visualizing the chromosome of T4 we can think of a given single strand of DNA as having long sequences of bases containing meaningful information, followed by long sequences of bases that do not code directly for protein and presumably code for nothing at all.[p.264, emphasis mine]
The only other index entry relating to "non-coding DNA" points to the section on "'Split Genes' in Eukaryotes" -
the coding region of most eukaryotic structural genes is interrupted by noncoding sectors, called ... introns.
A later section (9.11) talks about the "'Purpose' and Evolution of Intervening Sequences". She writes:
why do eukaryotes "bother" to have introns that they first transcribe and then excise? Several observations are compatible with the theory that introns serve no purpose .... On the other hand, there are indications that at least some introns have functional roles .... Perhaps the most interesting theory is W. Gilbert's proposal that introns might have been important in eukaryotic evolution.... the Gilbert proposal suggests that most present-day introns may be evolutionary relics[her emphasis], once important for creating new eukaryotic genes and persistent simply because cells have no way to get rid of them
As far as I can tell from a quick study, this is all the detail that Goodenough goes into regarding non-coding DNA. There is no obvious reference to the fact that the majority of DNA is "non-coding" and had no known function at the time.

So to conclude, I can't offer evidence on the basis of the information I have to hand that an anti-design hypothesis inhibited research. However, I have demonstrated that this non-coding DNA, despite its significance within a cell, was basically ignored at an undergraduate level as late as the mid-1980s. At best, this suggests that it was considered to have no biological significance: it was "not essential"; non-coding DNA was an "evolutionary relic". What would motivate people to investigate the significance of something when mainstream textbooks had dismissed it in those terms?

Wednesday, June 14, 2006

Another thing about ID ...

In a comment on an earlier post, Corkscrew said:
If the following process were to be completed, I would consider it strong evidence of design:

1) Find an interesting phenomenon
2) Posit a design hypothesis for it (could include: the Designer's identity, their methodology, their tools etc)
3) Draw concrete, non-obvious*, testable predictions from that premise
4) Test those predictions
5) Discover that the predictions are accurate
6) Repeat a few times with more predictions to ensure it wasn't fluke
7) Submit results to peer-review and fail to have any daft mistakes in methodology pointed out

You don't even really need to show that design happened; you just have to show that it's a scientifically useful concept. If you manage to pull this off, you'll probably get a Nobel, as well as seriously undermining the philosophical argument for atheism.
This is a helpful comment. However, what is interesting is that, over the last few decades, the counter-position (if such a concept exists) has shown itself to be pretty useless in scientific terms. How does this run?
1) Find an interesting phenomenon
2) Posit that it has no design function
3) Draw concrete, non-obvious, testable predictions from this premise
4) Test those predictions
5) Discover that the predictions are accurate
....
Think of some of the interesting phenomena that this has been applied to. Vestigial organs - the human coccyx is a vestigial tail, the appendix a vestige from earlier in evolution. Junk DNA - which turns out to have a great deal of significance, and isn't junk at all. Most recently, perhaps, the presence of lactic acid in muscles. The anti-telic approach has proved itself to inhibit science. Perhaps it would be sensible, in the interests of science, if the contrary position were assumed as a starting point ....

In fact, given the challenges that are made to ID, could opponents of ID suggest any occasions when an assumption that something has no design significance has produced any useful science?

Tuesday, June 13, 2006

All quiet on the ID front?

Having established a reputation and a readership on this blog through talking about ID, I have said almost nothing about it for some time. Perhaps any readers still around think I have lost interest. Well, I haven't – though I have come to the conclusion that the debate here wasn't necessarily achieving anything, since the fundamental disagreement was evidently over presuppositions. If people weren't prepared to accept that their presuppositions had an impact on their interpretation of evidence, then there was little I could hope to achieve in terms of moving the debate on - that is, on to the validity or otherwise of those presuppositions.

However ...

1) I still believe that “design is detectable”, even if it hasn't been rigorously detected yet. Despite the keenness amongst its opponents to refute ID as a concept, I think that the information content in biological systems is significant; I think that irreducible complexity hasn't been shown to be a flawed concept; I think that the link between habitability and observability (Privileged Planet) is significant. I also think that there are other areas which can be explored, though I don't know how yet – I'm not sure that evolutionary processes are sufficient to allow the development of language that can deal with abstraction, for example. Also, one idea that has been teasing around the back of my mind recently – evolution is a story of gradual process, and yet humans have been so strongly selected for that, with an evolutionary history of 3.5 billion years behind them, they have come to completely dominate the entire planet, and to comprehend pretty much the entire universe, in the course of around four thousand years. If this is a consequence of evolution, then frankly its predictive power is negligible. We are told that nothing in biology makes sense except in the light of evolution – but since it seems that evolution couldn't foresee the biological event that has transformed the entire planet in a geological instant, it seems reasonable to argue that evolution has little of use to tell us about humanity.

2) I still believe that much of the opposition to ID is lazy, complacent, arrogant and uninformed. In the sidebar are links to papers that I have critiqued in the past for various of these reasons. For example, I believe that Lenski et al. have failed to show that irreducible complexity of the level seen in biological systems can evolve, despite the claims of their paper in Nature to the contrary. Monton's paper that claimed to discredit Dembski's Universal Probability Bound looks as though it is fundamentally flawed. And the evolution of such features as antibiotic resistance and antifreeze glycoproteins in fish are instances of microevolution rather than macroevolution. Beyond this, the fact that opponents of ID are prepared to resort to judicial processes, and to sing the praises of Judge Jones for his clear insight into the issues, rather than refute ID successfully on scientific grounds, is very telling. (A judge opposes ID? I mean really, so what?! See also my initial remarks about presuppositions above.) Furthermore, the charge that the opposition is lazy, complacent, arrogant and uninformed is only reinforced by the amount of promotion and support these flawed attempts to demonstrate the invalidity of ID receive in the anti-ID community.

3) I believe that this is fundamentally a religious/philosophical debate. Not in the sense that most ID opponents mean this – that is, that ID proponents are only ID proponents because they want to establish a theocracy, or whatever. Rather, if you reject the idea of external agency, then the whole idea of ID will not be acceptable; if you accept the possibility of external agency, then ID is plausible. The significant things are your religious/philosophical presuppositions. I know that the case of the philosopher Flew is only one example, and it probably bears more weight than it deserves, but the significant thing here is at the level of presuppositions - not a conversion to ID, but the shift from atheism to some form of external agencyism.

I still hope to return to the model that I started working on at some stage. In the meantime, I have been interested to find out about another attempt to model evolutionary processes, carried out by Henrik Jeldtoft Jensen and others in the Department of Mathematics at Imperial College, London. Their model is called “Tangled Nature”. It looks at large numbers of digital interacting “species” evolving over hundreds of thousands of generations, and they have used it to reproduce various patterns that can be seen in evolutionary history – explosions of new species, for example, and patterns of extinctions. It looks computationally heavy, but interesting nonetheless. Quite a few of these papers are available on arxiv.org – search for “Henrik Jensen” in the maths and quantitative biology sections for more details.
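Out of curiosity about what the bare bones of such a model might look like, I have sketched a toy version below. To be clear, this is not the Jensen group's code: the bit-string genomes, the fixed random interaction matrix, the sigmoid reproduction probability and all of the parameter values are my own guesses at a minimal Tangled-Nature-style setup, kept deliberately small.

# A toy sketch in the spirit of a Tangled-Nature-style model. Details and
# parameter values are my own illustrative choices; see the papers by Jensen
# and co-workers on arxiv.org for the actual model definition.
import math
import random
from collections import Counter

L = 10          # genome length in bits (illustrative)
THETA = 0.25    # fraction of genotype pairs that interact at all (assumed)
K = 40.0        # scales the strength of inter-species interactions (assumed)
MU = 0.01       # resource limitation; caps the total population (assumed)
P_KILL = 0.2    # per-step death probability (assumed)
P_MUT = 0.01    # per-bit mutation probability on reproduction (assumed)

random.seed(1)
couplings = {}  # fixed random interaction strengths, generated lazily

def coupling(a, b):
    """Fixed random interaction strength J(a, b) between two genotypes."""
    if a == b:
        return 0.0
    if (a, b) not in couplings:
        couplings[(a, b)] = random.uniform(-1.0, 1.0) if random.random() < THETA else 0.0
    return couplings[(a, b)]

def weight(genotype, counts, total):
    """How well a genotype does, given who else is currently alive."""
    support = sum(coupling(genotype, other) * n for other, n in counts.items())
    return (K / total) * support - MU * total

def step(population):
    """One asynchronous update: a possible birth, then a possible death."""
    counts = Counter(population)
    total = len(population)

    # Reproduction: a random individual copies itself, with probability given
    # by a sigmoid of its current weight; each bit of the copy may mutate.
    parent = random.choice(population)
    if random.random() < 1.0 / (1.0 + math.exp(-weight(parent, counts, total))):
        child = tuple(bit ^ (random.random() < P_MUT) for bit in parent)
        population.append(child)

    # Death: a random individual dies with a fixed probability.
    if len(population) > 1 and random.random() < P_KILL:
        population.pop(random.randrange(len(population)))

# Start from a single genotype and watch new "species" drift in over time.
population = [tuple([0] * L)] * 50
for t in range(50_000):
    step(population)
    if t % 10_000 == 0:
        print(f"step {t}: population {len(population)}, distinct genotypes {len(set(population))}")

Run it and the population settles around a carrying capacity set by MU, with mutation steadily introducing new genotypes; anything more interesting - the long quiet periods punctuated by bursts of extinction and speciation that the papers describe - needs the full model and much longer runs.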

Thursday, June 08, 2006

Looking forward to the World Cup?

The social side of the England games will be fun - I think I'm around for at least two of the first (haha!) three, and people in the church are getting together for a shared footie experience plus food .... Then there are the shouts and groans audible through open windows when the matches are on. But at the end of the day, it's only a game. Of 90 minutes. With one winner and one loser. I don't suppose I'll be that over the moon if England win, nor as sick as a parrot if they don't win.

My predictions. England will go into the first match against Paraguay on Saturday vastly overconfident, and lose it 0-1, with the goal coming late in the second half. They will beat Trinidad and Tobago 2-1, having gone one goal down. They will win their third match against Sweden through a disputed penalty. They will qualify for the next stage on goal difference. Neither Beckham nor Rooney will score, and both will be ineligible to play in the first knockout match because of yellow cards.

They will then go out in the first knockout stage on penalties, having drawn 0-0.

People talk about favourable draws for the competition, and in theory England ought to qualify from their group. But to put it in perspective: at the time the draw was made, somebody pointed out that it should, in actual fact, be irrelevant. If a team isn't able to beat any given opponent on any given day, then they shouldn't be the World Cup holders.

Bah, humbug.

Thursday, June 01, 2006

"The Wild"

Madagascar (New York Zoo, animals adapting to a wild environment) meets Finding Nemo (dad looking for son but unable to cope with the wild himself; son coming of age), with a bit of Ice Age 2 (Koala rather than weasel being idolised by Wildebeests rather than ... er, what was it? Dodos?) thrown in for good measure. Mostly harmless. Children will like it. Adults will have seen it all before. That's about it.

It's really uncanny how some of these film studios seem to be mind-locked together. Finding Nemo versus Shark Tale, Madagascar versus The Wild. What gives?

Best line - whilst stalling for time - "Er, what about the ... Party Hats ... of death?"

Most disturbing subplot - a squirrel infatuated with a giraffe. Sufficient grounds on its own to close down every zoo in the world.