Saturday, December 30, 2006

Alvin Plantinga says ...

Among our chief intellectual obligations is that of believing a proposition that is not certain (i.e., either self-evident or incorrigible) only on the evidential basis of propositions that are certain. What I argued, in essence, is that from this point of view belief in other minds and belief in God are on an epistemological par. In neither case are there cogent arguments of the sort required; hence if the absence of such arguments in the theistic case demonstrates irrationality, the same goes for belief in other minds. If you flout epistemic duty in accepting one, then you flout it just as surely in accepting the other; hence if the former [belief in God] is irrational, so is the latter [belief in other minds beyond mine]. But clearly the latter isn't irrational; this version of the evidentialist objection to theistic belief, therefore is a failure.

Alvin Plantinga - "God and Other Minds", Preface to 1990 edition (emphasis mine)
Which is pretty much what I was saying. The difference is that Alvin Plantinga backs this up with 260 pages of substantial logic and philosophy of religion.

Friday, December 29, 2006

Intelligent Design and "12 Angry Men"

“Inherit the Wind” is a famous film made at the end of the 1950s, supposedly about the Scopes Monkey Trial. In actual fact, it was really about McCarthyism rather than evolution, and it takes substantial liberties with the history of the Scopes Trial.

But that's not important right now. A film of the same era, “12 Angry Men” (1957), gives us more insight into the debate over Intelligent Design, creationism and darwinism. In this film, a jury gather in a jury room to consider their verdict after a trial. The evidence seems overwhelming that the defendant should be found guilty – and just one of the jurors dissents from this. And yet, by the end of the film, the defendant is unanimously acquitted.

What does this have to do with Intelligent Design?

The Discovery Institute have co-ordinated a list of scientists – currently numbering over 600 – who are prepared to say: “We are skeptical of claims for the ability of random mutation and natural selection to account for the complexity of life. Careful examination of the evidence for Darwinian theory should be encouraged.”

Opponents of ID are scathing. “That's nothing,” they say. “Why, we have nearly 800 scientists on our list supporting evolution. And we've restricted our list just to those people called Steve, or something derived from Steve.”

But this misses the point that ought to be obvious to anybody who has watched “12 Angry Men”. At issue isn't the number of dissenters, or the size of the majority. It's the fact that dissent exists, and won't go away. (In actual fact, of course, even if dissent doesn't exist, this has no bearing on the truth or otherwise of a proposition. It is of course possible for something to be false and believed in by 100% of people. This bears ultimately on both sides of the debate.) When Einstein first suggested relativity, how many people really dissented from the Newtonian understanding of the universe? There were one or two loose ends, but I don't think there was anything that would really have shown that it needed a fundamental rewrite. The fact that a relatively small number of scientists have “issues” with darwinism doesn't mean their dissent is irrelevant.

In the early 1980s, as a teenager, I was unhappy with what darwinism did to biblical interpretation, but the most substantial case against it I could find was Sylvia Baker's creationist pamphlet, “Bones of Contention” - a booklet that doesn't look terribly substantial as a challenge these days. I assumed that the case for darwinism would get stronger as time went on – they just needed to tie up the loose ends. But instead, the challenges have got stronger, and more scientifically substantial. Rather than the detail of observations showing that darwinian mechanisms “obviously” work, the observations suggest either that darwinian mechanisms are far more powerful than Darwin ever conceived, or that there is a designer. In all sorts of areas, holes have appeared in the naturalistic understanding of the universe, when a big-picture overview previously would have suggested that the show was all over except for the final curtain.

Those 600+ scientists – and the others who haven't appended their names, and those of us who aren't doctors but understand enough about science to be intelligently informed about the debate, and the general population who simply don't accept the modernist, reductionist analysis of what it means to be human – need to be convinced of the case for darwinism: not by being bullied into silence, or told that there aren't many of us, or ridiculed, or criminalised, but by the scientific case being made. We speak the same language. We can read scientific papers. We can interpret evidence.

Bring it on!

Thursday, December 28, 2006

Sovereignty and free will

Dunno if I've blogged this before, but it bears repeating.

How can God be sovereign, but humans still have free will?

Look at it this way. Conceptually, I could write a computer program that carries out an action on the basis of a random number. Then I could set the seed of the random number generator. Which means I can know (as the author of the program) exactly what random number will appear, and hence how that program is going to behave. But from the perspective of the program, it was coded to cover any random value that appears at that point. There is a difference between my knowledge (as programmer) and the program's "knowledge". The program is written in such a way that it is able to deal with any value that pops up, even though I know as the programmer what value it will get and hence how it behaves.
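To put the analogy in code - a minimal sketch in Python, my own illustration rather than anything from the original argument - the function below is written to cope with whatever value comes up, while only the person who chose the seed knows which branch will actually run:

```python
import random

def program_action(rng: random.Random) -> str:
    """The 'program': written to cope with whatever value comes up."""
    value = rng.randint(1, 6)   # from the program's side, any of 1-6 is possible
    if value <= 3:
        return "turn left"
    return "turn right"

# The 'programmer' sets the seed, and so knows in advance exactly which
# value will appear - and therefore exactly what the program will do.
rng = random.Random(42)         # the seed, chosen by the author
print(program_action(rng))      # deterministic from the author's point of view
```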

Now I believe that God is sovereign. From his perspective, he knows what I am going to do - indeed, he has ordained it. But as a human, I have absolutely no knowledge of what he has ordained - and I am acting as far as I can tell perfectly freely. Furthermore, I have the potential as a human being to act perfectly freely - I am not a robot constrained to behave in a particular way.

People argue that if I believe God is sovereign, then I weaken human freedom. But since my knowledge of the fact that God is sovereign tells me nothing about what God's plan is, nor does it constrain my behaviour in any way, I don't see that it is wrong to describe me as being free, and also responsible.

I can think of all sorts of illustrations that would relate to this. The disciples all knew that somebody was going to betray Jesus - but Judas still did it. The Bible said that the Servant would be cut off from the land of the living - and it still happened.

On a more domestic scale, I might forbid a small child to do something - in the fairly sure knowledge that they will not listen to my prohibition. I "sovereignly" know that the child will disobey my prohibition - but that makes the child no less responsible when he or she does what I have asked him or her not to do.

Update: I thought I'd posted something like this before. See here - a more substantial post.

"Blue on Blue", Leigh Nash

I got this for Christmas, and I am a little disappointed.

Leigh's voice (Sixpence None the Richer) has lost none of its gorgeousness. It's a well produced, musically pleasant CD. You could put it on and nobody - not even great-grandma - would be in the least offended by any of the songs on it. But frankly, it's too nice. Too many of the songs are about the fact that Leigh is now happily married and deeply in love. Wonderful. But the poetry and struggles that were conveyed in the songs Matt Slocum wrote for Sixpence are just not there.

I see Windows Media Player classes it as "Religious". Integrity Music (I think!) also slapped a sticker on the front of the CD bought in the UK that took some soaking off. I suspect that this is a marketing ploy, because unless I'm missing something, the strongest religious content here is saying that being with somebody is "my idea of heaven", or that the somebody is "my angel tonight". Or "In heaven, love is everywhere." Oh please, Leigh. Perhaps the songs are really Christian devotion taking pop as its idiom - but when they become indistinguishable from saccharine pop, they have lost too much of their edge.

Some of the songs are a step above. "Nervous in the Light of Dawn" (for which Slocum had a writing credit) and "Ocean Size Love" are the two I guess I like most of all. The words of "Just a Little" have good poetry about them.

But I bought this on the back of having worked my way through the Sixpence catalogue. I really don't think that people would start here and be inspired to go back and find out what else Leigh had done. If you think you might want to buy this, then please listen first to the outstanding "Divine Discontent" or "This Beautiful Mess".

Tuesday, December 26, 2006

Epistemology - the alternative

Part 1, Part 2

Now, what becomes obvious, despite the current popularity of street-level postmodernism, is that we do have a sense that there is such a thing as objective truth. Postmodern writers, despite what they say, write assuming that they are able to convey something real about the nature of the universe – even when what they write is supposedly denying this. And the assumption of empiricism is that there is such a thing as objective truth, even if modernism doesn't provide us with an adequate means of knowing what the objective truth is.

So what we really need is another epistemological foundation – one that doesn't throw away the idea of objective truth (as postmodernism does), but also one that doesn't ground the idea of objective truth in the first instance in the observer (as modernism does), which, as we have seen, is also a dead end in epistemological terms.

The universe didn't start with the arrival of Descartes, and as I hinted before, the epistemological basis of the first scientists wasn't that of the modernists. Modern science actually arose, by and large, in post-Reformation Europe. The Reformation took place at around the same time as the Renaissance, but whereas the Renaissance was a humanist movement that looked back to the classical era, the Reformation was fundamentally a Christian movement, that was shaped by people with a high regard for the Bible. The epistemology of the Reformation was widely shared in post-Reformation Europe, even though the man in the street would perhaps have been no more able to explain it than the man on the Clapham Omnibus can today explain postmodernism, despite the fact that he has an aversion to the idea of absolute truth.

So what was this epistemology? It was based on the Bible, and it is founded upon the idea of a God who had created the universe. There is a gap between this creator God and humanity, caused by the rejection of the authority of God by humanity. This means that the creator God is now hidden from humanity, and the universe is “messed up”. But God wanted to act to restore the proper relationship between himself and humans, and has acted to do so.

God has made himself known to humans, firstly in general revelation (the idea that when we look at the universe, or ourselves, there are signs that there is a god), and secondly in special revelation (the Bible gives us a much fuller account of what God is like, in particular in the person of Jesus, and these are consistent with what we see in the world).

The picture we are given about God in the Bible is that (from the point of view of a working epistemology) he is eternally consistent and he is the creator of the universe. Humans were created within the universe to know God, and he is seeking to make himself known to us.

So what, in short terms, is the nature of this epistemological foundation? It is that we can know truth because there is an external absolute who created us to know it. We are unable to know more than subjectively, but we can be confident that there is a correspondence between our subjective knowledge and objective truth about the universe because there is an objective knower who is making it known to us.

What are some of the effects of this?
  • Our instinct that there is such a thing as objective truth would be well-founded – it isn't simply a by-product of a chemical reaction. Nor is it simply a “good working hypothesis”, with no necessary basis in anything that is real.
  • No human viewpoint is artificially “privileged” - everybody has access to both “general revelation” through which God makes himself known and now also “special revelation” in which God gives more detail about what he has done to deal with the gap between himself and humanity.
  • The gathering of knowledge about the universe (“science”) is a reasonable pursuit – if God is eternal and consistent, we can work on the basis that all else being equal, the universe will behave in a consistent and predictable way. So naturalism is a sensible methodological approach within this epistemological framework. Further, perhaps one could argue that the more knowledge that we gather about the universe, the more clearly the God who created it will be revealed.
  • On the other hand, since God as creator is free (in the same way that he was free to create, or not create) we have to allow the possibility that not everything about his actions will be comprehensible. His consistency doesn't mean that he is bound by the laws of the universe (which after all, he would have made!), but it does mean that he is bound by the laws of his own nature. What are the limits of this? Perhaps one could argue that naturalistic consistency would apply unless this would override God's purposes to make himself known – his self-revelation.
  • Due to the difference in nature of the creator and the created, we wouldn't expect to know exhaustively about what God is like – but that doesn't prevent us from truly knowing about God.

It can hardly come as a surprise to regular readers of this blog that this is very much my epistemological foundation. I see big flaws in the alternatives, all of which are dealt with in this reformed epistemology.

    In future posts, I'd like to try and show how different epistemological worldviews shape other aspects of people's philosophy.

    Epistemology - postmodernism

    Part 1

    Another possible foundation has been proposed. This foundation is that there is no such thing as objective truth – that all that is possible is to know that something is subjectively true “for me”. This is the postmodern epistemological foundation.

The problem with this foundation is that it doesn't even hold itself up. We can't come to the conclusion that all truth is subjective unless we have some way of establishing that something is objectively true – the statement that all truth is subjective is itself a statement that makes objective claims about the nature of truth. This problem is written right across the entire output of postmodern philosophy. We are told by postmodern writers (for example, Derrida) that the interpretation of a text depends not upon the author's intent, but upon the reader. This is founded upon the postmodern epistemology, that truth is subjective. But if the same process is applied to the text of the postmodern writer, then it is obvious that his or her writing has no objective force at all.

    For people who haven't come across “high postmodernism”, this form of explicit statement about the nature of truth may be something of a surprise. However, you don't have to look very far into our culture to see things like “feminist interpretations of Shakespeare” (with the implicit assumption that Shakespeare's interpretation of Shakespeare doesn't exist, and that this interpretation is at least as good as any other). Also, we now have a culturally instinctive opposition to any statement that sounds like a declaration of absolute truth. For example, I remember a conversation I had once. The person I was talking to said something along the lines of: “I don't smoke, but I think people have the right to do what they choose.” I said: “I think smoking is a stupid, dangerous habit, and the only reason it is allowed to continue is because the government makes so much money from it.” “Well, we're all entitled to our opinions,” was the prickly response – in the sense, presumably, that whilst I might consider smoking to be stupid and dangerous, people were quite at liberty to believe something else if they chose to – whether smoking was stupid and dangerous was a matter of subjective truth. So at the level of normal people, the postmodern idea that truth is entirely subjective is increasingly taking hold.

    Epistemology - modernism

Epistemology is the branch of philosophy concerned with how we can know. It represents a foundation for our knowledge. We need to know how we can know before we can draw meaningful conclusions about what we know. Questions of epistemology are deeply fundamental – even more fundamental than questions like whether or not there is a god. The nature of our epistemology will have an impact on the conclusions that we draw about the nature of the universe. But epistemological questions are so fundamental – so low-level – that for the most part, they remain below the level of our consciousness. In this series of posts, I want to talk about what I understand of epistemological questions, where I am coming from, and how I justify my beliefs.

A possible epistemological foundation is the idea that an observer is able to objectively apprehend the nature of the universe around him or her. We can know things because when we look at the universe around us, we see the universe as it truly is. This is the foundation of modernism and naturalism – it continues to underlie the work of most scientists. Its philosophical roots were in the Renaissance – in, for example, Leonardo da Vinci looking as an observer for universals (incidentally, he failed – he was left only with mechanics) – and it reached its height in the Enlightenment.

But there are various problems with it. One is that it is not possible to determine whether an observer's view of the universe is objective. I don't know whether what I perceive of the universe is really what the universe is like. Hence the reference to Descartes in earlier posts - “I think therefore I am” allows me to conclude that I am truly conscious. However, it doesn't allow me to be confident of anything “outside” me – it gives me no confidence that what I perceive has any objective reality. In philosophical terms this foundation was beginning to look shaky before Darwin appeared. Various things have acted to discredit it. For example, Gödel's incompleteness theorem showed that there are true propositions within mathematics that cannot be proved within a given formal system – so that maths isn't an “absolute”, or an “objective truth”. The neodarwinian synthesis – a concept built up from this foundation – suggests that our consciousness is no more than a side effect of chemical processes. As such, we have no reason to believe that there should be any correlation between what our consciousness perceives and the nature of the universe. The fact that data can be interpreted in different ways also takes away confidence in this understanding of the universe.

    It would be wrong to suggest that this foundation was useless. It wasn't the foundation of knowledge that modern science was originally built on (I'll come to that), but “modernist” scientists continue to dominate science – and they have achieved a great deal in helping us to understand the universe, even though their epistemological foundations don't allow them confidence that the universe exists!

One might argue: “Well, if this foundation is so helpful in allowing us to understand and explain the universe, then why not just run with it?” I would suggest that it's all about the difference between a mathematical observation (say, that the square on the hypotenuse is equal to the sum of the squares on the other two sides) and a mathematical proof of that observation. We may observe that we seem to be able to objectively apprehend the rest of the universe – but if we are unable to prove why that is the case, then have we missed the point somewhere? Many people simply work on the basis that this sort of knowledge is impossible, but they assume that their analysis of the universe is correct – I think this is what “empiricism” is. But without knowing that we have a sound epistemology, we are left wondering: will the universe continue to behave in this way? Do I have any way of knowing whether my observations of the universe will still be correct tomorrow? Do I have any way of knowing that my observations of the universe are shared by anybody else? Do I have any way of refuting an observation that somebody else claims to have made about the universe?

    Monday, December 25, 2006

    A happy Christmas ...

    ... to my readers. Both of them.

    This was the carol that moved me to tears this morning.
    Holy child, how still you lie!
    Safe the manger, soft the hay;
    Faint upon the eastern sky
    Breaks the dawn of Christmas Day.

    Holy child, whose birthday brings
    Shepherds from their fields and fold,
    Angel-choirs and eastern kings,
    Myrrh and frankincense and gold:

    Holy child, what gift of grace
    From the Father freely willed!
    In your infant form we trace
    All God’s promises fulfilled.

    Holy child, whose human years
    Span like ours delight and pain;
    One in human joys and tears,
    One in all but sin and stain:

    Holy child, so far from home,
    All the lost to seek and save,
    To what dreadful death you come,
    To what dark and silent grave!

    Holy child, before whose name
    Powers of darkness faint and fall;
    Conquered death and sin and shame
    Jesus Christ is Lord of all!

    Holy child, how still you lie!
    Safe the manger, soft the hay;
    Clear upon the eastern sky
    Breaks the dawn of Christmas Day.

    T. Dudley-Smith

    Sunday, December 24, 2006

    What Humphrys found out about God ...

    ... can be found here, in a Christmas special from the Daily Telegraph. My comments on the first part of the series of three interviews "In Search of God" can be found here. Unsurprisingly, he reports that few things he has done have resulted in a bigger postbag - from atheists, agnostics and theists.

    He says in the article:
    The last time [I set foot inside a church] was for my much-mourned colleague Nick Clarke [another well-known and loved Radio 4 journalist]. St Mary Abbots in Kensington, west London, was packed with the great and the good and, more importantly, with Nick's friends. The choir was superb and the eulogies perfectly judged. The problem was the vicar, Gillean Craig.

    That may sound a little harsh. He is, I'm sure, a thoroughly godly man doing a good job of running his magnificent church. But in the opening moments of the service Father Craig (as he likes to be known) struck a horribly discordant note. Here's what he said: "Terrible though it is to us, God grants the same freedom to cancer cells that he grants even to the most noble and virtuous of us."
    This was a pretty silly thing for the vicar to say - it was poor science, and poor theology. What I'd like to do, if I have the time, is to write a series of posts on the section of the Bible called Job, which is all about why bad things happen to good people. And no, there isn't a nice tidy answer, but it hopefully starts to unravel aspects of the issue.

    Without going into that now, or going any further into Humphrys' article, it struck me that it was already clear that he was missing the point.

From a Christian perspective, the presence of the great and the good, the location, the choir are all irrelevant. The role of a vicar isn't "running his magnificent church" - by which Humphrys evidently means the building. And there is not much more reason to think that you will get coherent theology out of any vicar than you would out of the man on the Clapham Omnibus.

Humphrys spoke in his radio series to the Archbishop of Canterbury, Rowan Williams, as I described before. Again, his expectation was doubtless that Williams would present an authoritative answer on behalf of Christianity. But his trust in the established authorities is misguided. Although it is doubtless surprising for people who continue to rely on the local parish church for a few helpful words when people are hatched, matched or dispatched, the fact is that with the impact of modernism and postmodernism on the Anglican seminaries, the diversity of "theology" within the Church of England is immense. I suspect that different sections of the Anglican communion have significant mistrust for each other - and those different sections are all represented within the Church of England. For example, I did three years of theology study by correspondence with this organisation. It is well-regarded by evangelical churches of many denominations in the UK, and has a close association with St. Helens Bishopsgate. The fact that I have done this study means nothing to my local anglican vicar, who is much more interested in Churches Together, an ecumenical organisation.

    Now the Archbishop doesn't represent an "average", any more than my local vicar, the one at Nick Clarke's funeral, or anyone else within anglicanism represents a "typical" anglican position. So you can't really expect to learn anything meaningful from what they say. You may do. But you can't expect to. I can think of various people who Humphrys would have been better served talking to or listening to - not to get a "more muscular version" of Christianity, but just one that presented what I would understand to be a more intellectually and spiritually coherent one. John Blanchard. Don Carson. Philip Yancey. Sinclair Ferguson. Josh McDowell. Alvin Plantinga.

We also heard an account of a funeral this week. It was of a two-year-old girl we knew a little about (not the one I wrote about here - she continues to respond well to treatment, and is still in remission) who had been suffering from leukaemia with other complications. Somebody who attended described the funeral as intensely emotional - both sad and joyful. He said the person reading the Bible read as well as he had ever heard - and being involved in full-time Christian work, he has heard the Bible read on many, many occasions. The message, too, was helpful, and was as good from the speaker as any he had heard.

    The pain of losing the youngest child in a family is no less for evangelical Christians - especially when it has been such a long and wearing battle. But at this funeral there was more than resignation, resentment, and rejection of the idea of God.

    We all got the chance to hear about the shooting in the Amish community in Pennsylvania. How many of us also followed the news for long enough to hear about the healing and restoration that followed, I wonder?

    It is too simplistic to say that this sort of thing rules out the idea of God - because there are too many accounts of this sort of thing where the people involved would say that God is at work amongst them - even if they don't understand how or why.

    Wednesday, December 20, 2006

    Blasphemy against the Holy Spirit

    What is it?

    We find out in Mark chapter 3:22-30.
    And the teachers of the law who came down from Jerusalem said, "He is possessed by Beelzebub! By the prince of demons he is driving out demons."

    So Jesus called them and spoke to them in parables: "How can Satan drive out Satan? If a kingdom is divided against itself, that kingdom cannot stand. If a house is divided against itself, that house cannot stand. And if Satan opposes himself and is divided, he cannot stand; his end has come. In fact, no one can enter a strong man's house and carry off his possessions unless he first ties up the strong man. Then he can rob his house. I tell you the truth, all the sins and blasphemies of men will be forgiven them. But whoever blasphemes against the Holy Spirit will never be forgiven; he is guilty of an eternal sin."

    He said this because they were saying, "He has an evil spirit."(NIV)

    So Jesus talks about people "blaspheming against the Holy Spirit" in the context of the teachers of the law saying that things done in the power of God were actually done in the power of Satan. They should have been able to recognise the power of God to work for good - but they deliberately chose to interpret it as satanic power. Blasphemy against the Holy Spirit isn't something you can do by mistake. It's not something you can casually do. It is about being close enough to God to see him working - but then interpreting his work as being evil rather than good.

    The closest comparable situation I've come across in my own experience is somebody who is convinced that the assurance some other people have of their salvation (which the Bible says comes from God) was actually satanic - when the evidence in those people's lives was also clearly that God was at work in them.

    I mention this having come across this website. Some thoughts.

Firstly, you can't decide anybody's eternal destiny, not even your own. Only God can do that. Whilst taking part in the activities described on the website hardly constitutes a healthy spiritual discipline, I don't believe they have "binding value" as far as God is concerned.

    Secondly, I doubt that there are many people sending in videos who have been in a position to see what God's work is like - certainly not in the same clear, unambiguous way that the teachers of the law had when they said that Jesus was working in the power of Beelzebub. I suspect that you can't blaspheme against the Holy Spirit unless you know what the Holy Spirit's work is like, and I doubt that most of these people do.

    Finally, that notwithstanding, provoking people to damn themselves is very serious. Not for the people provoked, but for the provokers.
Jesus said to his disciples: "Things that cause people to sin are bound to come, but woe to that person through whom they come. It would be better for him to be thrown into the sea with a millstone tied around his neck than for him to cause one of these little ones to sin. So watch yourselves." Luke 17:1-3 (NIV)
    The people who put together the website are in greater need of prayer than the people who send in a video.

    I'm slightly surprised that Pascal's Wager doesn't have more impact. I suppose that shows how much confidence people have that there is no God. Which is odd, given the large proportion of people who think that there is a God.

    (I feel honour-bound to point out that David Heddle has a post with the same title, that makes similar points - though almost certainly more thoughtfully and accurately. However, since most of my hits at the moment are coming through the link to my post that he included in his, this is perhaps hardly necessary!)

    A little old, now ...

    ... but this raised a smile.

    ID in New Scientist again

    New Scientist magazine has covered Intelligent Design before - I'm pretty sure that I've just recycled the issue that had this story in.

A new story published in their most recent edition is more careful. ID is still fundamentally presented as a branch of creationism, but there seems to be a more positive reaction to the research that the Biologic Institute aims to work on, which includes "examining the origin of metabolic pathways in bacteria, the evolution of gene order in bacteria, and the evolution of protein folds", and "a programme in computational biology".

    The reporter, the infamous Celeste Biever, mentions several times that many of the people there were "reluctant to speak with a New Scientist reporter." Hmm. I wonder why that might be?

    "Stop Making Sense"

    I rewatched the “Stop Making Sense” DVD – Talking Heads in concert – and it gave me the chance to introduce the children to the band. They enjoyed it at least in part because David Byrne has some similarities with a slightly wacky uncle they have. (The uncle was introduced to a friend by our youngest as follows: “This is my uncle. He's mad.”) Also their music is very catchy – a lot of the riffs sound like you've probably heard them before somewhere, and many of their tunes are built around only a couple of progressions. And they were really funky!

    The concert is 20+ years old, and I was interested how many of the things they did in it were used in the U2 concerts I have recordings of. “Rattle and Hum” (and, for that matter, the “Vertigo” recording from Chicago) features Bono carrying a spotlight, as happens in one of the songs. A whole raft of things used in “The Fly” appear here – words projected on the screens behind the band, David Byrne running round and round the stage.

    Do other bands try and do interesting things – make their concerts into shows? Should I get out more? Don't answer that.

    Other notable things about “Stop Making Sense”.

    • The lighting in “What a day that was” - the band were lit with spots at ground level just in front of them, which projected huge shadows of them on the screens behind them. Really cool effect.

    • “The girls want to be with the girls” - which sounded like a (low-energy) B52's song (the B52's say about themselves that they were “The first band to glorify pop culture with an almost Warholian sense of purpose” - I like the B52's, but I'm not sure that Talking Heads weren't doing that at the same time plus quite a lot else besides.)

    • Tina Weymouth looking surprised but pleased to be there through the whole concert!

    • The whole band looking a bit like undergraduates.

    • “This must be the place” - which is a beautiful song. It must be – Tom says so!

    Tuesday, December 19, 2006

    Logic, atheism and agnosticism

    It is much politer to be "agnostic" than to be "atheist" - it comes across as much more tolerant - dare one say more enlightened?

However, let's apply a little logic. If you are agnostic, then what you are saying in effect is as follows. There may be a god, or there may not. But if there is a god, then the evidence for his/her/its existence is so slight that you can't definitely conclude that the god is there - in other words, that this god has no discernible effect on the universe.

    What is the functional difference between this position and saying "there is no God"?

    Sunday, December 17, 2006

    Specification all the way down

Many people have engaged with Richard Dawkins on the issue of METHINKSITISLIKEAWEASEL, the computer program he wrote that seemingly models an evolutionary process. Probably the most ... well, meaningful, is found in “A Meaningful World”, by Benjamin Wiker and Jonathan Witt. This unpacks the significance of the line that Dawkins chose in its Shakespearean context, to demonstrate the almost complete lack of congruence between the way in which Shakespeare used language (and we respond to language, as demonstrated even by Dawkins' choice of phrase for this test) and a random walk towards a target phrase.
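For readers who haven't seen it, the program is usually described along these lines - this is my own reconstruction in Python, not Dawkins' code: start from a random string, repeatedly copy it with occasional mutations, and always keep the copy closest to a target phrase that is fixed in advance.

```python
import random
import string

# A reconstruction of the kind of program Dawkins describes - cumulative
# selection towards a target phrase that is fixed in advance.
TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = string.ascii_uppercase + " "

def score(candidate: str) -> int:
    """How many positions already match the target."""
    return sum(c == t for c, t in zip(candidate, TARGET))

def mutate(parent: str, rate: float = 0.05) -> str:
    """Copy the parent, changing each character with a small probability."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in parent)

parent = "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
generation = 0
while parent != TARGET:
    generation += 1
    offspring = [mutate(parent) for _ in range(100)]
    parent = max(offspring + [parent], key=score)   # keep the best copy each generation
print(f"Matched the target after {generation} generations")
```

The target phrase is specified before the process starts, which is precisely the point at issue in the critique above.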

    It has struck me that there are many levels of specification in language – this came up in an earlier discussion when we talked about the likelihood of a sequence of letters chosen at random conveying information. Perhaps the complexity of specification of language is the reason that Dembski didn't (as far as I know) use language as an example of specification in his books.
Let's set aside issues of the case of letters (which represents another level of specification) and punctuation (which represents yet another), and just consider the levels of specification involved in a series of letters. What levels of specification can we list?

    Level 0 – from all possible symbols chosen to write with, pick those that constitute the character set used in the English language.

    Level 1 – letters should be organised in such a way as to represent a text – aligned, and flowing in a regular direction.

    Level 2 – words must be correctly encoded:
    Disallows DEF YUOH GBY
    Allows DEN YOUR GUY

    Level 3 – words must be organised in grammatically reasonable sequences:
    Disallows CABIN THIS RED OPENS JUMPER
    Allows THIS RED CABIN OPENS A JUMPER

    Level 4 – collections of words must convey meaning:
    Disallows THIS RED CABIN OPENS A JUMPER
    Allows THIS RED KEY OPENS A DOOR

    Level 5 – collections of collections of words must convey meaning:
    This level encapsulates what we would consider to be logic – two “sentences” which contradict one another should not be present in the same text unless something happens between the two sentences to allow this to make sense. Thus, “God is infinitely mutable in his essence. God is infinitely immutable in his essence,” is only a reasonable text if you throw away the concept of logic. It sounds profound, but in actual fact, it is undermining the common logical currency that we share. It only makes sense if language doesn't make sense.

    Note that works of fiction will generally take place at this level – that is, a work of fiction ought to be well-specified at this level – it should not contain internal contradictions.
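By way of a toy illustration only (the word list and the random-string test below are my own invention, not anything drawn from Dembski), even the second of these levels filters out almost every random string - and each further level multiplies the improbability again:

```python
import random
import string

# Invented for illustration: a tiny 'dictionary' standing in for level 2.
WORDS = {"THIS", "RED", "KEY", "CABIN", "OPENS", "A", "DOOR", "JUMPER",
         "DEN", "YOUR", "GUY"}
ALPHABET = string.ascii_uppercase + " "

def passes_level_2(text: str) -> bool:
    """Level 2: every space-separated 'word' must be correctly encoded."""
    parts = text.split()
    return bool(parts) and all(word in WORDS for word in parts)

def random_text(length: int = 20) -> str:
    return "".join(random.choice(ALPHABET) for _ in range(length))

trials = 100_000
passes = sum(passes_level_2(random_text()) for _ in range(trials))
print(f"{passes} of {trials} random strings pass even level 2")
# Levels 3-5 (grammar, meaning, internal consistency) and the higher levels
# beyond them would each cut this number down again.
```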

    At a high level, I guess, is the idea that there ought to be a correspondence between the text and the reality denoted. So the truth of THIS RED KEY OPENS A DOOR now hangs on a relationship that has to be established between the phrase and a specific red key – one that actually exists. A text at this level ought to be consistent with the worldview of the author.

    At the highest level is the idea of TRUTH. I don't see anything unreasonable with the idea that a text can be congruent with reality at a level that exceeds our ability as observers to apprehend it. This is my understanding of the nature of the Bible. It would have to be written from a perspective that was higher level than any human's - again, this is my understanding of the nature of the Bible.

This ties in with the “Data/Information/Knowledge/Wisdom” idea expressed in the headline above. Many of these levels of specification also have analogues in the realm of proteins and DNA. A gene defines a function (ultimately) – that is, a gene provides an organism with the information it needs to solve a particular problem. There is a defined alphabet of four nucleotide bases, and they have to be grouped in threes to define an amino acid. But a protein can't contain any old sequence of amino acids – it can't even have the right amino acids in just any order. In addition to the amino acid sequence (the primary structure), there is also the secondary structure, which includes things like alpha helices, and the tertiary structure, which is the overall shape of the protein unit, and even the quaternary structure, which is how multiple protein units might function together. All of these are different levels of specification, like the different levels of specification in language that allow a series of symbols on a page to convey meaningful information to a reader.

With the exception of onomatopoeic words, there is generally little connection between the words that we use and the concepts that those words represent. The abstraction of concepts from the universe to symbols in languages – and the ability to move from this abstraction to create entirely new ideas – is one of the defining characteristics of humans. A dog may learn what “walk” means, and non-human primates and dolphins may learn the names of objects – but this shift from concrete to abstract, and then the realisation of the abstract in the concrete again, are uniquely human.

    In these multiple layers of specification, there are also echoes of things such as the Open Systems Interconnection seven layer model and the nature of mathematics, where philosophers are quietly amazed that there should be a connection from the abstract world of numbers to the real world of countable objects.

    Devotion to duty

    I fell asleep last night trying to write a post. I woke up again at twenty past midnight - Liz was already asleep. The last two lines I had written before nodding off were complete gibberish. I'd better go and investigate whether the rest of it makes any sense.

    I hope you know how much I suffer for my art!

    Wednesday, December 13, 2006

    Interesting environmental fact of the day

    Every kilogram of carbon-based fuel that is burnt releases around 3 kilograms of carbon dioxide. So when 2500 kg of fuel is burnt on a UK domestic flight, that is over 7 tonnes of carbon dioxide produced. A sensible-sized tank of car fuel (40 litres/9 gallons) when used releases about 100 kg of carbon dioxide into the atmosphere.
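For what it's worth, here is the arithmetic behind those figures as a quick sketch (the 3 kg of CO2 per kg of fuel and the petrol density of roughly 0.74 kg per litre are approximate values I'm assuming, not precise data):

```python
# Rough working: about 3 kg of CO2 per kg of hydrocarbon fuel burnt, and a
# petrol density of roughly 0.74 kg per litre (both approximate figures).
CO2_PER_KG_FUEL = 3.0
PETROL_KG_PER_LITRE = 0.74

flight_fuel_kg = 2500                          # UK domestic flight, as above
flight_co2_tonnes = flight_fuel_kg * CO2_PER_KG_FUEL / 1000
print(f"Domestic flight: about {flight_co2_tonnes:.1f} tonnes of CO2")

tank_litres = 40                               # a 'sensible-sized' car fuel tank
tank_co2_kg = tank_litres * PETROL_KG_PER_LITRE * CO2_PER_KG_FUEL
print(f"One tank of car fuel: about {tank_co2_kg:.0f} kg of CO2")
```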

    Monday, December 11, 2006

    The price of tax on aviation fuel

    A lot of fuss is made in some quarters about the fact that airlines don't pay duty on aviation fuel. This makes it cheaper to travel by air, we are told, than by surface transport.

    It is true that travel by air is generally cheaper than surface travel in the UK. It is also more comfortable and generally more reliable. From a capitalist perspective, it's not surprising, because air travel is in many ways a deregulated, commercial market. Where there is regulation (for example, in the carriers that can offer the prized routes from London to the US), prices are kept substantially higher than where regulation has been removed. Different providers have had to fight for market share through competition - which has meant investing in the service they provide, driving down costs, and marketing themselves effectively.

    In the meantime, the road network is blocked up by private cars (also the fruit of a capitalist system, I guess) and trucks carrying freight that ought to be travelling long distances by rail. The rail network swallows substantial public subsidies, can't apparently afford to invest in improving infrastructure, and generally doesn't offer a pleasant passenger transport experience.

    The UK government nonetheless does tax the aviation industry - through Air Passenger Duty, which will be doubled from early next year, and which has no counterpart for rail, bus or sea transport, as far as I know. At the short end of flights, as it happens, the amount of duty levied per seat is comparable to the amount that it would cost if fuel were taxed. Let's suppose that aviation fuel were taxed at 40p per litre. What effect would that have on ticket prices? The fuel burn in a typical narrow-body aircraft from London to Edinburgh is about 2400 kg, and given the density of fuel, this equates to around 3000 litres. At 40p per litre tax, this represents £1200 tax per sector. If the aircraft is carrying 120 passengers, then the additional cost would equate to £10 per passenger - a price similar in size to that of APD.

    However, I estimate that a 777-200ER flying from London to Los Angeles might burn around 120 tonnes of fuel. That's around 150000 litres. The tax on the flight would be about £60,000. Divided between 280 passengers, this is an intimidating £215 per passenger - each way! - obviously substantially more than Air Passenger Duty, and arguably a price that would seriously inhibit longhaul airline travel.
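The sums in those two examples can be sketched like this (the 40p per litre rate and the jet fuel density of about 0.8 kg per litre are the working assumptions used above, and the fuel burns are my rough estimates):

```python
def fuel_tax_per_passenger(fuel_burn_kg: float, passengers: int,
                           duty_per_litre: float = 0.40,   # 40p per litre, as assumed above
                           density_kg_per_litre: float = 0.8) -> float:
    """Hypothetical fuel duty per passenger for a single sector."""
    litres = fuel_burn_kg / density_kg_per_litre
    return litres * duty_per_litre / passengers

# London-Edinburgh: roughly 2400 kg of fuel, 120 passengers
print(f"Short haul: about £{fuel_tax_per_passenger(2400, 120):.0f} per passenger")

# London-Los Angeles (777-200ER): roughly 120 tonnes of fuel, 280 passengers
print(f"Long haul: about £{fuel_tax_per_passenger(120_000, 280):.0f} per passenger")
```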

    As Simon Calder points out in his column here, in talking to bmi CEO Nigel Turner, the increase in APD will have no impact on the environment. My suggestion for making aviation more environmentally friendly? Make APD payable per seat. There would then be an incentive for airlines to make sure that their aircraft were as full as possible. Heavier aircraft burn more fuel - but not in proportion to the additional number of passengers carried. And don't penalise them for cancelling a flight if there aren't many passengers on it.

    "Naturalism works!"

It's possible to start tightening up a screw when it's not on its thread properly. And to start with, it all seems to be okay. It gets nice and tight, and it seems to hold properly. But you're wrecking the thread, and weakening the join. If you were fixing a radiator on, or something, it wouldn't make the seal watertight, no matter how much you tightened it.

    How do you deal with this? The only way is to undo it and start again, and hope that you haven't already wrecked the thread.

    What happens with naturalism, I think, is that the believer tightens the screw up when they haven't aligned it with the thread properly. It only seems to work. I don't think they end up with a watertight join.

    Where naturalism breaks down is:
    a) in epistemology - whilst it seems to the believer to work, you can't really know that there is anything else in the universe other than your consciousness. Descartes' famous "Cogito ergo sum" allows the observer to infer their own existence. However, it immediately means that the observer is unable to draw safe conclusions about anything else in the universe - everything else in the universe may simply be a product of the imagination of the observer, if you start from the observer.
    b) in evidence (which is where ID comes in) - I think that the evidence, inferred through the scientific method, presented by ID proponents shows that naturalism is insufficient to the task required of it - which is to present a consistent metanarrative - a consistent explanation of everything.

    A worldview is bound to seem pretty solid, and philosophical naturalism does work, in large measure. But it doesn't work well enough.

    Friday, December 08, 2006

    High probability start points for natural selection?

    (This whole post may be based on a misapprehension of an argument from commenters. If so, please say so in the comments.)

    The argument has been presented in comments that, in looking for primitive, low specification versions of cytochrome c, I am missing the point. In fact, I am told, recent work has suggested that smaller polypeptide sequences with relatively high probability, with the capability of binding enzymes, could be the earliest selection units – effectively a bridge between an amino acid soup and well organised life.

This sounds really neat – we no longer need to worry about the random appearance of low-probability polypeptides in early organisms. Presumably, instead, we would find relatively short sequences of DNA or RNA in the organism, which code for polypeptides with a length of (say) 10 amino acids. These bind proteins which are present in the environment – again, the proteins formed at random, but not having to be genetically encoded in the organism at this stage – which are then available for the organism to use. Are people happy with that as a description of the process being proposed?

    This post is a first attempt to think through the implications of this. I don't think that one can simply say - “Ah! We don't need low probability events for evolution to start! Here is a process that only needs relatively high probability events.” The reason for this is that whilst there isn't a high specification required for the short binding polypeptide in the organism, we have quietly introduced low probability elsewhere into the process. We need to make sure that the new process is no less probable than the process that we were hoping to dispense with.

    Where do the proteins come from that these low-specification sequences bind to? Are they sufficiently high-probability as to be fairly abundant in the environment in which the proto-organism finds itself?

    (I am being vague about the “environment” and the “organism” at the moment. My concept at this stage is that the primitive organism has some means of keeping access to a supply of material from the “environment” - without the complex cell wall and gated transfer mechanisms we see today. The precise nature of the “cell wall” in a primitive organism is obviously one more thing to add to the “to work out” list.)

The abundance of amino acids in the environmental medium would be some value – let's say k moles per litre. It seems reasonable to assume that the formation of peptide bonds between amino acids is not strongly favoured, or the entire concoction would fairly abruptly become scrambled egg (or, to talk more scientifically, long polypeptide sequences would precipitate out of the mixture) – only if the mixture remains liquid can different forms continue to be experimented with by this process. On the other hand, it seems reasonable that if two amino acids are positioned appropriately, they would be fairly likely to form a bond.

If all the single amino acid molecules joined with other single amino acids, you would have a mixture of dipeptides with half the abundance (k/2) of the single amino acids. If a protein requires a sequence of n amino acids, then the abundance of polypeptides of that length is not more than (k/n) – probably substantially less - I'll try and think a bit more about this .... Furthermore, as the length of proteins increases, the number of different combinations increases. We find ourselves back looking at specifications again. What is the specification of the protein that has to be bound by our short polypeptide sequence? It is hardly likely to be less specified than the polypeptide that is to bind it – otherwise it would be more likely that the organism would simply be producing the protein, rather than the binding polypeptide. So both the binding polypeptide must be present, and a protein which has a specification no smaller. The probability of the first was estimated at around 10⁻¹¹. Let's assume the specification of the second is the same. Even if the specification of the binding polypeptide is as suggested by Art, the probability of it being able to bind with a suitable protein is less than 10⁻²². Not beyond the probability boundary – but a lot less useful than the specification of the binding polypeptide would suggest. And it seems unlikely that being able to bind with a protein that is no more specified than the binding polypeptide would be of any real advantage to an organism – surely it's more evolutionarily obvious to manufacture the protein itself. So I would suggest that the bound protein would in actual fact have to be significantly more specified than the binding polypeptide – which means that the probability of it being present gets even smaller.
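Putting rough numbers on that - these are the illustrative figures from the discussion above, not measured values - the two probabilities simply multiply if both components have to turn up independently:

```python
# Illustrative figures only, following the working assumptions in the post.
p_binding_peptide = 1e-11    # probability of a suitable short binding polypeptide
p_partner_protein = 1e-11    # assume the protein it binds is no less specified

# If both must turn up independently, the probabilities multiply:
p_joint = p_binding_peptide * p_partner_protein
print(f"Joint probability: roughly {p_joint:.0e}")        # ~1e-22

# For comparison, the naive chance of one exact 10-residue sequence drawn
# from the 20 standard amino acids:
p_exact_10mer = (1 / 20) ** 10
print(f"One exact 10-mer: roughly {p_exact_10mer:.0e}")   # ~1e-13
```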

    You then have the issue that having polypeptides that can bind to a protein is of little value to an organism. An enzyme – cytochrome c, for example – is actually a complex little machine in its own right. You could write a list of requirements that are necessary for a generic (or genetic!) machine to work – input, output, power, action and control. It is the co-ordination of thousands of biochemical machines within a cell, or tens of thousands within a multicellular organism, which is the amazing feat that life is. Conceptually, what this proposal suggests is that we can conceive of the “input” component arising separately at random. The “output” - an “un-binding” mechanism – the “power” - a system for harnessing energy from somewhere else – the “action” - a means of reconfiguring the “input” to make the “output” - and the “control” - a means whereby the organism can cause this to occur according to requirements – are not suggested.

    It would be too much to say that an enzyme was “irreducibly complex” because it had all of these requirements – I have discussed elsewhere the fact that small enzymes are not apparently beyond the probability boundaries, which I think is an unspoken part of the “irreducible complexity” concept. However, there are certainly levels of complexity present in the functionality of proteins used by organisms that this proposal has not yet addressed. It is not clear why the “input” part of a machine should convey a selective advantage to an organism. Furthermore, since the organism has no means of being able to make the protein (which is no less specified than the binding polypeptide, remember, and probably substantially more) that it requires, it has no guarantee that the binding polypeptide will continue to be any use.

    It should go without saying that you can't assume that other bits (of enzymes or whatever) are being made elsewhere for use at the same time. As soon as you do this, you are simply slipping back in the improbabilities that you took out to start with, whilst nobody is looking.

    Incidentally, it is also worth reminding ourselves that we are still making assumptions here about there being a suitable environment in which this process can occur, and the presence of fairly-well specified equipment for synthesising proteins from RNA or DNA. As I said in earlier posts, for the sake of trying to get a handle on this area that we are looking at, these issues can be set aside, as long as we come back to them at some stage.

    Wednesday, December 06, 2006

    Miracle Drug and shared stories

    I want to trip inside your head
    And spend the day there.
    To hear the things you haven't said
    And see what you might see.
    I want to hear you when you call,
    Do you feel anything at all?
    I want to see your thoughts take shape
    And walk right out.

    Miracle Drug, U2
    I love this song for at least two reasons. The second is that, when U2 played it at Twickenham in 2005, they dedicated it to nurses, doctors and scientists. Not many people know that there are scientists – neither nurses nor doctors – who are as involved in medical care as they are. Liz is a medical physicist.

    I run a chess club at my younger children's primary school. D came today. She's the youngest of four in her family – her next sibling C has, until this term, been one of the most regular at the club. D is only seven; C is ten, but far more sure of herself – one might even say cheeky. A was finished with the school before I had children in the junior section, but B was in the same year as my daughter and went to high school this year. They have Gaelic names, and on St. Patrick's Day, they wear leprechaun hats. I know their parents from the school board, and I know how much they want for their children – not to achieve the highest grades, but to fulfil themselves.

    I wonder, as I spend time with children at chess club, about their lives – much as I guess a teacher would. I remember when I was a first officer, and people were still allowed to visit the flight deck, a girl of D's age travelling unaccompanied was brought by the cabin crew to visit us. She was really sweet. We chatted for a while – she explained that she was travelling from mum (who lived in one country) to stay with dad (who lived in another country) for a week. As she left, I said to the captain: “She'll be breaking people's hearts in a few years time.”

    “Sounds like someone's already broken hers,” he replied. I had to look out of the window for the next five minutes.

    I know where D is likely to go to high school. I know what it is like there, to an extent – it is one of the schools we visited when we were looking for our daughter, and the experience of secondary education in England doesn't change terribly quickly. I don't know whether she is more likely to do arty subjects, or sciency subjects. I don't know whether she will do well – or whether boys or girls will end up too much of a distraction by then. I don't know how long she will stay in education. I don't know what sort of a job she will end up in. I don't know whether she will end up living with somebody – or married to somebody. I don't know whether she will have children – either with or without a father present.

    I don't know whether she will find her life haunted by mental or physical illness – either in herself or others. I don't know whether she will end up rich or poor. I don't know whether she will take delight in life, or if it will become a grinding, painful routine. I don't know whether she will even live to the end of her education. I've known of several children who haven't, and Liz tells me regularly of children that she treats who are around the age of our children.

    Just for one hour a week, perhaps, for a few weeks, I can watch her – along with the other A's, B's, C's and D's who come along. I can try and model to her how a grown-up can encourage her and pay attention to her – as far as is possible, when all the other children are also looking for this from me. When she thinks back to her childhood, I will be no more than another of the grown-ups who were part of the scenery – no more than a line in the story of her life.

    What do people get married for? I think it has something to do with sharing a story. Person E matters so much to person F that they want to share the story of their life - “till death us do part” - right to the end of the story. They want to be there – or if not, they want to hear what happened later on when they are in bed! There's a part of me that is saddened by the fact that I won't know what happens to D in a few years' time – and indeed, that I barely know anything about D now. But at least when I got married, I ensured that for this one special person, I would know what happened to them. After we got married, I inherited a lot of my wife's friends – and it still gives me twinges of regret that they had shared experiences before I was around – they knew parts of my wife's story that I didn't.

    I suppose it's part of what “going out together” is all about as well – it's two people sharing bits of the stories of their lives – each person finding out whether they really want to share the rest of their life. And having children, as well. More people whose stories we share – indeed, all else being equal, whose stories will go on when ours cease, and in whose stories we will be more than just a line.

    Where does sex fit into this? The Christian answer is that sex is for marriage. Only where people have committed themselves to one another – when they have said, “I want to be there at the end – I want to share your story” - is the sexual relationship blessed by God. To take the Christian side out of it, I still think that if somebody embarks on a sexual relationship without wanting to commit themselves to the other person – without wanting to share their story – it is fundamentally a selfish act. “I want what I can get out of this – but I'm not sufficiently interested in you to say that I will be here until the end.”

    Given this structure – that marriage is about sharing a story – then separating from a marriage partner is saying: “I don't want to share your story any more. I don't want to hear it – it no longer interests me.”

    Sorry, this is a very long, rambly post. The reason that I like the U2 song, though, is because of that thing about being inside somebody's head. The best part of 7 billion stories in the world – and I only get to know one or two. That fills me with regret – the same ache that Bono sings about.

    Saturday, December 02, 2006

    Dawkins says ...

    ... apparently, in The God Delusion, "If the design is so improbable, how much more improbable is the designer?"

    This sounds neat, but – going only on Allygally's comment in the preceding post for the quote, since I haven't read the book – it is a poor argument.

    I assume that what Dawkins is suggesting is that the assessment of the probability of something arising by chance is small, and if it is small, then the probability of there being a designer who is capable of producing it is even smaller.

    If this truly reflects his opinion, it represents a change of mind from him. He used to accept that you couldn't go on relying on "lucky breaks" as an explanatory system - and he suggested (if I remember right, in The Blind Watchmaker) that if the probability of life evolving were so small as to be unlikely more than once in the galaxy, then inferring "chance" rather than "design" would be unreasonable. Of course, this was in the good old days when Sagan was confidently telling us that there were likely to be millions of intelligent lifeforms in the galaxy - prior to Brownlee, Ward (Rare Earth), Gonzalez and Richards (Privileged Planet). This original position is, of course, no longer tenable, so perhaps the new position represents the "new" Dawkins.

    The argument can be easily dismantled - first by analogy, and second by logic. It is very improbable that an airliner might appear by chance. Dawkins (via Allygally) argues that if an airliner is an improbable design, it is much more improbable that there is a designer. And yet airliners demonstrably do have designers - the improbability of the artefact tells us nothing against the existence of its designer.

    More logically, a corollary of what Dawkins is saying appears to be that the more designed something looks, the less likely it is to have a designer - an absurd position to take. It misses the point that the "design inference" style argument has an implied "given". "The probability of this feature arising given no designer is small" - from a design inference point of view, using the probability boundary, what this is saying is that the probability is such that you would be lucky to see this in the universe. Therefore (I think the force of the argument is) it is unreasonable to assume it is the product of chance, and more reasonable to assume that it is the product of design, even though the probability that there is a designer is indeterminate.

    What is the probability that there is a designer? One way of looking at this is that since at least half the world's population believes that there is one (or more), it can hardly be less than 50:50.

    I find it hard to believe that this really represents Dawkins' position. He might (in Christian terms) be a fool, but he's not stupid. If this is not what Dawkins is getting at, then Allygally (or Dawkins) can feel free to come back and correct me.

    Tuesday, November 28, 2006

    Evolution - moving the discussion on

    Why bother with all that stuff about the specification or probability of cytochrome c? Well, the aim is to move on the debate between evolution, creationism and intelligent design. Certain classes of argument - on both sides of the debate - can be clearly labelled as flawed, on the basis of this work. This isn't to attract glory to me, incidentally - I have little doubt that this work has already been done better elsewhere, and the fact that it is written down here is as much as anything to give me somewhere to point people to when they present these customary, sloppy arguments.

    A frequent creationist allegation is that "evolution can't happen because a protein with a specific sequence of amino acids is incredibly improbable." Here's an example, from "Answers in Genesis."
    However, ignoring all such problems, and many others that could be detailed, what is the probability of getting just 100 amino acids lined up in a functional manner? Since there are 20 different amino acids involved, it is (1/20)^100, which is 10^-130. To try to get this in perspective, there are about 10^80 fundamental particles (electrons, etc) in the universe. If every one of those particles were an experiment at getting the right sequence with all the correct amino acids present, every microsecond of 15 billion years, that amounts to 4.7 x 10^103 experiments. We are still 10^27 experiments short of getting an even chance of it happening. In other words, this is IMPOSSIBLE!
    Cytochrome c is a 100 amino acid protein, for all intents and purposes. But I have demonstrated that it is not anywhere near as improbable as 10^-130 - in fact, more than forty orders of magnitude more probable - and in a less specified format, as it would have been when it first appeared, it would have been less improbable again.

    There is an argument from improbability - but it doesn't start from a single specific sequence of amino acids.

    Similarly, the vague darwinist assertion that "there are gazillions of bacteria, each of which has the opportunity to make random sequences, so really the suggestion that there is any problem with improbability is absurd" is just as flawed. TalkOrigins is careful to avoid this - there are other points at which their argument could be challenged, but this isn't one. I'm not sure that this is so much the case with people who casually engage in this debate, and stop counting when they get to 10^20 bacteria.

    What I have suggested - and again, I have little doubt that the same calculations have been carried out elsewhere - is that there is an effective probability boundary of 10^-60 for proteins generated at around the time of the origin of life, in the conventional evolutionary model. If a protein has to be specified such that it has a lower probability of arising than this, then there aren't sufficient probabilistic resources. A similar analysis can be applied to the appearance of proteins and genes at other stages in evolutionary history.

    From here, the analysis opens out in several directions. For a start, in determining a minimum for the improbability of proteins when they initially appear, and determining a current improbability, it is possible to determine how much "work" evolution has done, in refining proteins through evolutionary history. Are the mechanisms available to evolution able to do this work?

    Also, the figures that I put down can be more accurately determined - for example, the time at which cytochrome c first appeared can be pinned down. More accurate estimates of the current improbability of cytochrome c can be determined. (I notice that in a paper cited on the TalkOrigins page given above, Yockey says that the improbability of cytochrome c is nearer 10^-68 - which makes me feel quite smug about my guess of 10^-70!) More thought-out figures for the amount of carbon available for forming "proto-proteins" can be determined. Big issues related to how the genetic code and translation systems might appear remain unresolved.

    Starting to attach numbers to the specifications of proteins has an impact on the debate about irreducible complexity. Implicit in this concept is that the different components are themselves improbable - and thus the requirement for them to appear simultaneously requires the multiplication of small improbabilities. The probabilities ought to be hardened up. If we can get a handle on the value of these probabilities, we can again move away from the "yes it is, no it isn't" style of debate that is what the current state of the art boils down to.

    Both sides in the debate suggest that this sort of analysis is something that the other side ought to be doing. In truth, without attaching numbers to the propositions that are being kicked around, both sides are debating not science, but presuppositions.

    Beautiful Day

    Man, I love this song! Don't forget to read the comments, to see how U2 themselves make reference to Jeremiah 33.

    H/T U2 Sermons.

    Monday, November 27, 2006

    Christian philosophy

    The Constructive Curmudgeon, Douglas Groothuis, writes a eulogy for Dr. Robert T. Herbert, who died recently.
    Professor Herbert and I disagreed on many things. In a Philosophy of Religion seminar he was leading, I voiced concerns about a paper in which he argued that believers come down with faith as one comes down with a cold. That is, the faith is neither rational nor irrational. (The paper was subsequently published in Faith and Philosophy.) He asked me to write a response to his paper. I agreed with some hesitation. (Whether I really had a choice, I do not know.) When I received the paper back, it was filled with red ink comments challenging nearly every one of my criticisms. At the bottom of the last page was the grade: A+.
    I have been involved in an email discussion recently relating, in effect, to epistemology - that is, the foundation of knowledge, both for Christians and non-Christians. This pointed me in the direction of Cornelius van Til. If you follow this link, you will find his "Credo", at the Center for Reformed Theology and Apologetics. I was surprised at how closely it fit with my own (less organised!) philosophical thoughts - and I was both gratified and a little disturbed to find this perspective described as "calvinist" - a label I had always shied away from for various reasons.

    Sunday, November 26, 2006

    The B of the Bang

    ... is the name of a sculpture, conceived of by Thomas Heatherwick, which was built in Manchester, UK, to commemorate the 2002 Commonwealth Games. It derives its name from a quotation from Linford Christie, who said that he had to start not just at the bang of the starting pistol, but at the "B of the Bang".

    It's a stunning sculpture - it stands taller than any other sculpture in Britain. I want to post some comments on Heatherwick as a designer, from an article in yesterday's Daily Telegraph.
    Heatherwick's creations are certainly eccentric, but they never stray from his central belief that good design should be "readable rather than impenetrable": you need no background knowledge to be touched by their immediate, exhilarating brilliance. After studying design in an era when the world of architecture seemed impossibly rarefied - "There were all sorts of discussions about sacred geometries and things, but to my mind nothing interesting was being built" - he is determined to make things that will appeal to a universal audience.

    "Is something dumbed-down if it appeals to an eight-year-old, and therefore of no interest to someone from an academic perspective?" he asks. "Or is there potential for something to have meaning across both levels? It is my aspiration to prove that there is."

    Telegraph Review, 25/11/2006

    Thursday, November 23, 2006

    For the record ...

    Doubtless not as liberal as Bec wishes, but definitely more liberal than most US citizens. (But then, who isn't?!) Thanks Miss Mellifluous for reminding me of this, and showing me I could blog it!
    You are a Social Moderate (50% permissive) and an Economic Liberal (31% permissive). You are best described as a: Centrist.

    Link: The Politics Test on Ok Cupid
    Also: The OkCupid Dating Persona Test

    In Our Time ...

    ... in case you missed it, or are unaware of it, featured Richard Dawkins (amongst others) talking to the presenter Melvyn Bragg about altruism.

    I didn't hear it all - if you are interested, the BBC has a "Listen Again" feature that doesn't in fact require you to have heard it once already. Go to this page and follow the link.

    There was the odd bit even in what I heard. In talking about the different effects of culture, Dawkins talked about the "darwinian foundation" - but then added that the zeitgeist might be different from decade to decade. One was tempted to ask - in that case, what exactly is the predictive significance of the darwinian foundation?

    And David Stove's commentary on Darwinism was not mentioned, as far as I know.

    Evolutionary history of cytochrome c

    This post is a little provocative, and is based in part on my earlier posts about the specification of cytochrome c. If this analytical process is fair, then it should make possible the sort of explanation outlined below. Note that, in my opinion, this is substantially more detailed than the traditional evolutionary explanations offered. However, I would be interested in other people's thoughts on the numbers involved, or pointers to other places where similar analysis has been carried out.

    Cytochrome c is genetically coded. Other forms of electron transport are possible, but this one could not have predated the genetic code and transcription mechanisms. This means that it is likely that cytochrome c would not have substantially predated the appearance of prokaryotic life. On the other hand, the gene that became cytochrome c was established in prokaryotic life by the time it became dominant – say 3,500 Myr ago. The reason for this conclusion is that we don't see any forms of life that don't utilize cytochrome c, with the exception of parasitic forms which would not have predated this point.

    We can determine how many possible events there were that might have led to the first appearance of cytochrome c, and thus the “probabilistic resources” available for this evolutionary step. The total time available is (let's say) of the order of 1000 Myr. Assuming that there are biochemical processes that can occur 100 times per second (I suspect that there are very few biochemical processes that are energetically neutral to such an extent that they could oscillate at this frequency – the mechanisms that allow cells to tick “more rapidly” are dependent upon the sort of systems that we are seeking to explain), this represents of the order of 10^18 biochemical “ticks”. There are of the order of 10^22 moles of organic carbon available – that is, of the order of 10^46 atoms in total, of which let's say 1% is in an appropriate environment to be a resource for finding proto-cytochrome c, and proto-cytochrome c requires of the order of 100 carbon atoms.

    Thus, the total number of reconfigurations of hundreds of carbon atoms available to try and find proto-cytochrome c is about 10^60. If proto-cytochrome c was substantially more specified (less probable) than this, then there would not have been sufficient probabilistic resources available to consider its appearance to be a likely event in this timescale. Note also that this says nothing about the requirement of existing genetic systems.
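
    For anyone who wants to check the arithmetic, here is a minimal sketch of the calculation in Python. Every input is one of the assumptions stated above (1000 Myr, 100 “ticks” per second, 10^22 moles of organic carbon, 1% usable, roughly 100 carbon atoms per proto-protein) rather than a measured value.

        # Back-of-envelope estimate of the "probabilistic resources" described above.
        # Every input is an assumption taken from the post, not a measured value.
        import math

        SECONDS_PER_YEAR = 3.15e7
        AVOGADRO = 6.02e23

        time_available_years = 1e9        # ~1000 Myr from origin of life to prokaryote dominance
        ticks_per_second = 100            # assumed rate of relevant biochemical events
        moles_organic_carbon = 1e22       # assumed global inventory of organic carbon
        fraction_usable = 0.01            # assumed fraction in an appropriate environment
        atoms_per_proto_protein = 100     # ~100 carbon atoms per proto-cytochrome c

        ticks = time_available_years * SECONDS_PER_YEAR * ticks_per_second    # ~3 x 10^18
        usable_atoms = moles_organic_carbon * AVOGADRO * fraction_usable      # ~6 x 10^43
        candidate_molecules = usable_atoms / atoms_per_proto_protein          # ~6 x 10^41

        reconfigurations = ticks * candidate_molecules
        print(f"Probabilistic resources: about 10^{math.log10(reconfigurations):.0f}")
        # prints: Probabilistic resources: about 10^60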

    By the time of the appearance of multicellular, eukaryotic life – say 600 Myr ago: 100 Myr in either direction has little effect on the argument – cytochrome c was already well specified. The reason for this conclusion is that all modern life has a similar high specification – there aren't any substantially different forms in different phyla. It would be possible that the convergence to a generally single form of cytochrome c took place in different phyla after the Cambrian era – but with prokaryotic organisms having faster generation times, asexual reproduction and being dominated for selection by fewer factors, it seems more likely that a highly specified form would have been established before this point in time.

    Based on the list of 113 versions of cytochrome c referenced earlier, the probability of a random sequence of DNA coding for one of these versions of cytochrome c is about 10^-85. It was observed that going from 103 to 113 different versions increased this probability by two orders of magnitude (from about 10^-87). It is pretty arbitrary, but for the sake of argument, let's say that the probability of a random sequence of DNA coding for any version of cytochrome c (which today would be well-specified) is of the order of 10^-70. This is then a measure of the specification of cytochrome c today, following all the years of natural selection on the earlier forms.

    If these figures (or ones like them) are accepted, then we can see the scale of the work done by natural selection from the time random mutation produced proto-cytochrome c to today. Cytochrome c has evolved from requiring a DNA sequence no less probable than 1 in 10^60 to having a DNA sequence that we are saying has a probability of 1 in 10^70, in around 3,000 Myr.

    Note that arguing for a lower specification for proto-cytochrome c increases the demand placed on natural selection to arrive at its modern specification. Arguing for too high a specification for proto-cytochrome c runs the risk of exhausting reasonable probabilistic resources for its initial appearance.

    Tuesday, November 21, 2006

    How Music Works with Howard Goodall

    This was an excellent programme. Now where can I get a DVD of it? Channel 4 say it's not going to be made available for sale, and there will only be one repeat on More4 sometime next month. Did anybody video it?

    Saturday, November 18, 2006

    Variability of cytochrome c across species

    (Sorry, I don't know why this image came out like that - click it to get the proper version)

    Using the spreadsheet referred to below, I did this graph of the moving average of the variability of cytochrome c across the 113 species. The x axis represents distance along the amino acid chain. It starts below zero, as some species have chains of amino acids that come before the start of the reference sequence. The y axis represents the number of codons that could be used, across all the species, to encode each position - this drops as low as 2 in some locations, where only one amino acid (one encoded by just two codons) is present in the cytochrome c of every species. The moving average rises to almost 30, signifying that around half of the 20 occurring amino acids might be used at a particular location in cytochrome c.
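
    For what it's worth, here is a rough sketch of how a graph like this could be produced in Python, assuming the per-position codon counts have already been extracted from the spreadsheet. The placeholder data and the five-position window are my assumptions, not values taken from the spreadsheet.

        import numpy as np
        import matplotlib.pyplot as plt

        # codon_counts: number of codons usable at each aligned position, taken from
        # the spreadsheet; random placeholder values stand in for the real ones here.
        codon_counts = np.random.randint(2, 45, size=118)

        window = 5                                # assumed moving-average window
        smoothed = np.convolve(codon_counts, np.ones(window) / window, mode="valid")

        positions = np.arange(len(smoothed)) - 9  # a few positions precede the reference start
        plt.plot(positions, smoothed)
        plt.xlabel("Position along amino acid chain (horse heart reference)")
        plt.ylabel("Codons usable at position (moving average)")
        plt.show()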

    Friday, November 17, 2006

    The specification of proteins - part 3

    The table referred to in my earlier posts has a list of cytochrome c sequences for 113 different species, all aligned as far as possible using the horse heart cytochrome c as a reference sequence. So, with the sequencing and alignment done, the same process can be carried out for all the positions of all the cytochrome c sequences in the table.

    Incidentally, I would like to point out that I made one change, in sequence 21 (Ceratotherium simum) – at position 48, there was a series of amino acids that seemed to be incorrectly aligned with the reference sequence. Inserting two empty locations before the sequence DANKNKG, and removing two of the empty locations afterwards gives better alignment.

    So the list of species provided was rematched with the amino acid sequences in the table, and then the amino acid sequences were “exploded” into individual columns, giving a table with 113 rows and 118 columns, each containing either a letter or a hyphen. I then counted the number of each different amino acid (letter) in each column. Each amino acid can, as was discussed in the previous post, be encoded by one or more codons. So, if I know which particular amino acids can be present at a location, and I know how many codons encode these amino acids, then I can work out how many of the 64 available codons could work at each position. Thence, I can determine an estimate of the probability that, given a random sequence of DNA, it will code for a functional cytochrome c protein.

    Discarding the first 9 places in the table – in other words, accepting position 1 on the horse heart cytochrome c as the first significant position – and the last 2 places, which generally aren't part of the amino acid sequences, I can multiply up the total number of valid codons for each position, to give the total number of possible cytochrome c sequences, assuming that any amino acid that works at a location can be put there to make a viable cytochrome c protein. This comes to about 1.5x10^112. There are 106 places under consideration: 64^106 is 2.8x10^191.

    The proportion of valid cytochrome c sequences in this domain space is the ratio of these – that is, 1 in 1.9x10^79.
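
    To make the counting procedure concrete, here is a sketch of it in Python. The codon degeneracies are those of the standard genetic code; the loader for the 113 aligned sequences is hypothetical, standing in for the spreadsheet work described above.

        # Number of codons encoding each amino acid in the standard genetic code.
        CODONS_PER_AA = {
            'A': 4, 'R': 6, 'N': 2, 'D': 2, 'C': 2, 'Q': 2, 'E': 2, 'G': 4, 'H': 2,
            'I': 3, 'L': 6, 'K': 2, 'M': 1, 'F': 2, 'P': 4, 'S': 6, 'T': 4, 'W': 1,
            'V': 4, 'Y': 2,
        }

        def fraction_of_sequence_space(aligned):
            """aligned: equal-length strings, one per species ('-' for gaps), restricted
            to the positions under consideration. Any amino acid seen at a position in
            any species is treated as acceptable there."""
            valid = 1
            for i in range(len(aligned[0])):
                residues = {seq[i] for seq in aligned if seq[i] in CODONS_PER_AA}
                codons_here = sum(CODONS_PER_AA[aa] for aa in residues)
                if codons_here:                  # skip columns with no recognised residue
                    valid *= codons_here
            total = 64 ** len(aligned[0])
            return valid, total, total / valid   # the last value is the "1 in N" figure

        # Usage, with a hypothetical loader for the table of 113 sequences:
        # aligned = load_cytochrome_c_table()
        # valid, total, one_in = fraction_of_sequence_space(aligned)
        # For the table described above: valid ~ 1.5x10^112, total ~ 2.8x10^191, one_in ~ 1.9x10^79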

    Let's unpack the significance of this a little more. I have not assumed that only one cytochrome c sequence is valid – a challenge directed at many ID proponents. Neither have I assumed that only the 113 given cytochrome c sequences are valid. I have assumed that, if an amino acid appears at a given position in any of these cytochrome c sequences, then that is a “possible answer”. This analysis allows me to construct a very large number of possible cytochrome c sequences, only 113 of which happen to constitute the table, and all of which I am assuming would be functional. Despite this, the proportion of valid cytochrome c sequences in the domain space of 106 amino acid polypeptides is of the order of 1 in 10^79. For reference, the total number of atoms in the earth is around 10^50.

    The range of species that is covered by this survey is very large – everything from humans to rice to Saccharomyces. It is likely that additional species would add to the number of conceivable cytochrome c combinations – by showing that different amino acids would work at positions not covered already. However, given how widely the net has been cast with this approach, and the range of species considered, my hunch is that the increase would not be more than a few orders of magnitude.

    However, this can be investigated. This process could be carried out omitting several of the sequences, and seeing what effect this has on the ratio. Or, if other candidate sequences of cytochrome c are available, they could be added, again determining the effect that this has.

    To consider this from a naturalistic perspective, we can assume that given the key role that cytochrome c has within cells, and given its ubiquity, selection pressures on it would be strong, and in billions of years of evolutionary history, this would have allowed it to arrive at a highly specified form. It is possible to argue that this being so, all versions of cytochrome c that we observe in the world today are far more specified than would have been necessary in the most primitive organisms. In fact, this analysis is also useful from a naturalistic perspective. In determining how specified (improbable) cytochrome c is today, and in estimating the probabilistic resources available in early stages of evolutionary history, we can calculate how effective evolutionary processes are in improving the specification of proteins. This sort of analysis would be very useful if darwinists wish to move away from "hand-waving" explanations towards a solid empirical foundation for their beliefs.

    Thursday, November 16, 2006

    That's my king!

    I'd never heard this before. If you are a Christian, this will help to remind you of why you are. If you aren't a Christian, it is a different experience from my stumbling and inaccurate apologetics.

    H/T Andrew and Cora.

    Pictures from South America

    My wife's cousin James has been living in the west of South America for quite a few years now. Initially, he was doing tour guiding, but more recently he has focussed on photography. He has his own website, through which it is possible to buy copies of the stunning images.

    Wednesday, November 15, 2006

    The specification of proteins - part 2

    For part 1, see below.

    I said below that, in considering how specified Cytochrome C is, “we need to determine what the probability is of a random sequence of DNA coding for Cytochrome C, rather than what the probability is of a random polypeptide being Cytochrome C”. So let's start with Cytochrome C for a horse – the first line in the table referenced before. The sequence of amino acids starts:

    GDVEKGKKIFVQKCA ...

    and ends:

    ... KKTEREDLIAYLKKATNE[Stop]

    Each amino acid can be coded by one or more DNA codons (Incidentally, I will show my ignorance by pointing out that I am working on the basis that Cytochrome C is encoded in nuclear genes, rather than mitochondrial genes. I understand this to be the case – see here. However, even if Cyt C were encoded in the mitochondria, the principles discussed here could be rewritten to apply to this). Given that there are 64 codons and 21 different items encoded (20 amino acids and the stop sequence), there is an average of 3 codons per item encoded. But, with a table of the genetic code, we can be more precise than this. Four codons can encode the first G (glycine) in the polypeptide. Two can then encode the next D (aspartic acid) and so on, through to the gene termination, which can be one of three codons.

    The entire gene for horse Cytochrome C, then, is encoded by a sequence of 104 codons. There are 64^104 possible sequences of codons – that is, 7x10^187 – but by multiplying together the numbers of possible codons that would encode the given amino acids in each position, we discover that there are 2x10^45 permutations that would encode exactly this sequence of amino acids. So the probability of any given sequence of 104 codons encoding precisely the sequence for horse Cytochrome C is the second number divided by the first – that is, about 3.5 x 10^-143.

    It is worth noticing that this is some eight or nine orders of magnitude less probable than the figure of roughly 10^-134 for a random sequence of 103 amino acids turning out to be horse Cytochrome C, and it would be interesting to know whether this was generally the case (that is, whether the amino acids used in proteins are more frequently those encoded by less than the average number of codons).

    However, this isn't the whole story (“But that is not all, no that is not all”). We have 113 different versions of Cytochrome C, and we now need to consider what effect these other versions have on what we can say about the specification of this protein. We can continue with the amino acid sequence for a zebra – sequence number 25 in the table. This differs in just one position from that of the horse – at position 47, it has serine (S) rather than threonine (T). Here is where assumptions start to become important. If we assume that this is a neutral substitution, and is simply evidence of evolutionary divergence, then we can conclude that we could now have any one of ten codons at position 47 – one of the six for serine, or one of the four for threonine. This single change multiplies the probability of a random sequence of codons encoding Cytochrome C by two and a half – albeit only to a still unhopeful-looking probability of the order of 10^-142 – but we can start to see the direction this will move in.
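
    A minimal sketch of this codon-based calculation in Python, using the standard genetic code, is below. Only the fragment of the horse sequence quoted above is included, as a stand-in for the full sequence; allowing an alternative amino acid at a position (like the zebra's serine at position 47) simply increases the codon count for that position.

        from math import prod

        # Number of codons encoding each amino acid in the standard genetic code.
        CODONS_PER_AA = {
            'A': 4, 'R': 6, 'N': 2, 'D': 2, 'C': 2, 'Q': 2, 'E': 2, 'G': 4, 'H': 2,
            'I': 3, 'L': 6, 'K': 2, 'M': 1, 'F': 2, 'P': 4, 'S': 6, 'T': 4, 'W': 1,
            'V': 4, 'Y': 2,
        }
        STOP_CODONS = 3

        def prob_random_codons_encode(protein):
            """Probability that a random codon string of the right length encodes
            exactly this amino acid sequence, followed by a stop codon."""
            encodings = prod(CODONS_PER_AA[aa] for aa in protein) * STOP_CODONS
            total = 64 ** (len(protein) + 1)
            return encodings / total

        # Only the fragment quoted above; the full sequence gives the figure of
        # about 3.5 x 10^-143 discussed in the post.
        horse_fragment = "GDVEKGKKIFVQKCA"
        print(prob_random_codons_encode(horse_fragment))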

    To be continued ...

    Monday, November 13, 2006

    The specification of proteins - part 1

    Part of the problem in the scientific debate between ID proponents and opponents is that much of it is conducted with relation to intractable universal problems. The debate then tends towards a Pythonesque “Yes it does”, “No it doesn't” series of contradictions. I am keen to try and use some of the ideas to look at more tractable specific problems, and use principles learned working on these to extend the debate to areas where more definite conclusions can be drawn.

    Proteins are specified. That is to say, as we find them today, they aren't simply random sequences of amino acids. The information that they incorporate allows them to express functionality that is of use to a cell or to an organism. Take Cytochrome C, for example. It has a functional specification – a substantial one. Its function can be described in terms of what it achieves for an organism – its Wikipedia entry gives detail about this. It can also be functionally described in terms of the low-level biochemical reactions in which it takes part.

    Since Cytochrome C is a protein, it also consists of a specific sequence of amino acids. Actually, this isn't strictly true: the sequence of amino acids that makes up Cytochrome C is different in different organisms. A table listing different amino acid sequences for Cytochrome C in 113 different species can be found from here.

    Just how much specified information does Cytochrome C contain in the sequence of its amino acids? If we know how much information it contains, then we can calculate how likely it is that Cytochrome C would appear by chance – the probability that a random polypeptide would happen to be Cytochrome C (or close enough to be useful for natural selection). And if we can determine this, we can say how reasonable is the chance hypothesis for explaining the initial appearance of Cytochrome C.

    If there were only one sequence of 100 amino acids that was universally used to code for Cytochrome C, the probability of it (20 possible amino acids, 100 places in the chain) appearing as a random polypeptide sequence could be simplistically expressed as 1 in 20^100 – that is about 10^-130. This probability is low – but it is above Dembski's UPB, and much more significantly for opponents of ID, it substantially overestimates the specification of Cytochrome C even within our understanding.
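
    As a quick check of that figure (my arithmetic, not a quotation from anywhere), a couple of lines of Python:

        from math import log10

        # One fixed sequence of 100 amino acids, each drawn uniformly from 20 possibilities.
        p = (1 / 20) ** 100
        print(f"1 in 20^100 is about 10^{log10(p):.0f}")   # about 10^-130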

    But to make sure that we get our foundations right, it's necessary to see that we have already gone wrong at this point, since we are not properly considering the reference frame. The reference frame is actually not simply the sequence of amino acids that make up Cytochrome C, but the genetic coding of these amino acids. The organism doesn't record Cytochrome C as a polypeptide, but as a DNA sequence. So we need to determine what the probability is of a random sequence of DNA coding for Cytochrome C, rather than what the probability is of a random polypeptide being Cytochrome C.

    It's also important to point out at this stage the fact that we have not derived the reference frame. To understand this, consider the fact that the words of this post are an improbable sequence of letters that convey information. But they only convey information given the pre-existing reference frame of the English language (ignoring the additional layers of complexity which are represented by the medium on which this is being read) – they say nothing about how the English language came about in the first place. The information required for Cytochrome C to be present in an organism is the DNA sequence that encodes for it in the genes of the organism, but it is also the reference frame which includes the mechanism to convert the DNA sequence into a protein. The task of darwinism – or any “ism” that addresses the issue of origins – isn't only to explain the appearance of Cytochrome C (for example), but also to explain the presence of the reference frame which allows Cytochrome C to be encoded and manufactured on demand.

    The question is more subtle even than this. For example, the darwinian presumption is likely to be that the 113 different Cytochrome C sequences enumerated in the table above are functionally identical, and the differences in amino acid sequence simply represent evolutionary divergence. However, it is conceivable that rather than being functionally identical, each version of Cytochrome C is actually specific to the species in which it is found – that the reference frame isn't simply a generic DNA coding and expression framework, but is the specific organism in the case of each protein. This is perhaps unlikely for a relatively simple protein like Cytochrome C, but may be more relevant for complex and specific proteins. This issue is at the heart of the ID objection to many of the co-option scenarios that are proposed to explain the appearance of complex biochemical systems – that it is an unjustified darwinian assumption that proteins can arbitrarily be re-used or re-located within an organism, ignoring the reference frame.

    However, these issues can be put aside for now, as long as they don't disappear off the radar indefinitely.

    To be continued ...

    Wednesday, November 08, 2006

    Hours per degree

    So people on different courses do different amounts of work? Well, how amazing!

    For the record, on my CompSci degree, we had four hours of lectures six days a week, 9 am to 1 pm. We also had two 2-hour practicals in the afternoons. That's 28 hours of scheduled work per week. In addition to that, we had projects that we had to work on, and programming course work. I see the medical student started at 10 am. No such luck for us. In large measure, we simply ended up skipping the 9 am lecture.... Natural Science Part I entailed a similar workload - I think we had three practicals, rather than two, but on two days a week, we only had 3 hours of lectures. I still remember the regular dash down Tennis Court Road from the Old Museums Site to the Chemistry Department at 11 am - 300 cyclists in tight formation.

    In the meantime, there were rumours that the Land Economy course at the same university involved just 2 hours of lectures a week. That made the student newspaper at the time.

    My nephew has timetabled (I believe) 7 hours of lectures per week on his English degree.