Pages

07 December 2017

Panglossian Wonderland

It's quite maddening to watch as politicians do an obviously bad job of Brexit and then give Panglossian self-reviews to the media. We are required to believe that the current disaster is the best possible outcome in the best of all possible worlds.

It's as though we've gone down the rabbit hole to Wonderland. Up is down and sense is nonsense.

But this theme of disaster being spun as a great victory is nothing new. Remember the Alamo? The Alamo was a failed attempt to withstand a siege; virtually all the defenders died. The Charge of the Light Brigade? A suicidal cavalry charge at artillery that left hundreds dead or wounded. The more suicidal a mission is, the easier it is to sell failure as success.

While the news is focussed on the Brexit disaster, something weird is going on.

Here in the UK the government tops up earnings for almost every working person in some way. And meanwhile the top 1% of the wealthy are squirrelling more and more wealth away in tax havens.

The government subsidises low wages so that wealthy shareholders can take an ever greater slice of the profits of industry. In-work benefits don't benefit workers struggling on subsistence wages, they benefit employers who have no incentive to pay a living wage.

Landlords are subsidised to charge rents much higher than most people can afford, because govt meets the difference. Business owners are subsidised to pay low wages, or offer zero-hours contracts, knowing that govt will top them up with "benefits".

And this is called "Free Market Economics"? Pull the other one. This is market manipulation to favour the rich. It's welfare that is being targeted at the rich. It is one of the biggest confidence tricks in history.

The really creepy thing is that the turkeys are all voting for Christmas and attacking anyone who suggests they should support the turkeys instead of the farmers.

I'm reminded of Monty Python... "Terrific race the Romans, terrific".



Anyway there's some really fucking evil genius at work here.

30 November 2017

Existential

Corresponding with a colleague I summed up life's problems like this.

We're self-aware, which has many advantages. However, it has the disadvantage of making us acutely aware that life is short, precarious, and unjust.

Like all living things we have an imperative to persist. But self-awareness tells us the awful truth. We are going to die.

We need certain things to survive--shelter, food, companions, etc--but self-awareness informs us that getting these things is often beyond our control. Nature just does its thing regardless--floods, fires, hurricanes, predators, droughts, etc.

And finally, evolution has prepared us for living in small communities where we know what everyone is doing, and where following the rules cements the social cohesion that makes the social life-style so successful. So self-awareness has given us the ability to create small-scale just societies, or at least societies in which principles of justice can be applied. But it also tells us that nature doesn't follow our rules. And in large societies people are not bound to follow the rules in the same way, so some feel free to break them. And of course we die. Life, from a human point of view, is on balance not just or fair.

Now some of us by luck and hard work can become reasonably insulated from hardship or injustice. We have enough to eat, access to clean water, live in a safe neighbourhood, and have either natural protection from nature, or the ability to rapidly rebuild after a disaster. Some of us manage to make life much less precarious and much less unjust. It's not perfect, but some of us do pretty well.

But no one escapes death. Everyone, every living organism dies. Communities persist much longer, but even they eventually die. Death is the one thing that we cannot escape; and yet we have this cellular imperative to persist. All living things have this imperative to keep going. Life never simply gives up, it always dies trying.

If you want to understand human beings this is one angle that must be considered. We know, we cannot avoid knowing, that in the end life is short, precarious, and unjust.

What do all religious leaders and politicians promise us? They promise to deal with the easy problems: resources and justice. Work, fair pay, fair prices, decent housing. The whole idea of the market economy was that it would efficiently deliver these. That it has made everything worse for everyone except the very wealthy is because it was based on mythology rather than science.

And they promise to be tough on crime. They will keep us safe from threats domestic and foreign.

The only thing that religion offers that is different is immortality. Religion tells us how to cheat death. And when we lose faith in the organised religions, we latch on to the New Age versions of these myths (ironically largely recycled and remixed from organised religion).

Here's the thing though. It is possible to deliver security and justice, at least to some extent. Ok, we've been going in the wrong direction for a few decades because we were hijacked by an ideology, but that doesn't discredit the whole enterprise. It just tells us that strong government and intervention are required to address the precariousness and injustice of life.

What cannot be delivered by anyone anywhere is immortality. Priests of many varieties promise it to us, *after* we first die, but it is not possible. It won't happen. It cannot happen. We have to make our peace with this. But it is much harder than it sounds. To really face your own death is horrifying for most people, especially when you're young. Though illness, especially mental illness, can make death seem welcome. For most people the imperative to persist is the strongest motivation they have. People survive concentration camps and all manner of deprivation or brutalisation. They hang onto life.

And to me, if there is only one life, then it is all the more precious. What I do with my life is all the more important. Yes, it is difficult, and sometimes I wish I was dead. But I'm usually sure I can hang on for one more day.

If I'm right about this, and I think it's all fairly self-evident, then if I'm going to make a difference, the obvious place to apply my lever is in the areas of the precariousness and injustice of life. That is, *helping people*. There are many ways to approach this. We all have different things that spark us off. Not everyone needs to be the Secretary General of the UN. For me being part of a community with a vision is important, because people working together are more effective than individuals. What I do can help make our collective more effective at addressing the problems that people have; even though my personal contribution might be quite small and obscure, collectively we amount to more than the sum of our parts.

And of course sometimes we have to keep chipping away without much sense of progress or success. The 19 blows that weaken the rock make it possible for the 20th blow to split it. Persistence is a great virtue.

I think most, if not all, of us know this stuff, at some level. We are all trying to make things better around us. We don't all have effective strategies and tactics, but we're trying (sometimes desperately). We all face the same existential problems. Nothing much has changed, in this sense, in thousands of years (millions even).

We don't need to love everyone or take on the sorrows of the world. All that grandiose rhetoric. We just need to do what we can to make life better for the people in our sphere of influence without making it worse for anyone in the process. If we all put the effort in locally, the global change will come.

23 November 2017

Müller’s daṇḍa

I seldom get to write comedy in my articles, but this line comes close:
"It is possible that Conze was influenced by Müller’s daṇḍa."
Of course I'll have to explain. The word daṇḍa literally means "stick, rod". It is used figuratively to mean "beating, punishment". It's pronounced a bit like English "dunder", which from the 1620s was used in the (Germanic-sounding) expression "dunderhead" ("a ponderously stupid person" according to the OED). The origin is obscure, but it might come from Dutch "donder" meaning "thunder". Both Müller and Conze were German by nationality, though Conze was in fact born in the UK (which enabled him to claim citizenship in 1933 when he fled from the Nazis - he wasn't a Jew, but he was a communist).

Anyway, daṇḍa is also the name of a Devanāgarī punctuation mark, which looks like this: | . And, as with all such things, there is always the Freudian connotation. So yeah, this is almost a dick-joke.

In 1884 Müller was working at Oxford University and he produced the very first Sanskrit Heart Sutra published outside Asia (only about 1000 years after the first Chinese printed books). Unfortunately, he inserted a daṇḍa where it should not have been. Conze (presumably following Müller) inserted a full-stop in his Roman script edition in 1948.

The result is that one sentence becomes two, but the second one is a fragment with no verb and no subject, just a string of adjectives hanging around causing trouble.

It would be unfair to refer to Conze as a dunderhead, after all, he spoke 14 languages, but the man was sloppy and often cut corners. He was a kind of intellectual cowboy (in the modern English, rather than the classic American sense). We really need to tear down everything he did and do it again properly this time. It puzzles me that he was ever revered, but he was and still is.

Of course the fact that the text everyone has translated from is garbled and incomprehensible at this point has never stopped anyone from translating it as though it made perfect sense (though Red Pine slyly switches to translating the Chinese version here).

I am writing an article arguing for the removal of the extraneous full stop. In 2015, I made a killer argument for adding a dot over one of the letters, so this kind of balances things out. I get my laughs where I can.

06 November 2017

Justice

Modern life is weird. We have great new gadgets. Even my cheap phone takes amazing photographs, allows me to edit them, and then post them to a globe-spanning computer network.

But I believe that we are also witnessing changes that will be extremely deleterious. One of the ancient principles of justice is that someone accused of a crime is presumed to be actually innocent until they are proven to be guilty in a court.

The burden of proof lies entirely on the prosecution (usually the state). They must show beyond any reasonable doubt that the accused is guilty. And they are innocent up to the point where a judge pronounces them guilty.

This principle is Article 11 of the UN's Universal Declaration of Human Rights.

Of course if someone confesses to a crime, that is different. Confession acknowledges the crime and invites punishment.

I believe that this principle is under serious threat, because accusations are being equated with guilt. This was in fact the post-Roman Germanic system. If twelve people all accused you of the same thing, then you were guilty, unless you could prove otherwise. 

On the other hand, presumption of innocence also presumes an impartial judiciary. And I think it is fair to say that the judiciary and police are not always impartial. They are, for example, often racist and sexist. To my mind this is part of the same problem; it undermines the presumption of innocence if accusations are not taken seriously.

We seem to be veering back to the presumption of guilt. It is tied to the practice of reporting crimes to the media rather than to the police. Of course if the police do not take reports of crimes seriously, it may be argued that they leave victims no choice. So there is blame on the police in this case also. Still we are increasingly seeing this strategy of using the leverage of the mass-media to bypass the judiciary and the presumption of innocence, and to attack people in the public eye.

It seems to me, based on media reports, that, say, Harvey Weinstein is guilty. It certainly *seems* that way. By the Germanic standard, more than 12 people have accused him of similar crimes. I find it hard to believe he is *not* guilty. And in fact Weinstein seems to have made a tacit admission that he has committed crimes of that nature. Where there is admission of guilt, then the presumption of innocence no longer applies.

I also think of Paul Gambaccini, an American living in the UK who gained prominence as a DJ. He is now a patron of the arts, a noted philanthropist, and a BBC radio presenter. Gambaccini was one of many men arrested for "historical sexual offences" during Operation Yewtree. He was suspended from his job while the accusations against him were investigated and became the subject of considerable media speculation. The police were themselves using the media to raise the profile of their work bringing pederasts to justice. Although Gambaccini was never charged with a criminal offence, he was under a cloud for a year, and unable to work during that time. He argued that the presumption of innocence did not seem to apply to him (he wrote a book and is suing the police).

In the case of historical offences that were either not reported at the time, or where reports were ignored, there is a deficit of justice. Clearly there is a huge backlog of allegations that are now emerging. A lot of children were targeted by pederasts and not protected, because society as a whole had trouble believing that the problem was there at all, let alone widespread in places like Catholic Churches, Scout groups, or football teams. And the media (especially the UK media) fan the flames, because they thrive on four emotions: anger, fear, disgust, and lust.

The problem is that we cannot suspend the principles of justice in the pursuit of justice delayed or denied. Justice delayed or denied is not justice; it is clearly unjust. But suspending the presumption of innocence is also unjust. Gambaccini was arguably punished by society merely for being accused of a crime. And maybe he does deserve to be compensated for this.

Justice is vitally important. Without it our society will fall apart. People, especially people of wealth/power, have to be held to account. Some of the most ancient rights we have as citizens were wrung from the monarchy by force (or the threat of it). The idea that we are all equal in the eyes of the law is a precious victory for ordinary people.

We all have our opinions and intuitions about what we read in the media about people. But the media are not reliable guides to what is going on. The media are a business, whose sole aim is to provide dividends to shareholders (this aim has completely overwhelmed any other aims of business in the 21st Century). They do whatever it takes to make a profit and pay shareholders a fat dividend, limited only by what the law allows, and often not even that (as we know all too well in the UK).

In the end it is only through careful presentation of all of the evidence, and weighing it up in an unbiased manner (without the media hype), that justice can be served. If we serve a lesser mistress than justice, then we are in real trouble. And I suppose that many people would say that we are right now in real trouble because of the past denial of justice. To me this is not an argument for allowing the system to break even more; it is an argument for fixing the system and making it work for everyone.

More than most countries, Britain suffers from an "old-boys" network of men (and some women) educated in expensive and exclusive private schools and brought up to see themselves as naturally morally superior in ways that do not relate to how they behave. This means that they don't see themselves as bound by the laws made to keep lesser people in check. It is a monopoly on power that any good government would smash; but of course they are the government (and more recently the so-called Labour Party was run by them as well). In fact so much progress had been made in the UK that in 1979 a woman who was not part of that social elite became PM. But things have gone severely backward since then, partially as a result of reforms Thatcher herself instituted. The elite make a show of being socially liberal, while trying to entrench the power of their class economically and politically.

Many depictions of Justice, personified as a woman with a pair of scales and a sword, show her blindfolded. That is to say, blind to the social status of those being judged. What such static depictions don't really get across is that the sword does not strike until there is a clear judgement. Accusations are not convictions. It is all too easy to create smoke without fire, especially nowadays. We cannot trust the media, they do not serve us, they serve only their shareholders. And the shareholders seem only to be interested in accumulating personal wealth. Justice, it seems, cannot bend people to her will. But the will of the people can circumscribe the power of the ruling classes - and that is where most of our human rights come from.

Arguably, a society is fair to the extent that people have wrested power away from the ruling classes and made it fair.

02 November 2017

Scifi Tropes

I didn't take this one. In fact this is the first ever "space selfie" taken by Buzz Aldrin on an EVA during the Gemini 12 mission.

I love scifi, but one thing that always bothers me about scifi in space is that helmets have internal lighting. I can understand why they must do this on TV - glass is reflective and if the interior of the helmet is dark, then you won't be able to see the face of the actor. But it does mean that the actor cannot see out very well, if at all.

In this shot Buzz has managed to get the sun angling in from the right of shot so you can see his face, albeit with some strong shadows.

In my scifi stories, no suits have glass any more. Glass is vulnerable to cracking (common trope) and an astronaut can easily be blinded by unshaded sunlight (also a common trope). And glass doesn't stop all forms of harmful radiation - of which there is a great deal in space. Note also that in Aldrin's 1960s suit his field of view is clear upwards, but he cannot look down at all - he cannot even see his hands unless they are raised to shoulder height!

In my stories, suits have VR goggles and hi-res, multi-spectral cameras (with digital zoom and other processing features). At this point it's probably cheaper to do this than make a sphere of toughened glass. Cameras and screens now have better resolution than our eyes can detect. This also allows the possibility of overlays with infrared or ultra-violet light, which would be very useful! And the helmet now provides much better radiation protection.

A super version of this plumbs the video feed directly into the optic nerve - very limited versions of this are available now for artificial retinas.

But of course you cannot see the face of the person in the suit. So, bad for TV, unless the people are sinister.

Another weird thing, which I think might happen IRL, is that the whole suit has atmosphere. So a common scifi trope is that a hole in the leg means all your air leaks out and you die. In my suits only the helmet has air, and there are no external hoses to spring leaks or be yanked out. And the whole thing is self-sealing (using currently available tech).

Which also reminds me, most scifi stories go on and on about "oxygen levels". Oxygen is non-trivial, but the most important thing, as we saw in Apollo 13, is carbon-dioxide levels. CO2 build-up kills quicker than lack of O2 in almost all of the scenarios when people are "running out of air".

One day I'll write some of these stories down!

30 October 2017

That Fucking Cat.

Most people have heard about Schrödinger's cat. Schrödinger was trying to disprove the so-called Copenhagen interpretation of quantum mechanics - the idea that until a particle is "observed" it is literally in all possible quantum states (i.e. spin, charge, mass, position, velocity, etc) at once. He thought this proposition was absurd so came up with this metaphor of the cat in the box.

We don't know what state the cat is in until we look, so the cat is both alive and dead. Which is clearly absurd where cats are concerned. But it might not be so absurd when we are thinking of, say, the "spin" states of sub-atomic particles. This is still the most popular interpretation of quantum mechanics, though slightly less than half of all physicists agree with it.

The real problem we get at the popular level is that people don't get that the "observer" is also a metaphor. They think consciousness must somehow be involved. But it isn't. No one ever directly observed anything at the nanometer scale - we are not equipped for it. "Observe" here means physically interact with. The wave function of the particle collapses whenever the particle physically interacts with another particle. And at the particle scale it means any time two matter particles (fermions) exchange a force particle (boson). No consciousness is involved or required.

In terms of the cat metaphor this means that the universe does know whether the cat is alive or dead, because the cat interacts with the matter around it—it has to be in contact with the floor of the box, for example. Touching the box collapses the wave function of the cat, and the cat is either alive or dead, not in a superposition of states (it probably never is).

So the thought experiment does not work unless the cat is suspended in a perfect vacuum, in perfect darkness, isolated from all magnetic fields, and not subject to gravity (even from the box!). In other words it would have to be in another universe which consisted of the cat and only the cat, and which could not communicate with our universe in any way. In which case we'd never know the outcome of the experiment. The uncertainty is either permanent or non-existent.

In other words, in the end it's just a bad metaphor. Schrödinger decisively lost his argument with the Copenhagen crowd. They took this weak attempt to discredit them, and turned it into a powerful positive symbol of their approach. Ouch.

In fact in something on our scale - with (literally) hundreds of billions of trillions of particles in every gram of matter - the wave function is always collapsed because there is interaction going on all the time. People are always in one particular state at any time (though time flows incessantly).

If humans behaved like particles we would be diffracted every time we walked through a doorway. In other words we would emerge at a random angle every time. (I'm assuming we're sober, eh?).

If there were two doorways close together, as we approached we would blur out, pass through both doorways simultaneously, interfere with ourselves, and come back into focus moving at a random angle. I've staggered through some doorways in my time, but never two at once.

If people were like particles, when you did one forward-roll (↻360°) you would end up upside-down, and would have to do another one to get right-way up.

If people were like particles we could only know where we are or where we are going, but not both at the same time.

If people were like particles, then having people watch you walk down the road would cause you to swerve.

Quantum doesn't work on our scale. There never was a cat. Consciousness has nothing to do with it. Most of us can safely ignore quantum and behave as if the world is classical because at our scale and in our frame of reference, basically, it is.  Or, to be more accurate, classical theories predict the behaviour of matter on scales of mass, length and energy we can actually experience, to a higher degree of accuracy and precision than we can currently measure.
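
To put a number on "our scale", a quick de Broglie calculation makes the point. Here is a minimal sketch in Python (the mass, walking speed, and electron speed are illustrative values I've picked, not measurements of anything):

    # Rough de Broglie wavelengths: why people don't diffract through doorways.
    # lambda = h / (m * v); h is Planck's constant in joule-seconds.
    h = 6.626e-34

    def de_broglie(mass_kg, speed_ms):
        return h / (mass_kg * speed_ms)

    person = de_broglie(70.0, 1.4)        # someone walking through a doorway
    electron = de_broglie(9.11e-31, 1e6)  # an electron at about a million m/s

    print(f"person:   {person:.1e} m")    # ~7e-36 m, next to a ~1 m doorway
    print(f"electron: {electron:.1e} m")  # ~7e-10 m, about the spacing of atoms

A wavelength of 10^-36 of a metre against a metre-wide doorway is why the doorway-diffraction scenario above is comedy rather than physics; an electron's wavelength is comparable to atomic spacings, which is why electrons really do diffract.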

Which is not to say that on very small scales it is not possible to have quantum effects. It is possible. Gravity is so weak you can ignore it on very small scales. If you cool everything down close to absolute zero, exclude electromagnetic fields, and tweak other conditions, you can see quantum effects up to the scale of some fairly large single molecules, such as bucky-balls (C60). Beyond this, and much above absolute zero, quantum effects simply disappear.

27 October 2017

Ancestors Fallacy

There's a cognitive mistake we make that needs a name but doesn't have one as far as I can tell. In the linked article about wolves the author starts out with an example from closer to home.


It's often assumed that because great apes knuckle-walk when on the ground, we must have evolved from knuckle-walkers. We talk about "learning to walk upright" to make the distinction clear. In fact we evolved from a common ancestor, and that ancestor is likely to have been a brachiator - to have lived mostly in the treetops, like today's gibbons and (some) lemurs.

When a gibbon is on the ground it almost always walks on two legs, using its arms to stabilise itself. YouTube thinks this is very funny and has many videos of gibbons walking "like a human". It's not that they have "learned to walk like a human"; that's just what they do on the ground. Note that even the great apes adopt bipedalism when wading in water.

So there is a fallacy here. The fallacy tells us that the chimp is our closest genetic relative, so our ancestors must have been chimp-like (hiding in there somewhere is the assumption that chimps are "primitive" animals and that only humans evolve). But our ancestors were not chimp-like. Chimps, gorillas, and orangutans evolved knuckle walking; humans never did. Nor did gibbons.

The modern view of evolution is that all organisms currently alive have been evolving for exactly the same amount of time. So just because a bacterium looks archaic does not mean that it is archaic. Everything currently alive is modern. Evolution doesn't stop for living things.

Chimps and humans have evolved for equal lengths of time. And neither of us is more like our common ancestor than the other.

This means that there are no "primitive" tribes. Hunter-gatherers living now are not necessarily representative of pre-historic humans unless we know the conditions they live in have not changed (and given the intervention of the last ice-age this is extremely unlikely).

Yes, humans arrived in Australia as long ago as 60,000 years before the present, but they constantly changed. They evolved literally hundreds of languages, for instance.

In my field of study, modern schools of Buddhism, whether Theravāda or Tibetan or Zen, are often seen as representative of some previous era. This is the same fallacy. All forms of Buddhism currently being practised are modern (though not all are modernist). No amount of social conservatism can prevent changes from accumulating over dozens of generations.

I'm not sure what to call this fallacy.

26 October 2017

Studying the Heart Sutra

As everyone knows by now, for the last 5 years I have specialised in studying the Heart Sutra. Apart from identifying several grammatical errors in the standard Sanskrit version, I've also made or substantially confirmed a couple of groundbreaking discoveries about the text.
It is a curious thing about Western Buddhism that we privilege texts which we believe to have a Sanskrit "original". It is curious partly because Sanskrit did not come into popular use amongst Buddhists until about the 4th Century of the Common Era. Anything actually composed in Sanskrit is quite late.

In fact, for well over 1000 years, the Heart Sutra was the most popular text in East Asia in its Chinese version, and probably few people were even aware that there was a Sanskrit version. They might have understood that it notionally came from India, but the study of Sanskrit was never widespread outside of India and most people wouldn't have known any Sanskrit (much like now). Asians knew the Heart Sutra as the Xinjing and recited it in Chinese, or some approximation of the Chinese (in Korea, Japan, and Vietnam). The Tibetans had a Tibetan translation from about the 10th Century and used that exclusively (it also has errors in it!).

For a brief period in the 20th century, the Sanskrit text came to the fore. An old palm-leaf manuscript preserved in Japan was dusted off and became the primary focus of the study of the text. A number of very late Nepalese manuscripts were found. A couple of stone inscriptions from China were added. And from these Edward Conze constructed his edition. His version of the Sanskrit text (more like an original composition) became established in the minds of Westerners as the "original".
Then in 1992 Jan Nattier showed that the Sanskrit text is a translation from Chinese! The Heart Sutra was composed in China. This is quite a big deal for the most popular sacred text in the Buddhist world.

How have Buddhists reacted to the revelation that their most popular text was a fake, composed ~1100 years after the Buddha is supposed to have died? Mostly with denial or indifference. I'm still not sure whether the latter is admirable insouciance or something more dubious or insidious.

25 years after Nattier's amazing 90 page article (which I still consider the best example of academic writing in our field) there is still not much reaction - even when her idea is acknowledged, it is without any drama. The word apocryphon is sometimes used as a euphemism for "fake", but other than that no one seems bothered.

Nattier's article has had almost no impact on my own Buddhist Order. I suppose as a preeminent scholar of the text, I bear some of the responsibility for that, though I have written 32 essays on my blog which touch on aspects of the Heart Sutra. If anyone were really looking, they'd have found them.

Several people have lately encouraged me to teach on the Heart Sutra at the Cambridge Buddhist Centre. I think probably next year, but yes, I think that might be good.

19 October 2017

The Tyranny of Now*

Some people tell us that we can forget about the past and the future and just focus on now. And if we do this then all our problems will disappear. I call bullshit. Screw Eckhart Tolle - I'm sure he's a nice guy, but he seems to oversimplify everything.

To begin with, this attitude assumes that if we don't think about our emotional baggage that it will stop having an impact. I don't know about you, but that's not how it works for me.

Leaving behind a lifetime of habit is a process, not an event. And that process may well take up the rest of our lives and remain incomplete when we die. Generally speaking we have to see that we are moving beyond one situation (past) and towards another (future) to stay motivated (now). 

So the past is useful because it is only in comparison to our memories that we have a sense of progress (or any change at all for that matter).

Similarly with the future. You are asked to set yourself adrift on the ocean with no destination and let the wind and currents take you where they will. But we know what this lack of purpose and direction looks like. It manifests as dissolution or in wasted, directionless lives. In one of my songs I sing about feeling like "a fish with no tail, eyes wide, no idea where I'm going". I don't sing it, but to me this is terrifying. This is Hamlet, watching helplessly as inevitability crushes him. Fuck that!

While it is certainly a good idea to focus on the task at hand, for many reasons one has to keep one foot in the past and an eye on the future. Tasks are more enjoyable if we are focused (now), but they are more meaningful if they take us toward a definite goal (future) and more satisfying if we have a sense of progress (past). And if that goal benefits the community then so much the better for us. When we contribute to something bigger than ourselves (society, basically) it is about the most meaningful thing we can do.

Most of the important things in life are complex and difficult and therefore require persistence over days, months, and years. Bigger goals cannot be achieved without a clear sense of direction or a sense of progress as one proceeds.

Without the future we have no sense of the meaning of our actions. Without the past we have no sense of progress. So whatever you do, do not lose sight of either.


~~oOo~~


*I took my title from a TED talk by Carol Dweck.

But I was also thinking about a Sanskrit phrase which translates as "future, past, and present buddhas" (atītānāgatapratyutpannā buddhāḥ). Chinese Buddhists routinely translated this as 三世諸佛 "all buddhas of the three times", a phrase which is never used in Sanskrit (with one exception). Then, when the Heart Sutra was translated back into Sanskrit, the phrase was literally rendered as "all buddhas of the three times" (tryadhvavyavasthitāḥ sarvabuddhāḥ) - this is the one exception. This is the smoking gun that tells us that the text could only have been composed in China. Expect a publication on this in due course.

18 October 2017

Passive Voice and Crime

This is a response to a Tweet that showed up in my Twitter stream a couple of days ago. We often speak about crime in the passive voice. And there's a thing about the passive voice that will be clear to anyone who has learned Pali or Sanskrit - the passive voice need not express an agent. Which is why I prefer agent/patient to subject/object when discussing grammar.

In the active voice, a subject does an action to an object. In the passive voice an action is done by an agent to a patient. But we can and do use the passive voice without an agent, i.e. with just an action and a patient.

In terms of crime we can say things like:
"The woman was raped."
"A man was mugged."
"A child was run over."
"The official was bribed."
"The house was burgled"
This is a pretty common way of talking about crime. What's missing in all of these statement is the agent of the action: "... by a rapist", "... by a mugger", "... by the dangerous driver", "... by the developer", "... by a thief". And so on. Just because the verb is in the passive voice, does not mean that the action is not carried out by someone.

Similarly, there is a trend for people who have responsibility to skirt it by saying bullshit phrases like "mistakes were made". In which case we can always ask "By whom were the mistakes made?" Just because they shift to the passive voice, does not mean that we are forced to abandon the notion of a grammatical (and real) agent of the action.

Use of the passive voice without an agent is a problem to the extent that it shifts the conversational emphasis onto the grammatical patient, i.e. the victim, the location, or the nature of the crime, while obscuring the agent of the action. Of course crimes happen to us, against our will, so the passive voice is designed for exactly these situations. But if we leave off the perpetrator of the crime, we may create an unfair situation.

Why? Because when we comprehend actions we typically understand them in terms of agents with motivations. And why is this? It's because the archetype for a willed action is our own experience of turning our head to say we've had enough milk. Or our first experience of grabbing something and pulling it closer. The archetypes in other words are our own willed actions.

So if we only mention the patient of the criminal action, then we leave a conceptual gap in which the victim (potentially) becomes the agent: i.e. we blame the victim. Someone has to make the action happen, and if the actual agent is out of the picture, then we look to the only other participant. Crime is emotive, and perhaps no crime more so than rape. If someone was raped, then yes, I think it is vital that we insist that it was an action carried out by someone.

In rape, resistance often makes things worse, for example, because the assailant may become more violent and it both intensifies and prolongs the experience. Each case is different, but no woman ever wants to be raped, or "asks for it". That much has to be clear. And it ought to be clear in how we talk about it. But it's not justice to hold a whole section of society to blame for the crimes of individuals. This is an important principle of our justice system: collective punishment is not just. I cannot be blamed or punished for crimes which I am not involved in committing. I don't accept that just being a man makes me complicit in violence. I've been the victim of more violence than most people I know. Quite a bit of that was from women or girls, by the way.

With that said, I do want to continue to think more about the use of passive voice verbs in the way we speak of crimes generally. For example, with respect to the example of bribery, you may have thought, "hang on, the official who was bribed actively committed a crime by accepting the bribe." Yes, they did. The bribe was accepted by the official. Interestingly, this is the passive voice, but discussion of bribery always seems to specify an agent. The verb is passive in this sentence, but it is clear who is doing what. So this makes it an interesting one to think about. For every crime there is a criminal.

It's important to specify the agent of the criminal action, especially in the case of groups who tend to be oppressed or disempowered. The story is not, to take a topical example, that some actor was raped, but that an actor was raped by Harvey Weinstein (allegedly). The criminal becomes the focus rather than the victim of the crime. Our justice system is skewed towards punishing perpetrators and so we have to identify them, or we consider that justice has not been served. A more restorative justice system would go about it differently and would require us to focus on the victims. 

A load of crime words are used in both the past active and past passive voice: "he raped..." and "she was raped...". Similarly with murdered, robbed, etc. We almost always talk about crimes in the past - unless we are in the process of being mugged or whatever.

Of course this is more difficult when the criminal is as yet unknown or as yet not proven guilty (as in Weinstein's case). But the thing about the passive voice is that it cries out to be qualified "by....". Which is why one amusing way to identify a verb in the passive voice is to see if following it with "by zombies" still makes sense. e.g.

  • The man was being pursued [by zombies]. Makes sense, verb is passive. 
  • The man pursued [by zombies] his dog. Doesn't make sense, verb is active. 

The person on Twitter who inspired this little rant was insistent that perpetrators should particularly be identified as men. To me this smacks of the old "all men are rapists" bullshit. A man might have raped a woman, and yes, it is usually a man, but actually the number of men who are rapists is pretty small. I have known many hundreds of men, and I know of one who was accused of rape. I'm not sure that anything is gained by emphasising the gender of criminals. In the case of violence, men are very much more likely to be the victims of violence than women are.

In any case, people sometimes say we should strive to eliminate the passive voice. When I looked at a few news headlines, I did not see much use of the passive voice. Many crime stories do use the active voice and of course are therefore forced to attribute the crime to someone. So maybe the prejudice against the passive voice is having an effect. In which case the original complaint might have overstated the problem.

On the other hand, because we attribute crimes to someone, and are often lazy about the adverb "allegedly", some people are splattered with guilt by association. The "no smoke without fire" fallacy. I have seen no evidence that anyone thinks that Weinstein did not rape, molest, and pester women, though he has yet to be charged by the police, let alone appear in court to be judged. He is being tried in the media and punishment has already commenced.

On the other hand, when the police raided Cliff Richard's house, and tipped off the media so that they could film it, the man's reputation was severely damaged by the allegations. A crime was more or less deliberately attributed to him, when in fact, as far as anyone knows, he is innocent (false accusation is also a crime). Same with Paul Gambaccini, who was caught up in the same furore, but was always innocent. Accusations of child sex-abuse are extremely damaging, especially to someone who makes their living in the public eye. And that is balanced against the damage that sex-offenders cause if left unchecked (as they have been for decades in the entertainment industry).

So, even if we were to switch entirely to using the active voice, the way we talk and think about crime is not a simple matter. We usually have less than perfect knowledge and people are unreliable witnesses (both passively and actively).

There is nothing inherently wrong with the passive voice. Especially when things happen to us against our will, the passive voice is exactly what we need to express that directly. If someone punched me in the face we could look at it in different ways. If I wanted you to empathise with me and perhaps comfort me, I might say "I was punched in the face". The focus is on me. But if I want you to get angry I might say "Phil punched me in the face." Now I am directing your attention to Phil. If you report this to your friend you (unconsciously) make similar determinations, i.e. who is the focus? What emotion am I trying to elicit? Who is to blame? And so on. A good deal of subtlety is available to us by adding extra words, stress, and facial expressions to the mix.



16 October 2017

Technological Frogs

The universe as we know it began 13.7 billion years ago. The earth formed out of the solar disc about 4.5 billion years ago. The first definite evidence of life can be dated to about 3.5 billion years ago. Mammals evolved a bit over 200 million years ago, and primates about 60 million years ago. Modern humans first appear between 300,000 and 200,000 years ago; they left Africa about 100,000 years ago, and settled in Europe about 40,000 years ago (having bonked a few Neanderthals along the way).

Electricity was discovered in the 19th Century. The triode amplifier was invented in 1906. TV was invented in 1927. The first electronic computer was built using vacuum tubes or "valves" in 1943. The transistor followed in 1947. Integrated circuits combining multiple transistors were invented soon afterwards but were not mass-produced until the early 1970s.

The first TV broadcast in New Zealand was in 1960. I was born in 1966. I remember the manual telephone exchange where you told the operator the number you wanted and they manually connected you. I remember the first time I saw colour television (ca. 1972), and the excitement of a second TV channel in 1975. I remember my older brother getting an electronic pocket calculator ca. 1976.

The first personal computer based on ICs was marketed in 1977 (just 40 years ago). You had to assemble the circuit board yourself!

I first saw a personal computer at school in 1980 and learned to program it in BASIC and Assembly Language (though I realised that I didn't really enjoy programming that much).

Computers double in power every 18 months or so (Moore's Law). So my current PC ought to be roughly 17 million times more powerful than those Apple II computers at Northcote College. But with, like, a billion times more RAM and a trillion times more external storage.
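
For what it's worth, the arithmetic behind that estimate is just repeated doubling, and is easy to sanity-check. A minimal sketch (the 18-month doubling time is only a rule of thumb, and the answer is very sensitive to which years you pick):

    # Moore's Law as repeated doubling: factor = 2 ** (years / 1.5)
    years = 2017 - 1980        # since those Apple II machines at school
    doublings = years / 1.5    # one doubling every 18 months
    factor = 2 ** doublings
    print(f"{doublings:.1f} doublings, factor of {factor:,.0f}")
    # About 24.7 doublings, a factor of roughly 27 million; rounding down to
    # 24 doublings gives the ~17 million quoted above. Same ballpark either way.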

When I was born, a single channel black&white TV was the most advanced consumer electronics device I knew. An adult could just about lift one on their own.

Now a computer is my TV, record player, clock, telephone, camera, video recorder, tape recorder, library, teacher, publisher, recording studio, translator, etc. And I can carry it around in my pocket.

I worry a bit that we're like the apocryphal frogs being slowly boiled alive and not noticing until it is too late. And I think it is too late already.

14 October 2017

Deliberative Democracy

"When a sample of citizens is brought together, divided into small groups, and, with the soft prodding moderator, made to discuss policy, good things happen. The participants in these discussions end up better informed, with more articulate positions but also a deeper understanding of other people's points of view." Mercier & Sperber. The Enigma of Reason, p.309-10.

Mindset

Last week on the radio, a BBC presenter interviewed Dr Carol Dweck. She was initially a child psychologist interested in why some kids succeed and why some fail (I'm leaving these undefined on purpose). She identified an important pattern that was predictive and found that it applied to adults as well.

She called the discovery "mindset". And it sounds deceptively simple. If you go at a problem with the mindset that you can learn then you will. It doesn't matter what the problem is, if you believe you'll make progress, then you will.

However, if you start with a fixed mindset that says you can't do it, then you won't learn, you won't make progress.

I sort of naturally have a growth mindset when it comes to certain things. I've taught myself to paint, play music, read Pāḷi and Chinese, and a bunch of other stuff, because it never occurs to me that I can't learn. I get interested and just work away at it. Nothing I've ever done was simple. I was never a natural at music for example. I sang incessantly as a kid, but so badly that my mum sent me to a singing teacher so that at least I would sing in tune (so she tells me). When I started playing the guitar nearly 40 years ago, I had no clue. I struggled with everything. I constantly made mistakes. But I just kept at it. I learned. I got better, slowly. It was hard. After 10 years I played pretty well. After 40 years I'm beginning to really understand the instrument. The learning never stops for me.

Now you may say that I have some kind of talent that perhaps you lack. But the research suggests that talent makes much less difference than we think. Mindset is what makes the difference. It's the approach that encompasses failure and is not destroyed by it which makes the difference.

One of the upshots is that we should focus on process - an insight that keeps popping up. If you praise a kid, focus on what they tried, rather than what they achieved. Keep them excited about the process of learning rather than making praise contingent upon success. Ironically, if we make praise contingent upon success, then kids don't succeed as often. In fact they often give up.

How many times have we heard someone say "I'm no good at maths"? That is a mindset problem, not an inability to do maths. Actually, everyone can learn to do high-school maths - it's just a matter of learning, and being convinced that learning is fun. If society or our teachers manage to suck the joy out of learning, this is not an indication that we are stupid. Yes?

And actually all along the way we fail. When you start playing the guitar or learning to drive or whatever, you fail every few seconds to start with. At the start, it's almost all failure. But you learn more from a failure than you do from a success; and if you learn then success starts to outweigh failure. If you are focused on *learning* then a failure is no big deal, because you learn more and actually enjoy it more. And with this mindset you succeed more often anyway.

A lot of people come to learn to meditate and the first time their mind wanders they say "I can't meditate" or "it's not for me". This is a fixed mindset. A growth mindset makes the mind wandering a fascinating learning exercise - you first of all realise that your mind simply wanders off without your permission(!), you start to understand why, you start to learn how to focus, and before long you are experiencing the incredible sensations of having a pinpoint focused mind. Then a whole new world can open up in which you use that pinpoint focus to examine your own mind. But only if you have a growth mindset, only if you approach it as something to learn, only if failure at first is not an obstacle to eventual success. Everyone can learn to meditate, with very few exceptions. Everyone would benefit from learning some basic meditation techniques, whether or not they want to take it further.

Learning goes on in a lively mind; it never stops. Every kid starts off with a lively mind. Staying lively has real benefits too. You are less likely to suffer dementia and other brain problems in later life. But you're also more likely to find meaning in what you do, because meaning emerges from being immersed in the process, not in achieving goals. Achieving a goal is a cadence, or punctuation point, in an ongoing process. And it is the process that really satisfies.

Very little else is satisfactory about my life and things have certainly not gotten any easier lately. But I'm still learning, still curious, still willing to take on new ideas and challenges. It's the process of learning that I love. It gets me out of bed each day and literally keeps me alive some days.

02 October 2017

Dunbar and Brain Size and Triratna

One of my colleagues wrote something, a little vague, about the importance of the number 150 in human society, and since I have a long fascination with this, I thought I would write a brief introduction.


Dunbar and Brain Size

In 1992 Robin Dunbar published a paper in which he compared the average neocortex-to-brain-volume ratio in wild primates with the size of their social groups. There was a linear relationship (on logarithmic scales) which enabled him to predict that the average human social group would be about 150.
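
To make that prediction concrete: the regression usually quoted from the 1992 paper relates the logarithm of group size to the logarithm of the neocortex ratio. Here is a minimal sketch in Python (the coefficients and the human neocortex ratio of about 4.1 are the commonly cited values, quoted from memory, so treat them as illustrative rather than authoritative):

    import math

    # Commonly cited form of Dunbar's (1992) primate regression:
    #   log10(group size) = 0.093 + 3.389 * log10(neocortex ratio)
    def predicted_group_size(neocortex_ratio):
        return 10 ** (0.093 + 3.389 * math.log10(neocortex_ratio))

    print(round(predicted_group_size(4.1)))  # ~148 for the human neocortex ratio

That ~150 is the number the rest of this post hangs off.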

Given that many of us live in cities with millions of people, what does this mean? It means that we use the most recently evolved parts of our brain to keep track of relationships - and to imagine how other people see the world, especially how they view their own relationships. This is an essential skill for a social mammal.

For example, all social mammals understand and operate a system of reciprocity. Sharing food, resources, grooming, guard-duty, or mates, etc., creates obligations for other group members. If I share with you, you have a social obligation to share with me. And vice versa. In apes and humans, we also keep track of obligations between third parties. I may share with you, knowing that you share with Devadatta, and that way come into an indirect relationship with Devadatta. Devadatta will probably notice that I share with you, and my reputation with him increases. And so on.

Humans can routinely track these abstractions into 4th and 5th order. Shakespeare could imagine how his audience would feel about how Othello would feel about Cassio, after being convinced that Iago thinks that Desdemona loves Cassio; while we also know that Iago is lying. Shakespeare could imagine our tension as the story progresses. What if Iago is found out? What if he is not? This is part of what makes Shakespeare a great story teller.

Keeping track of these social obligations takes brain power. The more of our brain given over to keeping track of such things, the more relationships we can keep track of.


The Magic Number 

Dunbar predicted that on average the maximum number of relations humans could keep track of in this way would be about 150. And it turns out that the average community size in the New Guinea highlands, the units of Roman armies, and the average village size in the Domesday book (and a whole range of other measures) was .... 150.

But 150 is not the whole story. 150 is the size of an intimate community where everyone knows everyone's business. But we are usually involved in both smaller and larger groupings. If 150 is a tribe, then a tribe is usually made up of several clans of about 50 members. Clans comprise several families of about 15 members. Each person has approximately 5 intimates. These groupings may overlap. On the other hand tribes may be part of larger groupings, of 500, 1500, and 5000 and so on. The smaller the grouping, the more intimate and detailed the knowledge; and contrarily the larger the grouping, the less intimate and detailed the knowledge. The limits seem to go roughly in multiples of three, starting with 5 as the smallest.

What we expect is that, in a society of 150 people who live together, relying on each other, each will know all of the others, and who is friends and relations with whom. They will be intimately familiar with trysts and disputes. And they will know who has what status under what circumstances.

In larger groupings there may be people we don't know. Larger tribal groupings may adopt symbols of membership with which to recognise other members. For example they may hang a strip of white cloth around their neck. They know that anyone who has one of these white strips is a member of the tribe. They can expect to have some basic values and interests in common, and thus are open to each other socially in ways they might not be with complete strangers. It may even be the case that the tribe mandates certain levels of hospitality. Some cultures require this even for strangers, when travellers are particularly vulnerable (as in the desert).

In larger groupings there are a number of ways of ensuring that everyone gets a say in how things are run. But let's face it, beyond 7 ± 2 everyone having a say is unrealistic. This is another magic number (aka Miller's Number) and relates to the capacity of our working memory. Groups bigger than ca. 9 tend to schism into separate conversations, unless formal procedures apply.


Schism

With respect to schism, the 150 level is the limit of a sense of knowing everyone in a society living together on a daily basis. Much beyond it, and some of the people are going to start seeming like relative outsiders. We don't know who they are friends with, for example. This may explain why, when humans who are part of a larger, less intimate grouping meet, they often exchange information that establishes *who* they know. It's likely that some above-average connectors know many more people, and across social networks. They are the glue that holds larger groupings together.

There is no absolute requirement to schism at any number. Schisms happen in small groups and large. But primates feel more comfortable with groups where they know the others. Being surrounded by strangers is often quite stressful for a social primate because they have none of the knowledge they need to know how to relate to everyone. On the other hand, being experts at empathy, primates pick up this info very quickly.

What tends to happen is that we are comfortable being relatively informal members of several larger groups, but prioritise our most intimate relationships and family.


The Order

The Order is complicated because most members of the Order are still enmeshed in other groups, particularly family. Even if there were only 150 of us, we don't live together as one community, relying on each other to survive. It is already a somewhat looser grouping than that, so the fact that it has crossed several thresholds (in total membership) is not a clear cut indicator of anything. Those who were around in the early days do tend to reminisce about how good it felt when you knew everyone. I would expect nothing less. But they all still had friends and family outside the movement too.

The Dunbar Number describes the dynamics in close-knit societies living together. Beyond 150, such communities do tend to split into more manageable groups. However, it doesn't really say anything about the Order because we are not that kind of society.

Note that the more plugged into other groups we are, the fewer relationships we can track in the Order. And vice versa. Note also that pair-bonding makes no difference to Dunbar's numbers. Primates adopt a wide variety of lifestyles and these are secondary.


Conclusion

Dunbar's original article rapidly became a classic of anthropology and evolutionary psychology (the latter being Dunbar's main subject of interest). His predictions became known as Dunbar Numbers while he was comparatively young (he is still alive and working at Oxford University). If there was a Nobel for evolutionary psychology, he'd have won it for this discovery.

As a final caveat I would insist that Dunbar's numbers are theoretical averages, albeit with considerable empirical support. There will be a bell-curve on which individuals sit. Some will easily cope with 300 relations, some will barely cope with 50. There will always be outliers, but the existence of outliers does not alter the theory or undermine its empirical support.

For further reading on the Dunbar Numbers and other concepts mentioned above, I very highly recommend "Human Evolution" by Robin Dunbar, published by Pelican in 2016. Aimed at a general readership, and highly readable, it nonetheless takes a cutting-edge look at human evolution by incorporating Dunbar and his group's research on group sizes and theory of mind. Dunbar explains how we went from being general purpose apes to highly specialised humans, and how we solved the energy gap required by our big brains and big social groups through cooking, dance, laughter, and religion.

05 September 2017

Rationality

On the new definition, reasoning is the process of finding reasons (justifications, rationalisations, etc.) for decisions already made and actions already taken. First comes the decision, then the reasons. It's always this way around for us, and unless someone enquires, we may not even have reasons for the things we do, think, or say. Unconscious processes guide all of our actions, but we are equipped to explain them to others if required. And we do this in a post hoc manner: reasons come after the fact and on demand.

Unfortunately, humans have biases in this department. For example, we stop searching as soon as we find any plausible reason; we don't keep searching for the best reason, unless we are arguing with someone who shoots our reasons down. Reasoning is a group activity, and solo humans don't do it very well.

When we don't have strong intuitions about a decision, it is still better to go with our gut. When we stop to reason about a decision, it drives us towards decisions that are easier to justify. But in the long run, such reasoned decisions turn out to be less satisfying.

One of the reasons we do this is to appear rational to our peers. This is very important for humans. We are social, and in the modern world appearing to be rational is an important aspect of group membership. Rational is defined locally, however. What is rational for the Girl Guides is not rational for the Tory Party or the Hell's Angels or my family.

Rationality is being able to offer reasons for actions and decisions that one's peer group accept as being rational.

Sometimes, when trying to fit into our social group, we make decisions that seem less than rational to an outsider. "Would you jump off a cliff if they told you to?" Anyone who has heard this in earnest will know what I mean. As it happens, my paralysing fear of falling kept me from jumping off cliffs, but it was a situation I faced in real life and yes, had I not been phobic, I would have jumped. I wanted nothing more than to jump off that cliff and be one of the gang. I did other brave things. Just don't ask me to jump off a kerb, let alone a cliff. Although I was always fascinated by space, I knew at a very young age that I did not want to be an astronaut for this very reason.

An outsider may see this as irrational. But as human beings, it may be more rational for us to do some mildly irrational things that assure us of group membership because group membership is a long term survival mechanism. We evolved to live in groups.

While making irrational decisions may be suboptimal, losing my social status, let alone being ostracized, is a catastrophe. So there is a delicate balance that we all know. We allow ourselves to be pressured into conforming because instinct tells us that acceptance is more important than rationality. And this is true.

Or it was true 12,000 years ago in our ancestral environment. In that milieu, living as hunter-gatherers, satisfying the expectations of our peers, was probably a good rule of thumb for life. More so when we consider that our "peers" included the older more experienced members of the tribe.

So yes, people succumb to peer pressure. They behave in atrocious ways. But at the time, in their milieu, it may have been the rational thing to do, no matter how ugly it seems to us now. Until you're in the situation, you don't know how you'll react. This is why surveying someone's opinion of how they would react is meaningless. What we do in crucial situations cannot be predicted, especially by ourselves. Asking people about the trolley problem (where you can rescue 5 people by killing 1) for example is meaningless. No one knows what they would do in that situation.

All we can do is imagine that we have done something and see how easily we can justify it. If we are then asked to explain ourselves, it will often change our answer, since we have to say the reasons out loud and watch the reactions of the person asking the questions. We get a better idea of how the justifications sound, and we choose the best justification, which tells us what action we might take in that situation. I'd be willing to bet that there is no long-term relationship between what we say we might do in these extreme hypothetical situations and what we actually do when it comes down to it. Although in more realistic scenarios that we actually have experience of, we can turn to that experience to guide us.

So rationality is not what we were taught. It is not what philosophers have classically defined it to be. Most solo humans are poor at reasoning and only reason well when arguing against someone else's proposition. Reasoning certainly uses inference to produce reasons, but it does not help us find truth or make better decisions. It may help us convince people that the decision we have already made is the only decision we could have made, or the best one; or it may help us describe why someone else's decision is the worst one.

The problem with the classical view of rationality and reasoning is that it is completely at odds with the empirical evidence. It is a fiction maintained in spite of the evidence. The classical view of rationality and reasoning is so far past its use-by date that it approaches being intellectual fraud or hoax. What is actually happening is a lot less grandiose, a lot more banal, but it is what it is. We are what we are. Living a fantasy is the epitome of irrationality.



04 September 2017

Fermented Foods

I'm not into food fads. Not at all. But I am intrigued by a recent documentary I heard about fermented foods. Foods transformed by microorganisms are very common: cheese, wine, beer, yoghurt, pickles, soy sauce, etc. But in most of them, the bugs are either dead or we kill 'em.

Yogurt, sauerkraut, tempeh, blue cheeses, and other foods contain living microorganisms: bacteria and fungi (including yeasts).

Giving rise to the joke.

Q. What is the difference between yogurt and {country X that you wish to ridicule}? 
A. Yogurt has a living culture. 

And the idea is that these bugs take up residence and help make us healthy.

One of my great science heroes is microbiologist Lynn Margulis (d. 2011), one of the great scientists of the 20th Century. Margulis established that the mitochondria which live in all of our cells were once free living bacteria. She emphasised the role of symbiosis in evolution (in contradiction to the fetishisation of competition amongst male biologists). This has been one of the strongest influences on my thinking about the world: the importance of symbiosis, hybridization, communities, and cooperation. We are not only social animals, but in fact, we are colonies of cells, with many different symbionts living in our gut. A colony of colonies.

For many decades the existence of bacteria and fungi living in our gut, as symbionts, not pathogens was scarcely acknowledged. In the last ten years or so it has started to dawn on the world of biology that Margulis was on to something big. These intestinal flora are not passive hitch-hikers. They are actively involved in homoeostasis - the collection of processes by which we maintain our internal milieu at the optimum for life.

We now know, for example, that gut microbes participate in and contribute to our immune system. They are involved in processes that govern blood-sugar. And so on. Our gut is full of symbionts - a mutually beneficial association. Thousands of species of them and in vast numbers (perhaps as many as 100 of their cells for every cell in our body, though this figure has been challenged).

I think most people are probably aware that yoghurt has this reputation for repopulating the gut with healthy bacteria. But now expand that out to every food with living bugs. And keep in mind that the gut contains a community of bugs, all "communicating" and working together. Thousands of species are involved. And it seems the more the merrier.

I'm certainly not conducting a scientific experiment, but as part of an effort to eat healthily, I'm now regularly including sauerkraut in my diet and some soy-based yoghurt. The sauerkraut is a bit of an acquired taste, but tastes can be acquired with repeated exposure (like olives). And actually, sauerkraut is *very* easy to make so I might have a go at it.

02 September 2017

Life Goes On

The cells that make up our bodies all come from that single fertilised egg created at our conception. It divides into 2, 4, 8, 16, etc. None of our cells was ever dead and infused with life. All of our cells were always living because each cell was created by a mother cell dividing into two daughters.

The sperm and ovum that became our first cell were also living cells, produced by cell division in our parents. All of our parents' cells were also always alive and multiplied by dividing.

All the cells of every animal, going back into the mists of time, originated by one living cell becoming two living cells. Similarly for all plants, fungi, and bacteria. All cells come from dividing cells. All cells, that is, except the original cells.

We have a pretty good idea of how such cells might have formed, but we don't know for sure. In any case, everything alive today, literally every living cell, was produced by cell division. Every living cell, and thus every living thing, is a direct-line descendant of those first living cells. Every living cell is directly related to every other living cell.

Along the way, some of the cells recombined to make more complex cells or formed symbiotic relationships. Combining is as important as division in evolution, though it happens less often.

The lines of living cells, going back to the original cells, are unbroken for at least 3.5 billion years, possibly longer. Each individual cell eventually dies, but the processes of life continue, without interruption. And even if humans manage to wipe themselves out, bacteria will survive literally anything we can do. Some bacteria live in boiling pools of acid, so nothing we do is going to kill them all. Life will continue on earth at least until our sun expands out to become a red giant, engulfing the earth in fire, about 5 billion years from now. But there is a good chance that by then humans will have seeded life on other planets, if only in our solar system. So in all probability, life will go on indefinitely.

The only limit is that life requires an input of energy which can be put to use. And this will have completely run out in our universe by about 10^100 (1 followed by 100 zeros)  years from now. Then it's curtains for life in this universe. Until then, however, life goes on.

01 September 2017

Theseus's Boat and Grandfather's Axe.

My writing in the last couple of days has been exploring the ancient philosophical problem known as The Ship of Theseus, which you might know as grandfather's axe: granddad says it's his favourite axe, and that he has replaced the head three times and the handle twice. The question philosophers usually ask is, "Is it really the same axe?"

Unpacking this problem and establishing useful ways of thinking about it has been very enjoyable.

My way into the problem was to notice that no matter whether we think it is the same axe or a different axe, we never doubt that it is an axe. Because the parts are generic we can replace them at will without changing the intrinsic properties of the object. Any correctly assembled combination of axe-head and axe-handle makes up an axe. Change of a part does not affect the identity of the complex object as a whole.

So, at least at this level, the object has identity and continuity as an axe. It is an axe and we know it is an axe. These are objective facts. The first is an objective fact about what is (ontically objective), the second is an objective fact about what we know (epistemically objective). The object either has the relevant properties or it does not. The fact that it is an axe is dependent on the observer knowing what an axe is. But any observer who knows what an axe is (no matter what they call it) will correctly identify it as an axe.

But is this grandfather's axe? Ownership depends entirely on the minds of grandfather and his community. He asserts "this is my axe" and the community either assents or it doesn't. So ownership is some kind of subjective fact. In which case, there is no one right answer. Some might feel that property is theft, in which case grandfather's assertion carries no weight. Or grandfather might have confused it with another, similar axe.

Maybe it's not so much a matter of ownership, but of close association. In which case this is also a subjective fact. Recognition is a matter of seeing the object and having a feeling about it. In the bizarre neurological disorder, Capgras Syndrome, people visually recognise their loved ones (usually, but it might also include pets or familiar objects like one's home), but the identification does not set off an emotional reaction. The spouse looks exactly right but feels wrong. The person with Capgras is usually at a loss to explain this. And the explanation that they have suffered brain damage doesn't help much. They often confabulate stories - the spouse has been replaced by a duplicate or doppelganger for nefarious purposes. Again there is no right answer. If grandfather feels that this is his axe, then that is what he feels. That we do not feel it only tells us that we are not grandfather (which we already knew).

Objective facts are independent of observers. Metal is hard; it can be shaped into a cutting edge. Wood is firm but flexible and can be shaped into a handle. None of these statements depends on an observer or what they believe. Subjective facts are not always shared. They do depend on the observer. Money, for example, is based on us all agreeing that bits of paper or plastic represent units of wealth. A £5 note is intrinsically almost worthless. But £5 of wealth is enough to redeem for a cup of coffee and a slice of cake (outside of London). If we stop agreeing that those special bits of paper or plastic are valid tokens, then the system breaks down. This is what happens when there is hyperinflation, for example.

The Athenians maintained a boat that at one point in its history carried Theseus and his companions to Crete, where he slew the Minotaur, and then ferried him back to Athens. Theseus went on to become a great general and admiral. So for Athenians the boat is a symbol of a national hero, of someone they feel epitomises their national character. For the Athenians it is definitely Theseus's boat. If we don't know who Theseus was, or his story, or anything much about ancient Athens, then we may not feel any connection with the symbol. We may conclude that it is not Theseus's boat. But even if we had lived at the time, what we believed would probably not have changed the minds of the Athenians.

If they had been celebrating a goat as the boat of Theseus, then we could have made an objective argument that a goat and a boat are not the same. A goat might be Theseus's goat, but it cannot be Theseus's boat. But because it was a boat, and remained a boat despite repairs, we can only make subjective arguments. And, frankly, why should the Athenians care what we think about their hero and his boat?

And of course it gets much more interesting when we get to the fact that the boat or the axe is a metaphor for ourselves.

31 August 2017

Asimov and his Laws

In the original Asimov books, robots are conceived of as servants to humans, hence the original Laws are formulated the way they are:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm. 
  2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law. 
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

The robots in the stories become more autonomous and are portrayed as civil servants. The detective stories centre on the human detective Elijah Baley and his robot partner, R. Daneel Olivaw. The name Elijah is surely no coincidence. As a detective in a world with almost total surveillance, Elijah is confronted with highly devious and irrational human behaviour. He has to put himself in the shoes of the criminals in order to solve the crimes.

Asimov, like most writers on robots, was basically retelling the Pinocchio story over and over. How does a machine think like a human? It can only do so by becoming ever more human. There is no other solution to this problem.

After writing a bunch of robot stories (and seemingly exhausting Pinocchio as a trope), Asimov moved on to the Foundation novels - two sets of them, written decades apart. In the first set a shadowy organisation, headed by Hari Seldon, is guiding humanity through an impending crisis. In other words, Seldon is also a prophet, though armed with science rather than righteousness. Seldon has invented a calculus of human behaviour, psychohistory. He sees patterns that only become apparent when trillions of us span the galaxy. Using the methods of psychohistory, Seldon sees the crisis coming and prepares for the knowledge of humanity to survive.

But it gets very weird after this. Asimov becomes increasingly interested in telepathy, and it begins to permeate all the stories. And now he goes back to robots. What if a robot were like a human, but also a telepath? Of course it would see how human frailty leads to suffering. Any robot cursed with telepathy would suffer an existential crisis. And so was born the zeroth law:
  0. A robot may not harm humanity, or, through inaction, allow humanity to come to harm.
In the later books, Elijah's robot partner Daneel returns, now able to read minds. He can understand what motivates humans and tries to stop them from destroying themselves. It is he who guides Seldon to psychohistory and pulls many other strings behind the scenes. Note that Daneel is still bound by the three laws.

Asimov's earlier books place Pinocchio in a future utopia that is marred by humans who are what we might call psychopaths - unable or unwilling to behave according to the law, despite universal surveillance. Asimov becomes consumed by contemplating impending disaster and how a great empire might avoid collapse. In other words, he reflected some of the major social issues of 1950s USA, through a rather messianic lens.

By the time he came to reinvent the robot as a telepath, in the second set of Foundation novels, the Cold War and arms race were in full swing. Asimov was apparently fantasising about how we could avoid Armageddon (and I know that feeling quite well). If only someone (messiah/angel) could come along and save us from ourselves, by reading our thoughts and changing them for us so we didn't mess things up. Or what if they could only nudge us towards the good? Note that at present the UK has a shadowy quango--the Behavioural Insights Team--designed to nudge citizens towards "good" behaviour (as defined by the government, mostly in economic terms).

Ironically, Asimov's themes were not rocket science. He sought to save us from ourselves.

Humanity is going through one of those phases in which we hate ourselves. We may not agree with Jihadis, but we do think that people are vile, mean, greedy, lazy, untrustworthy, etc. Apparently most of us don't really know how to behave, and the world would be a much better place if humans were gone. We are, the central narrative goes, "destroying the planet".

For example, we drive like idiots and kill vast numbers of people as a result. In the UK in 2016, some 24,000 people were killed or seriously injured on our roads, including around 1,780 fatalities. AI can drive much better and save us from ourselves. The AI can even make logical moral decisions based on Game Theory (aka psychopathy) - the trolley problem is simply a matter of calculation. Though of course, to describe a person as "calculating" is not a compliment.
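To make that reduction concrete, here is a toy sketch (my own illustration, with made-up numbers - not anything from Asimov or from any real driving system) of what "simply a matter of calculation" amounts to: score each option by expected deaths and pick the minimum.

```python
# Toy illustration with hypothetical numbers: the "calculating" answer to a
# trolley-style dilemma is whichever option minimises expected deaths.

options = {
    "stay on course": {"people_at_risk": 5, "chance_of_killing_them": 0.9},
    "swerve":         {"people_at_risk": 1, "chance_of_killing_them": 0.9},
}

def expected_deaths(outcome):
    """Crude expected body count for one option."""
    return outcome["people_at_risk"] * outcome["chance_of_killing_them"]

# Pick the option with the lowest expected body count.
decision = min(options, key=lambda name: expected_deaths(options[name]))
print(decision)  # -> "swerve": arithmetically trivial, morally anything but
```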

It's a given in this AI scenario that humans are redundant as decision makers. This is another scifi trope. And if we don't make decisions, we just consume resources and produce excrement. So if we hand over decision making to AIs then we may as well kill ourselves and save the AI the trouble.

If we want AIs to make decisions that will benefit humans, then we're back to Pinocchio. But I think most AI people don't want to benefit humans, they want to *replace* us. In which case it will be war. In a sense the war over whether humans are worth saving has already begun. A vocal minority are all for wiping us out and letting evolution start over. I'm not one of them.

Computers are tools. We already suffer from the bias that when we have a hammer everything looks like a nail. May the gods help us if we ever put the hammer itself in charge.

22 August 2017

Uniforms

Thinking about uniforms. Most schools I attended were run like North Korea.

Inmates wore uniforms. Uniform codes were strictly enforced.

There were many arbitrary rules. Breaking rules resulted in arbitrary detention and, in my day, beatings, some of which were quite brutal. Prisoners were often kept in solitary confinement.

There was "nationalism", school songs and so on.

We were all indoctrinated with the same useless knowledge designed to make us better citizens.

In my day this included systematic lies about the history of our country and especially the wars of aggression we fought against the Māori in order to steal their land. I believe this has changed to some extent in NZ. Here in the UK, they mostly still seem to believe that the British Empire was a benign force for spreading civilisation.

The leader or headmaster generally had a funny haircut and we had to treat them with exaggerated deference. They held assemblies in which we were forced to listen to interminable speeches which extolled the ideology of the state. [An obvious difference is that we did not have to salute].

The schools were surrounded by fences and no one was permitted to leave.

The staff were frequently paranoid about what inmates got up to and we were constantly under surveillance. Teachers had networks of informants.

I've never been to school in the UK, but looking at the uniforms and the environments, as well as what I can glean from TV, the whole set up is far worse here.

A lot of work places are also like North Korea these days. Democracy has seldom extended to the workplace or school. And they wonder why we don't take it seriously?

20 August 2017

Persuasion (reprise)

A consequence of Einstein's theory of relativity is that we can no longer think of space and time as distinct:
“Henceforth space by itself, and time by itself, are doomed to fade away into mere shadows, and only a kind of union of the two will preserve an independent reality.” — Hermann Minkowski, 1908.
I more or less understand the reasoning behind this (if not the maths), but I admit that in terms of my experience it is completely counter-intuitive. So in fact, a century after Einstein, space by itself and time by itself have not faded into mere shadows. Maybe they have in the higher echelons of university physics departments, but not in general use.

And this is the thing about intuition and counter-intuitive ideas. For many people, evolution is simply counterintuitive. It feels wrong. So facts presented without values don't make much difference to how people *feel* about evolution. And how they feel about it determines how they think about it. This is simply a fact about how humans work.

The question, then, is not why ordinary people who find science counter-intuitive don't change their minds. Why would they? The question is why scientists are so bad at communicating. In fact, there is a well-developed science of persuasion, which we see at work in our daily lives across the media in advertising, promotions, political speeches, and so on.

A single example will suffice. A century ago in the West, very few people were in debt. Since the 1970s this has changed, so that now almost everyone is in debt. From credit cards to payday loans, we all seem to have forgotten the virtues of thrift, saving, and financial prudence. That we would borrow money rather than save up for something would have been considered counter-intuitive 100 years ago. If my great-grandparents had talked about borrowing money at 30% APR while inflation was at 2%, just to buy something they wanted but did not absolutely need, their family and friends would have thought them mentally ill. Now it is just what everyone does.
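To put rough numbers on that - the £1,000 price and the one-year term below are my own illustrative assumptions; only the 30% APR and 2% inflation figures come from the paragraph above:

```python
# Back-of-the-envelope sketch with made-up figures: buying a £1,000
# non-essential now on credit at 30% APR, versus waiting a year and paying
# the inflation-adjusted price out of savings.

price = 1_000.00        # today's price of the thing we want but don't need
apr = 0.30              # annual interest rate on the borrowing
inflation = 0.02        # annual rise in prices

cost_if_borrowed = price * (1 + apr)        # buy now, repay a year later
cost_if_waited   = price * (1 + inflation)  # save up, buy a year later

print(f"Borrow now, repay in a year: £{cost_if_borrowed:,.2f}")  # £1,300.00
print(f"Wait a year and pay cash:    £{cost_if_waited:,.2f}")    # £1,020.00
print(f"Premium for impatience:      £{cost_if_borrowed - cost_if_waited:,.2f}")  # £280.00
```

A £280 premium on a £1,000 purchase is the sort of arithmetic that would have settled the question a century ago; what changed is not the arithmetic but the persuasion.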

I have no credit rating in the UK, having never borrowed money here, and yet I am still regularly sent credit card applications by major banks.

The counter-intuitive becomes intuitive and vice versa. Persuasion is not rocket science, but it is a science. It's about time that scientists cottoned on to this and stopped blaming other people for their own failures to communicate.

16 August 2017

Buddhism and Cessation

I was talking with my friend Satyapriya last night. We were discussing my work on the Heart Sutra and his experiences in meditation.

The non-Buddhist approach to life is generally to cram in as much experience as possible. In NZ people used to say they "lived life to the full" and this meant having as many experiences as possible, and as intense as possible. Extreme sports, bungy jumping, white water rafting, night-clubbing, and so on.

The Buddhist approach is the opposite. Buddhists, ideally, strive to calm down, to eliminate unnecessary distractions, to reduce the intensity of experiences. Ultimately the goal is to meditate in such a way that one is aware and alert, but there is no sensual or mental experience whatever. A state traditionally called cessation (nirodha) or emptiness (śūnyatā).

I should emphasise that this is not ceasing to be. It is not non-existence. It is a state of perfect balance and contentment, with no attention being paid to the senses or to superficial mental processes (like our inner monologue). One is emphatically alive and *existent*, just without all the distracting effects of experience.

Which already sounds weird to people oriented towards experience. Why would you want to experience nothing?

Cessation is not an end in itself. The experience of no experience is *profoundly* transformative. It reorganises how you perceive the world. It often results in an attenuation of the first-person perspective so that "ego" or self-seeking drops off. One stops being selfish and self-centred because there is no self to centre on. (This practical result has led to much unhelpful metaphysical speculation, but I'm not going to get into that today).

The trouble is that it takes a particular kind of person to experience cessation. In our Order of some 2000 members, we have a handful with any experience of cessation, and only a minority of them have any great depth of experience.

The rest of us know fairly early on that we're not that kind of person. If you discover meditation and just naturally start doing it for two hours a day, then you're in with a chance. If you struggle to sustain 20 minutes a day, then you're not in the running. You still *benefit* from calming down, but you'll always be too over-stimulated for cessation. We don't often state this up front. Indeed we tend to maintain the myth that anyone can experience cessation. In theory, maybe, but in practice, no.

One has to be thoroughly uninterested in the pleasure of sense experience. To be happy with very low levels of stimulation. To be fascinated by just watching one's mind for hours on end. One has to be quite non-reactive to other people. Most of these qualities cannot be learned, at least not to the extent required. We can get better at all of them, but unless we have the temperament or talent to start with, we're always going to be mediocre.

So the rest of us form an auxiliary that ideally would support the people who are experiencing cessation/emptiness, or who genuinely have the potential to.

For example, I try to write about the issues involved in conceptualising this process and the philosophy that is often invoked. In doing so I'm trying to clarify things, to eliminate wrong or unhelpful views, and to assess whether or not certain ideas serve the greater goal of our community (i.e. cessation). On the whole, our conceptualisation of the process and the goals appears to be highly convoluted and confused. Our metaphysics are a mess. I advocate a radical clean-out - we could eliminate all the history and 90% of the metaphysics we talk about without any deleterious effect on those who seek cessation.

Indeed, the history and a lot of the stories are to gee up the auxiliary. Because, deep down, we know that we're not going to be anything special. We're not going to experience cessation or anything like it. So we constantly have motivation problems. Pursuing a low stimulation lifestyle against one's natural inclinations is pretty difficult. Without the payoff of deep meditative states, it is not very rewarding and we end up getting a bit nihilistic or cynical. There is only so much reward to be gained from taking the moral high-ground and criticising people who seek pleasure. There's a lot of that about. A lot of criticising other people for not being good enough Buddhists from people who will themselves never experience cessation.

It's a weird thing to be involved in. At first, it seems like a cornucopia - a solution to all of one's problems. Many of the people who get religion have major problems (or they wouldn't be looking). Religion promises the universe. We all start off with a convert's zeal. What religion delivers, on the whole, and at its best, is a supportive group of like-minded friends and one or two inspiring role models. If you have the kind of talent required, you'll find an outlet for it one way or another. If you don't, you'll be filling the pews, making financial contributions, and hanging out with the talented people. At its best, this set-up does allow some people to shine in mundane ways. Me as a writer, for example. Someone else as an administrator. Another as a teacher of values or basic principles.

Still, the ideal of cessation inspires many people to slow down, to calm down, to stop being overstimulated, and so on. And on the whole, I think many of us who live simpler, calmer lives, find them more satisfying than the usual alternatives.

12 August 2017

The Evil of Mercantilism

When I was studying library management I clearly remember reading a book on technology published in 1971. It noted that immediately after WWII there were very significant gains in productivity due to mechanisation of work. The early prediction was that everyone would work less and retire early. Filling up our leisure time was predicted to be our pressing problem. ROFL.

Here it is, 2017, and productivity is something like hundreds of times higher than it was in 1945 and we are working longer and retirement as a concept is being phased out. What went wrong?

One answer is that the share of the wealth created by the economy going to the ruling classes has increased exponentially. So despite the fact that productivity has increased by so much, inequality has grown even faster.

Capitalists will rightly point out that everyone has benefited - we are all richer than we were in 1945. We all eat better, live longer, child mortality is down, etc. This is all true. But the rich have benefited more.

The thing is that if you worked hard to get by in 1945, your family are probably still working hard to get by in 2017. The poor still have to work very hard just to get by. And that is the plan. That has been the plan for 600 years. Marx and Engels noted it 150 years ago, but even then it had been going on for more than four centuries.

The plan is always for the poor to have to work hard all their lives just to get by.

600 years ago it wasn't like this. Poor people mostly worked in the fields and had little supervision. Staying alive was quite a good motivator. They might have paid a tax once per year, but the rest of the time ordered their own lives. They worked hard at planting and harvest time; moderately in the middle, and not much at all over winter. They grew all their own food, mostly on common land. If they were lucky they might own a cow or a goat or two. At that level, they all had to look after each other and work together. At that point it was probably the Church who inflicted artificial rules on the people, telling them how to live.

The ruling classes technically provided law and order to enable trading on a wider scale (between towns for example) but in practice, they often just fought amongst themselves for profit. The taxes paid for a standing army, and crimes like theft and murder were adjudicated by a ruler, if at all.

Gradually work and wealth took on moral tones. Being rich or working hard was good; being idle or poor was bad. Working hard but being poor was OK; being idle but rich was also OK. Working hard and being rich was the ideal. Working hard was linked to being rich, though for most of history, as now, the two have usually been unrelated. The people who work the hardest, doing physical labour, are paid the least.

Since the ruling classes wanted to see the poor working hard, they took away the common land and forced the poor to pay for food. The industrial revolution offered crippling hours and dangerous conditions for the poor, so they could just about earn enough to live in unsanitary conditions and eat food that was often unfit for consumption. Sometimes whole families had to work for 12 hours a day to achieve this. And this was seen as a good thing by the mercantilists. It also broke up communities and the networks of care and assistance that had existed for centuries.

The mercantilists gradually took over running things from the aristocracy and the church. Hereditary wealth replaced mere birth as the mark of the ruling class, and morality changed from saving souls to ensuring that people were useful.

Increased wealth and reach required increased administration and bean-counting. Universities that used to train priests now trained civil servants. The middle classes were inculcated with the values of mercantilism: consumerism was born. From the middle class, some hoped to ascend into the ruling class - though opportunities for outsiders were strictly limited. Others simply became acquisitive.

As technology destroyed more and more of the jobs of traditionally working class people, the idea of social mobility was born. Let the working poor become middle class. Infect them with the virus of consumerism and acquisitiveness to distract them from the fact that their communities were being destroyed. Flood the market with cheap imitations built by their even poorer counterparts in Asia.

The thing is that this story arc is hardly affected by the politics of the government or by wars. Women hailed their re-entry into the workforce as "progress", but they mostly did so at rock-bottom wages. Nowadays only a two-salary family can afford to own a home. Seventy years later women have almost reached pay parity, but generally speaking wages are falling and the poor are getting less and less from participating in production. Far from winning, they have simply played into the hands of the mercantilists. The idea is that we all work very hard just to get by. Nothing we do is going to change this unless we stop acting like mushrooms. A smart woman might have fought for her right not to work. Nowadays women's empowerment seems to mean parading around in your underwear, while the idea of empowering men is seen as akin to genocide or eugenics.

Humans need time for socialising. For sitting around chewing the fat, telling stories, and laughing. We need time to make music, to sing and dance together. Working together for a common goal is uplifting, but what is the common goal of most workplaces now? Certainly screwing workers out of their fair share is inherent in all workplaces these days. We thrive in small communities where most people are social equals but merits are acknowledged. We still have not figured out a good way to organise ourselves in larger units. Democracy is, as that epitome of the ruling classes, Winston Churchill said, the worst form of government, except for all the others that have been tried.

But until workers get their fair share of production; until workers own the means of production; this world is going to be unfair and unjust and it will continue to break the backs of the poor so that the ruling classes can be comfortable and fight wars when they get bored.

I have no hope that technology is going to change the basic philosophy of mercantilism. Look at the internet. It was supposed to give power to the people. But it is clearly just another tool for enslaving people now. I get to say what I like, but amidst millions of conflicting voices, what I say doesn't register or matter. Those who do register are part of the system and therefore part of the problem.

Mercantile capitalism, or mercantilism, has been winning, largely in the background, for 600 years. Despite changes in technology, revolutions, wars, and empires.

01 August 2017

Are We Living in a Simulation? No, we aren't.

Anyone who has listened to the latest Infinite Monkey Cage (BBC Radio 4) and is worried that we might live in a simulation can relax. Anil Seth was talking bollocks. He and a lot of other bad philosophers have this method that is mostly hand-waving. It breaks down like this:

To yourself:
1. State your belief.
2. Derive assumptions from this belief.
To others:
3. State your starting assumptions as axioms.
4. Use straight-line deduction to produce a paraphrase of your starting assumptions.
5. Claim that *logic* supports your conclusion.

Assumptions are propositions that you believe in the absence of evidence or things you take on faith. Axioms are propositions stated as universal truths. If you are reduced to stating assumptions as axioms, you're already floundering. Far from being "logical", this is completely irrational.

And then deduction is a very weak logical operation. All you can do with deduction is draw out the implications of your starting axioms. And what this usually boils down to is a paraphrase of your axioms.

All of the assumptions that Anil Seth stated last night struck me as demonstrably false or at best highly questionable. Here is his "logic".

1. Assume we live in a simulation
2. State some fact consistent with living in a simulation
3. Restate that fact as a universal truth
4. Deduce from this that we *must* live in a simulation
5. Therefore it is only logical that we do live in a simulation

For example, he glibly stated that it would be possible to replace a neuron with an electrical device in such a way that you would not notice. For a start, to do this you'd have to crack my skull open, and I promise you I'd notice! Second, this is a bold claim for which there is absolutely no empirical evidence. No one has ever accomplished this or anything like it and had the recipient *not notice*.

The surgical techniques to operate at the molecular level simply do not currently exist. And really, there's no plausible way to do this type of surgery - our synapses are chemical, not electrical. It's not remotely plausible to transplant an identical neuron, let alone some electrical device that imitates one. So Anil Seth is asking us to take a science fiction idea as a universal truth. And he can just fuck off as far as I'm concerned. He's just making shit up and giving public intellectuals a bad name.

Furthermore, there is a 1mm-long roundworm called C. elegans. It has just 302 neurons, and its wiring diagram - 6393 chemical synapses, 890 electrical junctions, and 1410 neuromuscular junctions - has been mapped out in exquisite detail at the cellular level. So you'd think that we'd be able to simulate the worm exactly. Yes? No. Not even close. Otherwise modelling the brain of C. elegans would be easy and you'd be able to buy scaled-up working models with all the same behaviour by now.

So Seth takes this idea as trivial and true, but in fact, it is very, very complex and almost certainly false. His starting assumption is nowhere near plausible, let alone "true". And if this is so, then his subsequent "logic" is dubious at best.

I call bullshit. This is bullshit philosophy. And it's not the only bullshit philosophy I've seen associated with Anil Seth. He is a bullshitter and no one need be perturbed by anything he says.