12 December 2016

Anālayo and Momentariness

This was originally supposed to be a comment on a Facebook post by Kamalaśīla. He posted a video of Anālayo talking, and then an article about mindfulness in different traditions. But it was too long and broke Facebook.

~


I respect Anālayo, but I'm not convinced by this analysis, especially with respect to the need for and the impact of the Doctrine of Momentariness. This subject is a big part of the book that I'm currently writing.

Across the early Buddhist world there was a recognition, especially within the Abhidharma "schools", that karma cannot work with pratītyasamutpāda. The former requires that the consequences of conditions manifest long after the condition has ceased. The latter says that the condition must be present for the effect to manifest. The two, as found in the suttas, are mutually exclusive. This is my independent observation, but it is also one that Nāgārjuna makes in the Mūlamadhyamakakārikā (opening verses of ch. 16). Nāgārjuna says that to admit that an effect does not cease immediately when the condition ceases is tantamount to eternalism.

Something is wrong with either the theory of karma or with the theory of dependent arising. And Buddhists of all stripes chose to modify the theory of dependent arising in order to allow karma to function. My conclusion is that the doctrine of karma was far more important to early Buddhists and central to Early Buddhism than is usually recognised.

The solution to this problem that came to dominate Buddhist doctrine was the doctrine of momentariness, but it was not the only contender at the time. The Sarvāstivāda (always-existing theory) dominated the intellectual landscape of North India for a few centuries around the time of Nāgārjuna. And there were many others, some of which are documented only by their opponents' arguments against them. Nāgārjuna's own, wildly unpopular solution was to relegate the whole mess of karma (agents, actions, results, rebirth) to saṃvṛti-satya or relative truth. In other words, none of it is real. But the primacy of karma in Buddhist intellectual life asserted itself once more, and Nāgārjuna's solution lost out to Vasubandhu's invention of the ālayavijñāna.

Momentariness modifies dependent arising to say that rather than a single step between action and consequence, as implied in many suttas, there are an infinite number of infinitesimal steps. It is the calculus to the algebra of Early Buddhism.

Where Anālayo and I begin to converge is on the problem that though the doctrine of momentariness is a step forward, it did not actually solve the continuity problem. This is to say it did not provide a substantial enough link between action and consequence across lifetimes for karma to operate.

It is axiomatic that karma *accumulates* and momentariness doesn't seem to allow for this. Another axiom of Buddhist Abhidharma, that cittas happen one at a time, means that continuity is not possible when there are two or more karmas that accumulate to cause effects in the future. On its own momentariness doesn't provide the continuity required for karma to work.

Contra their own preserved texts, Theravādins promoted the idea that the viññāṇa is what carries karma through lifetimes (an idea that can be found in modern expositions of karma); and, as Anālayo says, Yogācārins, especially Vasubandhu, invented a new viññāṇa/vijñāna, the ālayavijñāna: a place for karma to accumulate before manifesting. Yogācārins also accepted momentariness, so, contrary to what I was taught, the karmic seeds do not lie dormant until ripening, but in fact immediately ripen into identical cittas that create a stream of identical cittas connecting result to condition.

Now Anālayo is very diplomatic about it, but the fact is that the ālayavijñāna doesn't exist. It's a hypothetical entity made up solely for the purpose of fixing a broken theory of karma - the theory that lies at the heart of Buddhism (though it is downplayed in modernist accounts of Buddhism). It's just a post-hoc rationalisation, and not a very good one, as it doesn't really solve the problems with momentariness. So you cannot meditate on it or be mindful of it. At best it is an imaginative exercise in picturing the accumulation of karma and how that might affect your next life (which is where it will most likely manifest, according to most of the very many versions of the theory).

The continuity that we experience from moment to moment, and the accumulation of experience into habits of thought, cannot be explained by traditional doctrines. I tried for many years to go along with it all, but the internal logic doesn't work. The phenomenology we experience requires a wholly different set of propositions than what we get from medieval Buddhism. For example, what we experience must be smeared out across time, or we could not process change: change is the present being different from the immediate past, and we have to hold both in mind simultaneously to appreciate change. One citta at a time would not allow us to experience change. Similarly, nothing could surprise us if we did not constantly construct and hold in mind a probable future, with which we constantly compare the present. There is very good evidence to suggest that anticipation is an important aspect of perception. In certain experimental situations we cannot tell what a percept is until we are told what to expect, and then it is clear as day. (Cf. my essay critiquing momentariness: http://jayarava.blogspot.co.uk/2016/07/the-citta-bottleneck.html)

I can of course see the continuing appeal of these medieval models of the mind, and why they persist despite being inaccurate and imprecise. After all, we still use the language of the four humours in everyday life. I know people who still believe that the common cold is related to catching a "chill". That is a remnant of the ancient Greek four humours theory, in which cold + wet results in too much black bile, and this leads to illnesses such as melancholia and "colds". This is emphatically not how people catch colds, but it is widely believed to be the case, probably because colds are more prevalent in winter. All of us carry around superstitions and legacies of medieval thinking about the world.

But at some point the fact that our model is wrong will hamper our efforts to awaken and to pass on our understanding of the cultivation of awakening. If we are going to prosper then we need to update our models according to the best scholarship of the day.

There are two problems with this. Firstly, authorities who change their mind lose their authority. We see this regularly with politicians. Critics will hound a government minister to change their policy; but if they do, the same people will turn around and savage them even more for "doing a u-turn", "dithering", or "lacking conviction". The public perception of an authority is that they must be decisive and resolute, especially in the face of criticism. They do not go around changing their minds. This is one of the great problems with climate change. In the early days the scientists involved trumpeted first one and then another dire warning, each different from the last. The media isn't good at subtlety, so the way they portrayed it didn't help. The public just interpreted this as "they don't know what they are talking about" or "they're just making it up - for some reason they are not revealing". Now that the scientific consensus has emerged more decisively, it's an uphill battle to convince most people, because climate scientists are not seen as authoritative.

A second difficulty is that the people in the best position to talk about the phenomenology of awakening are mostly steeped for decades in the medieval worldview of Buddhism, and like everyone else in the world fall victim to confirmation bias. The tradition itself is hyper-valued and almost never critiqued or criticised (in the way that I do, for example), so there is very little motivation to change: it all feels right, the scholars are not challenging the view, and it is our connection to a long tradition that validates our approach to life. And since that approach to life often entails considerable sacrifice and hardship, endorsing and emphasising the "truth" of the motivation to undertake that sacrifice seems intuitively right.

I said in an earlier comment that Anālayo is truly non-sectarian. But he is still a celibate Buddhist monk. His Theravādin scholar colleagues are quite openly biased towards the Theravādin sect. They have given up family, career, sex, and sexual relationships to commit themselves to being Theravāda monks, so it's no surprise that they find plenty of confirmation for their chosen lifestyle. And no surprise that, in this day of challenges to their authority from outside of Buddhism, they have started to write impassioned apologetics for the medieval worldview they follow.

Anālayo is not sectarian in this narrow sense. But he is still partisan for the Buddhist tradition more broadly. He is no doubt brilliant, industrious, and dedicated; and he meditates a lot as well; but he is wholly engaged in confirming the medieval worldview that motivates him to do what he does in the lifestyle he has chosen. This is great for people who share his worldview and his lifestyle, including many of my friends, colleagues, and acquaintances.

Somewhere along the way that medieval worldview fell apart for me. I realised that it was inaccurate. I'm sure *something* does happen when one experiences emptiness, for example. And I'm sure that people who have these experiences find them enormously valuable. But my experience certainly does not fit the medieval models I've spent most of the last 10 years studying. From talking to friends, colleagues, and acquaintances (some of whom have very strong insight experiences) my sense is that the models don't really describe their experiences either.

Let me give a couple of examples. We all still talk about viññāṇa, but for the life of me, after more than 10 years of learning Pāḷi, reading texts in Pāḷi, and saturating myself in the academic literature on the language and doctrines, I cannot tell you what viññāṇa meant to the people who composed those texts. I can confidently say that it does *not* mean "consciousness", even though it is still almost universally translated with that word. It seems more to relate to a conscious state, but the meaning was so transparent 2000 years ago that the term was never defined in detail. I have similar problems with a number of common Pāḷi terms that seem not to give anyone else any trouble: nāmarūpa, saṅkhāra, vedanā, dhamma, manas, citta. We all think we know what these words mean, but inevitably we define them in ways that make sense to us. So the worldview is medieval, but much of the terminology is given a modern spin because its actual meaning is no longer clear. Take the etymology of vedanā, for example: it is related to veda and to the root √vid 'to know, to see, to find'. It comes from an action noun vedana 'knowing', or perhaps 'seeing', and is cognate with either our "wise, wisdom" or with "video, vision". And we translate it as "feeling"? That has to be wrong.

Another conundrum is that in Pāḷi emotion is not a separate category of experience from thought. In our world we have thoughts, emotions, and physical sensations. In Pāḷi there are just sensations related to the body (kāyika) and sensations related to the mind (cetasika). They have names for states that we label "emotion", but they don't understand an emotion to be different from a thought. There is a mismatch here that I, for one, now find confusing. Why would I insist on using medieval (or even Iron Age) Indian ways of thinking about my experience when I grew up in 20th Century New Zealand and now live in 21st Century England?

What I would love to see is for us to start moving out of the medieval world and into the time we actually live in. Living in the present, as well as in the present moment. Which is about a lot more than using Facebook to keep in touch. I would love to see those with a fascination for meditation start finding a fascination for Antonio Damasio, Thomas Metzinger, et al., and for us to examine the phenomenology of experience and express what we find anew. That rather than looking for confirmation of our existing view, and celebrating when we find it, we describe what is going on in the language of *our* day. There's no legitimacy in couching everything in archaic terms any more.

People sometimes argue that the words don't exist in English. But English has a very much larger vocabulary than either Sanskrit or Pāḷi (and most other modern languages, in fact), and it is supremely welcoming of neologisms and loan words. So do what Shakespeare, Milton, and other writers did when they couldn't find the exact word they wanted, and make something up!

But then we come back to the motivation to break from tradition, when tradition is what validates us as community members and/or leaders, and as experts in the field. Facility with the traditional jargon has its own kudos, even if the words are not wearing any clothes. Unless we are sure that others will follow, most of us are not willing to go out on the limb that I'm on. I happen to have the kind of personality disorder that means I'll be out here sabotaging my membership of the group anyway, so I might as well do everyone a last service by shouting my conclusions as I fall from the tree...

08 December 2016

The Hard Truth Behind Post-Truth

This post-truth thing is a blessing in disguise. We've been labouring under a massive misapprehension for a couple of centuries, i.e. that human beings are fundamentally rational.

Those who understand this have been exploiting it since at least the 1920s, when Freud's nephew, Edward Bernays, convinced US women that smoking cigarettes was a symbol of their freedom, thus dooming millions of them to miserable deaths from lung cancer and/or emphysema.

Of course, using fantasies to change minds on a mass scale has been the stock-in-trade of religion over millennia, but the priests were probably no more informed than their flock, and going on instinct.

There is something sinister about this knowing manipulation of our decision-making processes by psychologists. The government now does it as a matter of course. And it's out in the open that lies can be more persuasive than truth, in the right mouth.

We knew all this. If for no other reason than that it came out in the close examination of how Nazi propaganda turned the German people against their neighbours near and far. Of course our governments are almost as bad. The British Empire was a fucking disaster outside of Britain, a genocidal monster, but the government has continuity and controls the narrative at home. They made sure we knew what monsters the Nazis were, without ever admitting to the atrocities that they committed. None of us want to believe that our side are the monsters. But in this case we are.

A marketing executive explained this to me 25 years ago (for Kiwis, he invented the hugely successful "Trim Pork" marketing campaign). People, he told me on the marketing course for librarians that I attended, make emotional decisions and then look for rationalisations after the fact. All my research, reading, and experience since then has borne this out. Reasoning is often just an afterthought to make sense of how we feel about things. It has very little to do with how we decide things.

Sit down to think a problem through and most individual humans immediately fall into one or more of dozens of cognitive biases and/or logical fallacies. It turns out, however, that we do much better in small groups. Here, the ubiquitous confirmation bias allows me to present the strongest case I can for my idea, while the group will look for and find flaws, because they have no investment in confirming my bias. Small groups of like-minded people are by far the best approach to decision making. Of course, groups are also susceptible to groupthink. Nothing is perfect.

Democracy, as we now employ it, is pretty hopeless because it is predicated on voters having accurate information about who they are voting for, and making a rational decision about who would best represent their interests. Since neither of these propositions has *ever* been true, it's probably best that we see the system comprehensively failing, because this might provide the motivation to fix it.

But the fact is that the lesson has not yet gotten into the core of our understanding of ourselves. We probably have a couple more generations of ruthless exploitation of our myopia with respect to ourselves, by knowing and unscrupulous parasites, before we start to clock that the story was wrong all along and think about rewiring society.

And there is no point in demonising ordinary people in any of this. This is not happening because ordinary people are stupid. If anything, it is happening because intellectuals are stupid. After all, it is intellectuals who have promoted this completely false view of humanity. Mind you, they generally replaced a religious view that was even more wrong, so generally speaking the trend is towards less stupidity. As I've said before, I'm mildly optimistic about the species; it's just the individual members I don't like.

In which case the question becomes: can we finally discover what we really are (i.e. social monkeys) before we cause our own mass extinction? I'm not necessarily against the mass extinction of Homo sapiens, but it is a shame that we seem so determined to take so many other species with us. But in the long run, life will continue on well beyond any ecological disaster we might cause.

Bacteria are the dominant life-form on the planet, and have been for the 3.5 billion years since they appeared. They've survived much worse than humanity in those thousands of millennia. Much worse! So that's a happy thought, eh?



07 December 2016

Frequentatives

Here's a nice little English thing. Those words that end in -le are frequentatives. I tramp once, but if I do it a lot it's a trample. A lot of the original words are now lost. If you are in fine fettle, for instance, there is no word for the single instance of being fet (fettle is from the Lancastrian dialect).

If scrambled eggs are too intense, then just ask for them to be scrammed once and leave it at that. Cuddle? No, just one cud, please! Too many wrangs make a wrangle (wrang is the past participle of wring). And so on. On the other hand, a bell rings, and what a telephone does is, in fact, ringle.

Sometimes a frequentative is just avoided. I can stomp, but no matter how many times I stomp, I am not stompling.

However, there are a few faux amis. A "single" is not singing all the time. Single comes from a Latin word sim, with a diminutive suffix -lus, giving singulus (one, individual, unaccompanied). From this root we also get simple. Singulus in Middle French became sengle or sangle, and by the 14th Century, English single.

I often ride, but this is not riddling. In fact riddle is an imposter that ought to be spelled riddel. Here the root is related to read (originally "to advise or counsel") and the original noun suffix was -els. Some genius thought the s was incorrect, and so it was dropped. And then the -el became -le. Why this happened is a riddle in itself. There is another kind of riddle, a kind of coarse sieve, which ultimately comes from a Proto-Indo-European root *√krei.

"Disgruntled" is interesting, because we still have grunt, but we've lost the positive frequentive, gruntle. We can't be gruntled, but we can be dis-gruntled (which I often am). Gruntle means to grunt a lot. Something a pig is thought to do when content. Though I suspect a lot of things that people say about pigs are made up.

It gets more interesting when you create an agent noun by adding -er (almost the same as the Sanskrit form with -ṛ). To frequently whit is to whittle, and one who does this is a whittler. Or if you often whis, you are a whistler. Politicians are bullshittlers.

The origins of babble are not, as some think, concerned with the story of the Tower of Babel, but with a now unknown word bab for a sound, perhaps onomatopoeic or imitative of baby talk. Bab bab bab... babble.

I'll stop prattling now.