Saturday, 2 February 2019

Thoughts on Ros Coward's 'The Whole Truth'

This is going to be more a collection of fairly arbitrary thoughts and half-remembered impressions than a coherent review.

Rosalind Coward is a writer on social issues, often from a feminist and semiotic perspective.  I read her 1989 book 'The Whole Truth:  The Myth Of Alternative Health' in the mid-'90s as an attempt to take on board a contrary voice during my training as a herbalist.  My recall of what she wrote is fairly poor, but I found it rather unsatisfactory, and not just because I disagree with her.

From the distance of over two decades, two impressions have persisted, which may be unfair or disproportionate in significance, but I'll offer them anyway.  One is that the book seemed rather tangential to the point of alternative medicine, a term which I should unpack at some point but not yet.  The other, which is much clearer to me even now, is Coward making much of the information people gather about the putative workings of their bodies.  She likens it to mapping routes in terra incognita.  This reminded me of Deleuze and Guattari's «Mille Plateaux», which, considering her semiological approach, is unsurprising.  The philosophers are quite keen on the metaphor of landscape and use it to deconstruct Levinas's ethics of the face, which they portray as just such a landscape.  Deleuze also applies the idea of landscape to the egg, which can be understood externally as a curved surface, whose shape and shell patterns all sorts of mathematical formulae can describe, but which also conceals potentiality in the form of the embryo within it, just as the body hides human potential.  There's a contrast between Coward's view of a medical landscape being mapped and Deleuze and Guattari's willingness to dive into the contents of the body without organs, an experience they attribute to many people diagnosed with schizophrenia and portray as positive in its subversion of what they see as a reductivist psychiatric view of the condition.  It brings to mind a dialogue between a psychiatrist and a patient diagnosed with schizophrenia, with no observers and no consensus about the relative authority of those involved.  It asks the question: who has a firmer grip on the world?  My immediate reaction is that although there is such a thing as the wisdom of madness, this seems quite a romanticised view of someone who may be terrified, damaged by life experiences and at a severe disadvantage.
It also feels to me that Deleuze and Guattari have exploited the condition and are refusing to take it seriously, rather like Jacques Lacan, who strongly influenced them.

Hence right now I feel rather torn.  Although I feel that Coward's view is rather too trusting of authority and for some reason treats patients' own forays into finding out about their conditions as a bad thing, I feel equally that Deleuze and Guattari are not really facing the patient but are somewhat detached and rather too playful about the condition, and also rather keen on building their own reputations and status, along with, probably, a good deal of power and money.  I don't think Coward recognises the inordinate trust she has implicitly placed in health care professionals and the subordinate position in which she has put patients, and I wonder how much experience she has had of being either.

Studying health as a detached bystander is very different from doing so with a personal investment in a condition.  Some research into health is strongly motivated in this way and can be self-taught.  One example is Michaela and Augusto Odone, whose son Lorenzo suffered from adrenoleukodystrophy, a progressively degenerative neurological condition typically fatal within about two years of symptoms becoming noticeable.  Despite having no medical background of their own, the parents developed an oil based on canola and olive oils which slows the degeneration of the insulating myelin sheaths around the neurones.  Their son survived to the age of thirty, and Augusto received an honorary doctorate from the University of Stirling for his work.  The consensus is that the oil substantially delays the onset of symptoms in patients already established to have the inherited trait causing the accumulation of very-long-chain fatty acids in the first place.  However, it's also possible that the parents experienced confirmation bias, assessing Lorenzo's symptoms as milder than an uninvolved medical professional would have done.  Again the question arises of where authority should be placed.

Another example of an involved patient, which is probably the tip of an iceberg, is the case of an academic expert in bipolar disorder who was herself diagnosed with the condition.  Unfortunately I can't recall the name here, but it's easy to see that when an individual is "up", they are likely to devote a lot of energy and drive to a particular project, which may of course rapidly founder once that energy deserts them and turn out not to be as productive as they perceived it to be themselves.  The reverse would happen when they were "down": they might fail to recognise the merits of their work, or be unable to do it at all.  Perhaps the two states balance out.  Nonetheless, this person, and I really wish I could present her to you as more than a vague memory, was most successful in her research.

The rest of the iceberg is something I've experienced and been involved with.  Since my initial academic education was not specifically medical, the only field in which I'm aware of this motivation is clinical psychology.  Clinical psychologists have often embarked on a degree in psychology in order to understand their own unusual behaviour or mental conditions, and I would include myself in that, as I was partly motivated to study the subject by my gender dysphoria, a subject for another post.  I presume that this motivation also exists among medics to some extent.

Many Complementary and Alternative Medicine (CAM) practitioners themselves suffered from ill health which they feel what I'm going to call orthodox medicine failed to help adequately.  On the other hand, a little knowledge can be hazardous, and many orthodox medics might feel that a little knowledge is all CAM practitioners have, hence the problem of inaccurately mapping a landscape mentioned above.  Moreover, this mapping often risks taking in only the immediate neighbourhood and failing to see a bigger picture.  There are clear principles which become apparent on observing or studying a variety of conditions and their pathology: processes such as feedback and departure from homeostasis are very important to illness.  Much disease is literally tragic.  A minor flaw in the body's way of dealing with an issue early on can trigger a fatal avalanche.  And the thing about tragedies is that they should make the audience feel sad and perhaps achieve a form of closure on things in their own lives.  The former makes sense in a medical context, although excessive involvement probably helps nobody.  The latter can also be useful, and is probably the source of this very blog post, but there's a moral question in using people's conditions to deal with your own issues, and probably also a mental health one.

I'm aware that I've talked about conditions here rather than the people diagnosed with them, which suggests I've got my priorities wrong.   Although it's possible and popular to look at health in terms of disorders which people have, this is not only a negative view of health, which can lose sight of well-being as the unmarked state, but also abstracts the conditions from the people for whom they constitute a problem.   Rather than being seen as having disorders, people could be regarded as having "disordered X".  For instance, whereas it may make sense to see someone as having an eating disorder, which in fact links them to a larger world in which eating disorders exist as abstract objects that can be studied and addressed, it also makes sense to talk about people as having "disordered eating".  Likewise, to take an arbitrary physical example, many people with respiratory disorders are also people with disordered respiration.  This shifts the focus from the disease to the patient, and naturally there's a further possible shift away from the patient to their living circumstances and the social and political factors which give rise to them.

This last shift does exist in CAM in the form of clinical ecology, which assesses health in relation to pollutants.  Clinical ecology has been criticised for extending the scope of the concept of allergy too far, to include sensitivities and intolerances, and so can be seen as controversial.  Similar extensions are seen elsewhere, for example in certain forms of chiropractic which attribute what some would think are far too many disorders to subluxations which may not in any case be demonstrable.  Herbalism itself started to go in this direction a century or so ago, when all herbal remedies were analysed in terms of their influence on the autonomic nervous system, but this phase is now regarded as an irrelevant and embarrassing blind alley.  Nonetheless there is a risk of reductivism in CAM which seems quite common, and which I suspect is motivated by economic forces, or possibly by overvalued ideas which "explain everything".  Then again, the question arises of whether the same can be said of orthodox medicine, and that is a fair question.

CAM is not the only field which can be criticised for untestability.  The same might apply to another field which deals with mental health, namely counselling psychology and psychoanalysis.  Whereas the latter is not so much in vogue as it once was, the former is generally seen as a viable and valid approach, and there is little criticism of psychotherapy, although it does exist.  Consequently, one is given to wonder why CAM is such an object of "skepticism" while psychotherapy and counselling aren't.

As I said, these are just a few fairly unformed, nebulous thoughts, and I've set them down so that others can make what they will of them.  I apologise for their not being better formed or systematic.

Wednesday, 22 March 2017

Language Learning

More specifically, this is "Why Learn Swedish Part II" except that it's not really about Swedish so much as language learning in general, which is why it's here rather than zerothly.

Sarada once mentioned to someone that I had books in about fifty different languages but wasn't actually fluent in most of them, to which our friend responded that I was "Mr Theory".  Whereas I may disagree with the title she applied to me at that point, the surname is, sadly, entirely appropriate.  As a child, I used to practise my cursive handwriting in the requisite exercise books at school but continued to print in every other context.  To me this reflects the way formal education can fail to work at times, and it leads to the oft-uttered objection "we're never going to use that in the real world", something which, oddly to me, is particularly often applied to algebra, which I find myself using constantly.  However, I've always very much been into learning for the sake of learning more than concerning myself with applications, and in fact I'm sometimes startled when something actually turns out to apply to the real world for once.  It's no coincidence that I have a Master's in Philosophy, although I would argue that Philosophy, which always has a capital P for me, applies par excellence to everyday life, and when you look at events such as the election of a certain head of state you can probably see why.

Fundamentally then, the reason I'm learning Swedish is just because I want to and find it interesting.  I'm learning it for its own sake.  I naturally hope to be able to understand Scandinavians when they speak, and to read the language, and that's part of my motivation, but in the end it comes down mainly to my fascination with language generally.  Languages to me are interesting because of the world view they represent.  It isn't true, or rather it's very misleading, to say that Esquimaux (no, not Inuit but that's another story) have hundreds of words for snow, but it is true that culture and environment have major implications for language, which develops organically rather than through deliberate design, and the demotic has long fascinated me.  This, incidentally, is why I feel pretty sure that new gender-neutral pronouns such as "ze" will never catch on.

All this, though, reminded me of our approach to language-learning when we were more involved in "home education", also known as parenting.  Clearly the children picked English up about as quickly as could be expected, although I was also speaking to them in German.  As far as they were concerned, though, German was just this strange noise one of their parents made, unlike Castilian Spanish and French, which they heard from their other parent and also from a number of other people.  This seems a bit odd to me, because children tend to be very keen on playing and learn a great deal through play, and pretend play is important to them even when it's very divorced from their everyday experience.  I would expect other languages to appeal to them for this reason, and also because another language is like a secret code and, depending on where you live, a family thing which binds people together.  Nonetheless they were not keen on German.  Oddly, some of it really did seem to be about the sound, because they preferred French even though they never heard it except from their mother.  I always assumed that the sound of a language was liked or disliked because of its associations, so you might expect Brits in the Second World War to have disliked German, but judging by the reactions of our offspring, this isn't so.  Moreover, they seem to need a practical reason to learn a language, mother tongue or otherwise.  Having said that, the relics of my efforts with German persist in the fact that our daughter occasionally speaks German in her sleep, and it's said that children who have grown up hearing more than one language can, as adults, pick up pronunciation and distinguish the sounds of other languages more easily than monoglots can.

Bilingual children may be the norm, historically speaking.  There was a time when people lived in small tribes speaking their own language, and in some places this is still the case or was so until quite recently.  For instance, I've heard the claim that before the Europeans got there, the people on either side of what's now Sydney Harbour spoke different languages, and there are also hundreds of languages spoken in Papua New Guinea, which has a population of seven million and an area slightly larger than Sweden's.  Intermarriage between tribes in such situations, if it happened, would mean growing up with the need to speak two languages, and it would also be very helpful to be able to talk to the neighbours.  This would lead to pidgins and creolisation of course, but there are also very firm boundaries sometimes, such as the Solway Firth, said to be the most abrupt linguistic boundary in the English-speaking world.  Even in Europe bilingualism is very common.  It occurs, for example, in Belgium, with great reluctance of course, in the non-hexagonal parts of France and in Switzerland.  Linguistic continua are also very common, although they are quite unfamiliar to Sassenachs.  Basically the whole area of Spain, Portugal, France, Italy, French-speaking Belgium and Romance Switzerland, along with the smaller countries within them, consists of people who can speak perfectly well with their neighbours but have completely different languages at a distance, and the same is true of the Germanic languages across the Low Countries, Germany, Austria, Northern Italy, German Switzerland and continental Scandinavia, and so on.  This often means that people there speak in two registers, their regional dialect along with the official version of the language.

Nonetheless, this doesn't mean we're actually set up to be polyglots.  Whereas Noam Chomsky believes humans have a brain hard-wired to pick up language, my own view is that we have stumbled upon it, although that has undoubtedly influenced our evolution since we did so.  I consider Chomsky's view problematic because it sets humans apart as special while ignoring the apparent linguistic abilities of various other species, some of which are not closely related to us at all and don't appear to use anything like language in their own habitats, and also because the human brain is very flexible.  Therefore when we pick up a language it's just like any other learning.  It should be noted, though, that those other kinds of learning often do involve language, for instance jargon.

On that subject, one of the things we did with the children in the 1990s and 2000s was to facilitate their learning of Latin and Greek.  These are of course the posh dead languages, and you might be given to wonder what the point of focussing on them is.  Well, there are in fact plenty of reasons for doing so, one of which is that because they used to be the languages of learning in Europe, much of our jargon draws on Greek and Latin, meaning that if you want to learn a technical subject of some kind it will really help you to be able to pick those terms apart.  For instance, a dermatologist has been described as a person who gives a skin condition a Greek name and prescribes a topical steroid.  This is in fact quite unfair, because there's also the maxim "if it's dry, wet it; if it's wet, dry it; congratulations, you are now a dermatologist", and the observation that if you didn't know what an African elephant was but saw two in a row, you would still realise you'd seen two of the same animal.  I mention this partly because this blog is called "home ed and herbs".

That's one reason, but there are several others.  It helps to know Latin in particular because its descendants are so widely used, and it helps to know both because they have been systematically analysed, which lends itself both to formal logical thinking and to the related skill of good grammar in one's first language.  There are many other reasons, and some are elitist, but elitism can be a good reason rather than a bad one for learning them.  For instance, works on rhetoric written in classical languages are a source of political speeches and other means of persuasion, and it can be important to pick those apart when you listen to cabinet ministers and the like.  They themselves learnt these skills in public schools, and those skills got them where they are today.  Learning them as hoi polloi gives us power.

The other languages we've engaged with during our involvement with home ed were German, French, Spanish and Japanese.  This is where one of the problems, to my mind, of home ed emerges.  I used to help out in a group for learning German and found that because children, being autonomous learners, would constantly join and leave it, there was basically no progress, since there were always beginners in the group.  Although this is problematic, particularly because it means you can't learn by immersion, it does at least perform the function of reassuring children about learning another language and making the process more familiar and associated with fun.  Nonetheless I was always quite frustrated (and this was pre-transition, so frustration was a major part of my life at the time, but I still don't think the situation was ideal) that progress was not being made.  However, clearly I am talking about what I've called Sassenach families here, i.e. white families with an ethnically English background growing up in England, and there are plenty of counter-examples to that, such as Cornish families, other "Celtic" families, families with South Asian origins, Muslims, Jews and families with non-British parents, all of which were found among us at the time.  There are a lot of resources there in fact, which are frequently underexploited in schools.  Japanese proceeded rather differently for us, as it happened spontaneously when the children watched Japanese television and anime, although it didn't seem to get very far.

This brings up a further approach to second language learning which has been tried among home edders, although I haven't seen it first hand.  This is the attempt to apply learning through play without using foreign language input beyond what's known within the group.  It's more an experiment in exploring the limits of what learning through play can do, and was tried with Japanese.  It's similar to "social sight reading", where the shapes of words are acquired, often by osmosis, rather than through phonics or other formal methods.  For instance, a small child may recognise the "WAIT" sign on a pedestrian crossing and know not to cross while it's lit, or know the logo of a supermarket or the title of a favourite TV programme from their look rather than by actually reading them.  Fully literate adults do this too:  if other words are written in the style of the "Coca-Cola" logo, it may take some time to recognise that they don't in fact refer to a soft drink at all.  Similarly, learning Japanese through play might lead someone to recognise trade names such as Kawasaki and Yamaha, and perhaps words such as karate, karaoke or sushi.  Beyond that, certain Japanese characters could be recognised, including 東京 (Tokyo) or the exclamation sounds in manga, and closer examination of the words would reveal other features, such as the probable meaning of "kara" as "empty" and "te" as "hand", and the fact that consonants and vowels alternate with few exceptions, consonants rarely occurring together.  Beyond that, though, learning would grind to a halt unless one was actually in Japan or exposed to a lot of Japanese culture.  The question then arises of what can and cannot be learned through play as a general principle.

On this blog and elsewhere online, when I have tried to bring up the subject of home ed I have very much seen it as an opening line in a discussion. Although this does sometimes happen, the normal situation is that my thoughts disappear into a void and there's very little response.  I know there are many home edding families out there who have approached second and further language learning in various ways, and I'm really hoping against hope that this time, at last, this blog entry is the beginning of a discussion.  I don't expect this, but I would very much like to be proven wrong.

Tuesday, 27 September 2016

Basic Income - A Case Study

This is here now because Corbyn is proposing a basic income scheme, something I strongly agree with in a society where money exists, and I want to use my work to illustrate how such a scheme, had it existed since I started out as a herbalist and home edder, would have benefited the people I come into contact with.

For those who don't know, a basic income scheme is a guaranteed unconditional minimum income for all.  Social security is currently conditional, the apparent idea behind that being that it's an incentive to find paid work.  This is a flawed idea.  Against the idea of a basic income, naturally, is the thought that there would be less incentive to work, to which my answer would be that if work is worth doing, it's worth doing for free, but in the meantime you need money to keep going so there has to be some compensation for your time.  Anyway, let's get on with it.  And yes, this most definitely is left-wing propaganda, and since left-wing ideas are good sense, that's not a problem.

Over the past two and a bit decades my main activities have been home ed and herbalism, hence the title of this blog.  Home edding is an extension of parenting which is done for the benefit of society as a whole as well as for the children's sake.  It generally results in better-adjusted adults who engage more socially, are more motivated in their work, are more likely to start their own businesses and earn higher wages than adults who have been through the school system.  The average home edded child is said to be three years ahead of their peers at school.  I'm sure you don't need to hear all this again, but it bears repeating.  It could also be observed that once one has deliberately had children, one has a duty to them and to society to parent them as well as one can.  Moreover, to some extent schooling is childcare, and in other circumstances adults perform the functions of childminders, after-school clubs, parent and toddler groups and so forth.  All of this is done as a matter of course rather than as a paid chore.

As I continued to home educate, I began to offer workshops and the like on a variety of subjects which I hoped would interest and stimulate the children concerned as well as provide them with an ultimately useful set of skills and knowledge.  This worked quite well.  In some circumstances I would ask for a small sum of money, and after some time I noticed that this was providing about a third of my income.  I then decided to concentrate on it, and the income dried up, which is not surprising.  If you're looking for an equivalent in wider society, this was rather similar to the role of a schoolteacher, although very differently structured.  However, unlike a schoolteacher's role, it cost the state absolutely nothing, and we also saved the state several thousand pounds a year by not sending our children to school.  Over a ten-year period this would have added up to something like £100 000.

There was, however, pressure to work for money during that period which restricted our ability to concentrate on these activities, and my need to charge families for participation made it too expensive for some of them.

Now, suppose a basic income scheme had been in place while I was doing that.  I could have provided all of this for free (at the point of delivery, as the saying goes).  Moreover, my horizons would not have been narrowed by the fear of penury and the fact of poverty.  Nor would the situation have arisen where my choosing to make the educational side of my activities a main source of income led to a reduction in those activities.  All of those things could have been avoided by having such a scheme, and whereas it would cost the taxpayer a lot, it would also save the taxpayer at the very least that £100 000 I mentioned earlier.

Now for the herbalism.

I've practised as a herbalist for seventeen years now, and have seen thousands of patients in that time.  I've managed to help nearly all of them, and moreover I've succeeded in keeping many of them away from the NHS, where they would have taken up time and money.  However, I've also needed to charge them, and although I operate a sliding scale there are limits to what I can afford to do.  I also do benefit appeals for free and give out a fair amount of advice without asking for payment.  All this is very good of course.  However, like most other herbalists, I cannot make a living on the herbalism alone; most ongoing herbal practices are propped up by other sources of income.  Now, in recent months I myself have had a reliable source of income, and have found that over that period the income from herbalism has risen considerably, because I am no longer in the position of having my horizons and plans restricted emotionally and mentally by the lack of income I previously experienced.  I believe my practice has been an asset to the community even though my income is small.

Once again, let's look at this in terms of basic income.  Had I had a guaranteed income over all that time, it looks very much like my business would have grown and the taxes I paid would have increased accordingly; the evidence for this is that my income from herbalism is increasing right now due to the security of another source of money coming in.  Even if it hadn't grown, though, the benefit to my patients would have been considerable, and that benefit would have been paid forward.  Patients whose health improves are likely to be more economically productive and less likely to cost the NHS as much money.  A major reason why I haven't been able to do this as easily is fear, and the effect it has on future planning and rationality.  That fear could have been dispelled by a basic income.

You may ask, of course, why I wouldn't just have sat back and let the money from basic income roll in without lifting a finger.  The answer to this is twofold.  Firstly, that's no way to live.  It would lead to mental health problems and unhappiness; people want to be occupied and useful, and that's something we should trust in each other.  Secondly, I honestly don't believe people actually do that when they have a secure source of income unrelated to how much work they do.  Rich people don't, for example.  Just because Bill Gates is a billionaire, that hasn't stopped him from working, and poor people on social security also work.  They just work without being paid for it.  They parent, do housework, volunteer for socially useful causes and so on.

Even if this weren't true, though, why would you even care?  People are of infinite value already simply because they are people, and that needs to be recognised.  Isn't it worth paying that amount of public money simply to remove the motive for doing socially harmful work and to lift an enormous burden of fear and stress from people's everyday lives?  Not to mention the burden on society of mental and physical illness, crime and law enforcement, which is financial as well as social.

That's just a personal view on how a basic income scheme would have helped me.  The research is out there to support it for other reasons.  That policy alone would make voting Labour worthwhile if they adopt it, and don't let anyone talk it down for you because it would be bloody brilliant for the whole of society, not just the poor and underemployed.

Wednesday, 25 May 2016

English Spelling And History

Two things annoy me about how we write in this country.  One of them is our insistence on refusing to count in duodecimal.  We count in tens, like most of the rest of the world, and we have decimal currency and metric units, again like the rest of the world.  This is purely and simply disabling and there's no real reason for doing it.  People don't complain that much about numerical notation even though there is basically no justification for making arithmetic harder for everyone.
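Since duodecimal may be unfamiliar, here's a minimal sketch of what counting in twelves looks like in practice (in Python; the choice of "X" and "E" for the digits ten and eleven is just one convention among several, not anything the post specifies):

```python
# A quick illustration of duodecimal (base twelve) notation.

def to_duodecimal(n: int) -> str:
    """Render a non-negative integer in base twelve, using X for ten and E for eleven."""
    digits = "0123456789XE"
    if n == 0:
        return "0"
    out = []
    while n > 0:
        n, r = divmod(n, 12)   # peel off the least significant base-twelve digit
        out.append(digits[r])
    return "".join(reversed(out))

# Twelve divides evenly by 2, 3, 4 and 6, so common fractions come out neatly:
# a gross (144) is written "100", and a third of it (48) is written "40".
print(to_duodecimal(144))  # "100"
print(to_duodecimal(48))   # "40"
```

The arithmetic advantage claimed for base twelve rests on those divisors: halves, thirds and quarters all terminate after one duodecimal place, whereas a third in base ten recurs forever.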

The other thing is of course English spelling.  English spelling is far from phonetic and makes literacy much harder for ourselves and people trying to learn English.  Moreover, for some reason the slightly more logical American spelling system is considered infra dig by most Brits, which is quite annoying in itself considering how much people complain about spelling.  However, unlike the use of base ten for numbers, English spelling does have some value and I think I've probably written about that elsewhere on this blog so I won't go on about it here.  The history of English spelling is quite interesting and carries with it the history of the English-speaking people, so it's an educational resource in that, and other, ways.

English writing begins before the English language even existed.  Around the beginning of the Christian era, Germanic tribes in what is now Northern Italy came into contact with the writing of the Rhaetic people, who may have spoken a language related to Etruscan or possibly a Celtic or Italic language, and adapted it for their own language.  Germanic was also written in an Etruscan-like script on the Helmet of Negau, possibly added later, in the form transliterated into our modern Latin alphabet as "HARIGAST TEIWAZ" - possibly "Harigast the Priest".  The word teiwaz lives on in today's English as the first syllable of the word "Tuesday", and means "god".  It's cognate with the Latin deus and the Sanskrit devas, both of which again mean "god", and with the Greek Ζεύς.

The first widespread script used for Germanic languages was the Elder Futhark:



This was used to scratch inscriptions on the likes of bone, wood and stone, and resembles to some extent earlier rock carvings found in the prehistoric Germanic area.  The earliest Germanic writing found in Britain is in this script.  This is the fifth-century roe deer ankle bone found at Caistor St Edmund, Norfolk, which has the word raihan written on it, thus:

The word is usually read as "roe", fittingly for something written on a deer bone, though it has also been connected with the verb "row", as in the thing you do with oars.  At the time that verb would've been reduplicative, meaning that certain forms referring to events in the past repeated the stem, and coincidentally this type of verb survived longer in East Anglia than elsewhere in the seven kingdoms of the heptarchy established by the Germanic invaders.  There are a couple of interesting things about this spelling.  One is that it uses a different character for "I" than usual, indicating that the Proto-Germanic "AI" had a particular kind of pronunciation by this point.  The "H" also has a single bar, unlike the later Old English rune for the same letter, and this single-bar form is also found in Scandinavia.  This artifact dates from within three decades either side of the officially recorded arrival of the ancestors of the English in this land.  Before the arrival of Hengest and Horsa, however, there were already Germanic people living in Britain as part of the Roman army, including Saxons.

For the next century and a half, Old English was written using Anglo-Saxon runes, an alphabet known as the futhorc:

The last rune, stan, is said to be spurious.  There are not a huge number of inscriptions in this script, although runes themselves continued to be used later as symbols for the words they stand for.  In fact I still use runes today when I'm writing something in public I don't want people to be able to read - ideas for confidential matters involving people I know to pray for when I'm in church, for instance, or medical notes - which is apt, since the word "rune" originally meant "secret".

At this point in time, spelling was very simple:  you just wrote down what you heard.  Several different dialects of Germanic were spoken in the Heptarchy, meaning that there was different spelling in different parts of the British lowlands, but this spelling reflected genuinely different pronunciations and accents rather than arbitrary variation corresponding to no logical plan.  Reading was always done out loud at the time, so it consisted of simply pronouncing the letters written on the page, and this continued well into the Middle Ages.

It appears also that there were still native Latin, and of course Celtic, speakers in lowland Great Britain at this time, the former of whom, if literate, would've been using the same alphabet as we do again today, but not much is known about them.

In 597, Augustine came to Kent and is said to have established the King's School in Canterbury, although to my mind there is a suspiciously long gap between that event and the earliest records of the place.  In doing so, he would've brought the Latin alphabet here and is the ultimate reason this blog entry uses it.  At around the same time, Irish missionaries entered more northwestern parts of Great Britain and introduced the same alphabet there.

Anglo-Saxon writing is dominated by West Saxon, since at that time and for a while after, the Kingdom of Wessex was dominant among the Anglo-Saxons.  The spelling and pronunciation of English were again a pretty close match.  There are, incidentally, ways of establishing how languages were pronounced before the advent of recorded sound, but I can't be bothered to go into them here.  Just take my word for it.  West Saxon spelling is pretty clearly based on Latin, but there were a few sounds in English which didn't exist in Latin and weren't even close.  Two of these were written using the relevant runes, namely þ (þorn) for the sound now represented as "TH", and ƿ (ƿynn) for "W".  That said, in some writing W was already written "UU", and "TH" was even written like that in some places.  Another sound, A as in today's southern pronunciation of "cat", was written using the Latin digraph for the diphthong we now pronounce as "eye", that is, æ, also known as æsc - ash - after the name of the rune.  A further letter, ð (eð), appeared later to indicate the softer "TH" sound.

West Saxon didn't have a particularly strong influence on later English compared to other dialects, so today it seems idiosyncratic, particularly in its use of "eo" and its longer-vowelled sister.  However, this isn't really a spelling anomaly as far as anyone can tell, except that I suppose it's possible that people with other accents might sometimes have felt the pressure to "talk posh".  Coincidentally, "eo" has re-entered the English language due to a change in the pronunciation of "L", and is now found, for example, in the word "melt".  There is a little deviation from pronunciation also in the "ea" and long "ea" sounds, which started with æ when spoken.

Then we got invaded by the Danes of course, but that doesn't seem to have made much difference to spelling.  Some people think that English might actually be completely Scandinavian, but in any case the Danes and the English spoke languages similar enough to be more or less mutually intelligible, and may have developed a patois to make themselves understood, from which modern English is descended.  After a period during which they controlled much of the Midlands, including Leicester, they left as a political power, though not necessarily the people themselves, and for the next hundred years or so we all sort of spoke English again.  However, one of the changes was that the sound represented by Y, which was previously like the French U today, fell together with the sounds of long and short I, so that Y and I came to be pronounced identically.  This led to the familiar old-fashioned looking confusion between Y and I and spellings like "hys" and "ys" for "his" and "is".  Another difference was that by this time, the English spoken in the Midlands was becoming more important than West Saxon.

Then, of course, the Normans invaded.  This meant that English became the language of the oppressed poor, who were largely illiterate.  The spelling of English then went two ways.  In the South, it started to be written using French spelling conventions, meaning for example that the previous long U, though still pronounced "oo", was now written "ou" as in the French «tout».  This eventually became our "ow" sound.  Meanwhile, in the Midlands and North, English carried on being spelt as it was pronounced, more or less, I'm guessing because the Normans had less influence there.  The runes were used less, possibly because they had pagan overtones to the Normans, and were replaced by "TH" and "W" to some extent, though the old thorn survived in shorter words, and even today is recognised in phrases such as "Ye Olde Coffee Shoppe", where the supposed "Y" is in fact thorn.

During the twelfth century, some kind of spelling reform seems to have been attempted, possibly by just one monk called Orm.  He complains about how people are mispronouncing English and attempts to indicate short vowels by doubling the consonants after them and leaving them single after long vowels, using accents to show long vowels at the end of words and using yogh (ȝ) for the soft sound of G and the newer letter "g" for the hard G.  This didn't catch on, but it does give us a clue as to how Orm's English was pronounced at the time.

Another innovation from this period is the use of "wh" for the earlier "hw".  It's thought that this, along with digraphs like "th", arose from the French scribal habit of using "ch" to indicate a sound which at the time was like our modern "ch", and therefore also a bit like the hard C.

England was basically a Norman colony by this point, but as time went by, ever more territory was lost in France and the focus of English monarchs became England.  This led to the return of English as the vernacular, until in 1362 the Statute of Pleading became law.  This allowed English to be used in court due to the loss of good knowledge of Norman French.  However, the English Crown only dropped the claim to France at the start of the nineteenth century, by which time France was a republic anyway.

English began to be written much more often at this point.  A drift can be seen in London English from a southern dialect to a Midlands one, meaning that the so-called "Queen's English" as heard today is originally not southern at all.  Southern English is more or less extinct nowadays, although there are a few remnants of it in words such as "vixen" and "vat", and the use of Z sounds in the Southwest instead of S at the beginnings of words.

What happened next is a bit of a mystery.  At some point soon after Chaucer, all the long vowels in English began to change pronunciation dramatically in a process which continues today.  The long U, by then written "ou", and the long I began to change their pronunciation into diphthongs which in a sensibly spelt language would be written "au" and "ai" respectively.  This destabilised the whole system of long vowels, leading to a shift in all of them.  This is known as the Great Vowel Shift and may have been caused by the movement of people southwards after the Black Death killed much of the population of northern England, a process which also led to the increase in wages in the South and the Peasants' Revolt against the Poll Tax in the late fourteenth century.  The practical result of the vowel shift is that the way we write our vowels is dramatically different from virtually all other languages.

Most people were of course still illiterate in the fifteenth century, although, the printing press having been invented in that century, the production of books suddenly got a lot cheaper.  William Caxton began to standardise English spelling, and the Reformation later led to many more people reading the Bible in English.

Later in Tudor times, the English began to explore the New World and establish colonies there.  Early American writing shows very little standardisation in its spelling.  Another effect of colonisation was that many new words entered the English language, for instance the names of fruits and vegetables new to European knowledge.  These were spelt in various ways, and thus more variant spellings entered English, although the ground had been laid for this possibility by the fact that our language had already been opened to foreign influence via Danish and Norman French many centuries before.  Many other languages, for instance Mandarin Chinese, absorb foreign loanwords much less readily.

Later, attempts were made to Latinise spelling, sometimes introducing dubious silent letters such as the B in "debt", which is related to the Latin debitum but was never really present in the English spelling of the word, which used to be "dette".  Another non-historical example is the spelling of "could", which acquired an L by analogy with "should" and "would"; it was never there historically.  Changes in pronunciation also meant that many letters, such as the K in "knight" and "know", ceased to be pronounced but stayed in the spelling, although in those cases they do serve to distinguish those words from the identically pronounced "night" and "no".

American independence led to a split into two standard dialects of English, one spoken in the Commonwealth and the other in North America.  Canada uses an intermediate version of the spelling system, associated with Melvil Dewey, the inventor of the Dewey Decimal System used to classify library books.  His advocacy of spelling reform can be seen in the spelling of his own name, which he altered from "Melville".  Dewey was in fact much more radical than what survived in American spelling.  For instance, he spelt "philosophy" with two F's, and tried to introduce the macron, a horizontal line over long vowels still used in pronouncing dictionaries.  Although these didn't catch on, other ideas of his were adopted and survive in today's American English.

Speaking of American spelling, users of British English in the twenty-first century frequently find themselves hesitating over the way we spell words because the internet exposes us to so much American material.  There are now more words ending in "-ize" in British English than there used to be for this reason.  I would also contend that just as there was a time before English spelling was standardised, we may now be leaving the standardised phase in the history of the language, given the common variant spellings of English found in places like social networks, textspeak and internet fora.  While we are very attached to the way English is spelt, I suspect this may soon become a thing of the past, and we will probably return to the idea that we spell as we hear the words in our heads when we write.  There's nothing new about this.  Also, unlike other countries we have never had an official body to standardise the language, which again reflects our history as a collection of cultures.

So there you go.  That's basically my account of the history of English spelling and as you can see it also constitutes a bit of a history lesson.  I can't guarantee everything is correct in this, so you may want to look into it yourself.  I just thought it would be a helpful and hopefully interesting educational resource, which is why it's on this blog.


Tuesday, 10 May 2016

Degrees, Minutes And Seconds

Here's a printable 360 degree protractor:


If you print this on plastic film and cut it out, you will have a protractor.  This to me is a tiny example of how post-scarcity would work because although you could in theory go down the local stationer's and buy a protractor, this one's available for free plus the cost of the ink and plastic.  In an ideal world, oh never mind.

The reason I've put this here, apart from it being a resource for home edders, which in theory is what this blog's supposed to be about, is that it clearly shows the 360 degrees of a circle.  Most people know degrees well enough to be able to understand what's meant by a ninety or forty-five degree angle.  One's surroundings can also be thought of as a 360 degree circle, meaning that the traditional description of objects being at "two o'clock" or "six o'clock" can generally be supplanted by talk of degrees.  It also turns up as a way of describing positions in the sky and on the surface of this planet and other roughly spherical celestial bodies such as Mercury, turning up as latitude and longitude:
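As a tiny sanity check of the "clock face" idea, here's a sketch in Python.  The function name is just made up for illustration; the only fact it relies on is that twelve hour positions divide 360 degrees into steps of 30.

```python
# A minimal sketch: converting "clock face" directions into degrees,
# treating twelve o'clock as 0 degrees and going clockwise.
def clock_to_degrees(hour):
    """Each of the twelve hour positions is 360/12 = 30 degrees."""
    return (hour % 12) * 30

print(clock_to_degrees(2))   # 60 -- "two o'clock"
print(clock_to_degrees(6))   # 180 -- "six o'clock", directly behind
```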



In the case of the sky, i.e. the celestial sphere, there's a slightly different system where right ascension and declination replace longitude and latitude respectively, and whereas declination also uses degrees, right ascension uses hours, minutes and seconds.  I tend to think of it as representing the time something rises above the horizon even though that can't be exactly what it is:


This last system, however, is confusing because the words "minutes" and "seconds" refer to different things in different directions.  There are of course 86 400 seconds in a day, so the smallest complete unit of right ascension, a second, is in fact fifteen seconds of arc (arc seconds).  This is the bit I didn't explain.
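The fifteen-to-one ratio falls straight out of the arithmetic, as this little sketch shows:

```python
# One second of right ascension corresponds to 15 seconds of arc,
# because the full 360-degree circle is covered in 86 400 seconds of time.
ARCSEC_PER_CIRCLE = 360 * 60 * 60     # 1 296 000 arc seconds in a circle
SECONDS_PER_DAY = 24 * 60 * 60        # 86 400 seconds of time in a day

arcsec_per_time_second = ARCSEC_PER_CIRCLE / SECONDS_PER_DAY
print(arcsec_per_time_second)  # 15.0
```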

Minutes and seconds are not just units of time but also of angle.  Perhaps surprisingly, neither of these two systems is completely metricated, although metric systems for time and angle do exist.  I'll get back to those.  A minute of arc, or arc minute, is a sixtieth of a degree, that is, a relatively tiny shift in direction which would be microscopic if it were marked on the above protractor.  However, in terms of things such as the night sky, the circle of vision and the surface of Earth, arc minutes are fairly large.  An arc minute, coincidentally, is roughly the angular size of the smallest object visible to the naked eye.  An object an arc minute across would be about seven hundredths of a millimetre across from a distance of twenty-five centimetres.  In fact it's not strictly true that a minute of arc is the smallest part of the visual field discernible, because luminous objects are still visible even when they're much smaller, and the brighter they are, the smaller they can be.  This means, for example, that the immense distances to the stars still don't render them invisible to most people, even though they are far smaller than one minute of arc across.
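The arc-minute-at-reading-distance figure can be worked out with the small-angle approximation.  This is a sketch; the function name is invented for illustration:

```python
import math

# Small-angle approximation: the physical size of an object is roughly
# its angular size in radians times its distance.
def size_at_distance(angle_arcmin, distance_mm):
    angle_rad = math.radians(angle_arcmin / 60)  # arc minutes -> degrees -> radians
    return angle_rad * distance_mm

# One arc minute seen from 25 cm (reading distance):
print(round(size_at_distance(1, 250), 3))  # 0.073 -- about 0.07 mm
```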

Both big lights in the sky, the Sun and Cynthia - the Moon - are well over a minute of arc across.  They are in fact both around thirty minutes of arc in diameter from here, meaning that total solar eclipses are possible.  This is such an unlikely coincidence that unless there's some reason why habitable planets need such a ratio, solar eclipses are not only a wonder of the world but of the entire Milky Way.  There may well be no other planets in this whole Galaxy which have such neat solar eclipses, and in some imaginary Galactic Empire where faster than light travel is possible, this could make Earth a top tourist destination.

Arc minutes also turn up in the form of nautical miles.  If you went a quarter of the way round the equator you would have travelled ninety degrees.  A single degree along the equator is around sixty-nine miles.  Hence in this context a mile on the equator can be thought of as a measure of angle as well as distance, but it's not a convenient unit because the planet is not exactly 25 000 miles in circumference.  It ought to be exactly 40 000 kilometres in circumference, because the kilometre was originally defined as a ten thousandth of the distance between the North Pole and the Equator along a line across the surface passing through Paris.  This is not quite how it worked out, due to surveying inaccuracies and the fact that the planet is slightly tangerine-shaped rather than absolutely spherical.  However, a nautical mile is defined in a similar way, as a minute of latitude, which is 1852 metres, slightly more than the 1609-metre statute mile.  Knots are then defined as nautical miles per hour, thereby combining two sexagesimal (sixty-based) systems - one of angle, one of time - in a very neat and appealing way.
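The nautical mile drops out of the metric ideal almost exactly.  A sketch, assuming the idealised 40 000 km circumference rather than the real, slightly squashed Earth:

```python
# Rough figures behind the nautical mile, assuming the original metric
# ideal of a 40 000 km circumference (the real Earth is close but not exact).
circumference_km = 40_000
deg_of_arc_km = circumference_km / 360   # one degree along a great circle
arcmin_km = deg_of_arc_km / 60           # one minute of arc

print(round(deg_of_arc_km, 1))   # 111.1 -- km per degree, roughly 69 statute miles
print(round(arcmin_km * 1000))   # 1852 -- metres per arc minute: the nautical mile
```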

Unsurprisingly, minutes themselves are divided into sixty seconds in this system too.  A second of arc or arc second is a sixtieth of an arc minute, which is such a tiny angle that there are 3 600 in a degree.  Again, this sounds fairly useless, but again it has many functions.  I'll just mention two.  Earth's orbit is about 300 million kilometres across, so, measured six months apart, objects in space shift very slightly against their background in a process referred to as parallax.  The nearest star apart from the Sun shifts by slightly under one arc second in position due to this.  The distance at which an object would shift by exactly this angle is known as a parsec - the parallax of one second - and is just over three and a quarter light years.  Another use of an arc second is to describe the size of distant objects in the sky, so for instance yesterday Mercury was twelve arc seconds across as it crossed the Sun.  Pluto, for comparison, is only around a tenth of an arc second in diameter as seen from here.
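The parsec can be recovered from its definition with a few lines of trigonometry.  A sketch, taking the parsec as the distance at which a one astronomical unit baseline (the Earth-Sun distance) subtends one arc second:

```python
import math

AU_KM = 149_597_870.7        # one astronomical unit in km
LIGHT_YEAR_KM = 9.4607e12    # one light year in km

one_arcsec_rad = math.radians(1 / 3600)      # one arc second in radians
parsec_km = AU_KM / math.tan(one_arcsec_rad)  # distance subtending that angle

print(round(parsec_km / LIGHT_YEAR_KM, 2))  # 3.26 -- just over 3 1/4 light years
```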

As I mentioned before, there are metric and decimal versions of angles.  One of these is the radian:


A radian is easy to describe but for me it was always very hard to see the point of.  It's simply the angle you get if you lay the radius of a circle along its circumference, which works out as 180 degrees divided by π, around 57.3 degrees.  Its value is that it makes the maths frictionless:  the length of an arc is simply the radius times the angle in radians, and the calculus of the trigonometric functions only comes out neatly in radians, which is why programming languages tend to use it, so I have to use it.
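A quick illustration of why radians are convenient:  an angle of one radian cuts off an arc exactly as long as the radius, with no conversion factor.  A sketch:

```python
import math

# An angle of one radian (about 57.3 degrees) cuts off an arc whose
# length equals the radius: arc length = radius * angle in radians.
radius = 5.0
angle_deg = 57.29577951308232   # one radian expressed in degrees

angle_rad = math.radians(angle_deg)
arc_length = radius * angle_rad

print(round(angle_rad, 6))   # 1.0 -- one radian
print(round(arc_length, 6))  # 5.0 -- the arc is exactly one radius long
```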

Another decimal angle system was used on the Peters projection map, though not on this one:

By Strebe - Own work, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=16115242
I'm not a fan of the Peters projection but that's another story.  Some versions of the Peters world map divide the "globe" not into degrees but gradians.  There are four hundred of these in a circle instead of three hundred and sixty; they were introduced in France around the time the metric system was devised, and are still used in surveying.
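Converting between the two is simple, since 360 degrees and 400 gradians describe the same full circle.  A sketch, with made-up function names:

```python
# A full circle is 360 degrees or 400 gradians, so the ratio is 9:10.
def deg_to_grad(deg):
    return deg * 400 / 360

def grad_to_deg(grad):
    return grad * 360 / 400

print(deg_to_grad(90))    # 100.0 -- a right angle becomes a round number
print(grad_to_deg(400))   # 360.0 -- a full circle either way
```

The appeal for surveyors is visible in the first line of output:  right angles come out as round hundreds.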

I hope this makes more sense now.

Monday, 9 May 2016

Sic Transit Mercurius Monday

This afternoon, there was a transit of Mercury.  That is, from our perspective the planet Mercury passed in front of the Sun.  This happens at intervals of seven, thirteen and thirty-three years, in either May or November, and takes up to several hours.  If I remember correctly, and I might well not, the May transits are better than the November ones because during the May ones Mercury is closer and so looks bigger:  about ten arc seconds across in November and twelve in May.  Apparently people don't know what arc seconds are, so here's an explanation.  A circle is divided into three hundred and sixty degrees.  Each degree is divided into sixty minutes of arc, and each minute of arc into sixty seconds of arc.  Thinking of the horizon as a three hundred and sixty degree circle, and the sky from horizon to horizon as a hundred and eighty degrees, the smallest non-luminous object which can be seen with the naked eye is about one minute of arc across.  Mercury is up to twelve seconds across, so it would need to be magnified five times to be visible even as a dot crossing the Sun.
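The subdivision, and the five-times magnification figure, can be checked in a couple of lines:

```python
# Arc units: 60 arc minutes per degree, 60 arc seconds per arc minute.
ARCMIN_PER_DEG = 60
ARCSEC_PER_ARCMIN = 60

arcsec_per_degree = ARCMIN_PER_DEG * ARCSEC_PER_ARCMIN
print(arcsec_per_degree)    # 3600

# Mercury at 12 arc seconds, against a naked-eye limit of one arc
# minute (60 arc seconds):
mercury_arcsec = 12
naked_eye_limit_arcsec = 60
print(naked_eye_limit_arcsec / mercury_arcsec)  # 5.0 -- magnification needed
```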

I've never knowingly seen Mercury.  During the transit of Venus on 8th June 2004, I managed to project the Sun onto a piece of paper and it was quite clear.  Transits of Venus are much rarer than those of Mercury because Venus, being further from the Sun than Mercury, takes longer to orbit it and so passes between us and the Sun less often.  They occur in pairs a few years apart, separated by a century or so.  This sounds odd at first until you realise that in order for Venus to pass in front of the disc of the Sun it has to cross the plane of Earth's orbit at just the right moment.  Normally Venus passes by the Sun on either side, but when the alignment is right it's clearly outlined against the disc.  It also shows the "black drop" effect, where Venus looks like a drop of black liquid as it passes the limb of the Sun.  This is traditionally put down to its atmosphere, which is extremely dense, although it now seems to be largely an optical effect.  At ground level, the atmosphere of Venus presses down about as hard as the water a whole kilometre down in the ocean here on Earth.  I found the transit of Venus to be particularly interesting because Venus is about the same size as Earth, and so it gives you an idea of the scale of the Solar System.

Mercury does this much more often, but has no substantial atmosphere and is only a little larger than Cynthia as well as being further away than Venus on the whole, so although its transits are much more frequent they're harder to see.  This is what I got today when I was trying to project the transit of Mercury:

I've posted this as a full size image and you still can't see Mercury on it.  I don't think I was in the right state of mind to image it.  In other words, it didn't work.

What you need to do is get either a refracting telescope or, as in this case, a pair of binoculars and place a plain white surface at the appropriate distance to get a clearly focussed image.  This is a little risky since you're focussing sunlight and it can break prisms and lenses and cause fires.  However, I think people worry too much about this since as a child I used to filter the Sun through overexposed negative photographic film and look directly at it through binoculars and I can still kind of see.  I have multiple blind spots but probably not because of this activity.

What I like about transits, solar eclipses and looking at the Sun safely, from a home ed perspective, is that they're daylight astronomy.  Two big problems with astronomy and home ed are that if you have a bedtime, you generally don't get to do much of it at night and, as a consequence of this, it tends to get rather abstract.  For this reason I didn't do a lot of astronomy with the children when I was doing Big Science, and I'm not keen on it as a teaching subject, although it and linguistics are probably my two favourite subjects personally.

This is how the planets concerned were lined up this afternoon at about 2 pm:


As you can see, the Sun, Mercury and Earth are in a straight line.  This happens to be true seen from the side as well:


The next question might be why Mercury looks bigger in May, i.e. why it's closer then.  This is because of Kepler's First Law of Planetary Motion:  planets move in elliptical orbits with the Sun at one focus, i.e. the Sun is never at the exact centre of an orbit, because that would be too improbable, and orbits are never perfectly circular, because that also would be too improbable.  The orbit of Mercury is also more elliptical than that of any other true planet, though less so than Pluto's.  Here are the orbits of Mercury and Earth side by side:
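An ellipse's shape fixes its nearest and furthest points:  perihelion is a(1 − e) and aphelion a(1 + e), where a is the semi-major axis and e the eccentricity.  A sketch using rough published figures for Mercury:

```python
# Mercury's orbit, approximately: semi-major axis 0.387 AU,
# eccentricity about 0.206 (rough published values).
a_au = 0.387
e = 0.206

perihelion = a_au * (1 - e)   # closest approach to the Sun
aphelion = a_au * (1 + e)     # furthest point from the Sun

print(round(perihelion, 3))  # 0.307 -- AU at perihelion
print(round(aphelion, 3))    # 0.467 -- AU at aphelion
```

The roughly fifty percent difference between the two is why the timing of a transit matters so much to Mercury's apparent size.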




The Sun is clearly off to one side in the left hand picture and Mercury also clearly has a much less circular orbit.

This is a close up of Mercury:


Seen from even closer, even experts can't distinguish between it and Cynthia although it lacks the seas of the latter.  The craters, however, look very similar.

In fact, although both of them are barren cratered balls of grey rock, Mercury actually has quite a lot in common with Earth and Mars rather than Cynthia.  Mercury and Earth are the densest planets, with Venus close behind, whereas Mars and Cynthia are both only about two-thirds as dense as Earth, Venus and Mercury.  It's been suggested that the reason for this is that Mars and Cynthia are formed from the debris of the outer layers of the original body making Earth up, and that we are in fact the bits that sank to the bottom.  Mercury is a bit bigger than Cynthia but quite a bit smaller than Mars, the figures being:

Cynthia:  3476 km diameter
Mercury:  4879 km
Mars:  6792 km
Earth:  12756 km

Even though Mercury is quite a bit smaller than Mars, its higher density gives it a surface gravity about the same as Mars's.  This is interesting because it means that if Mercury were at about the same distance from the Sun as Earth is, it would probably have an atmosphere about as dense as that of Mars and also be roughly the same temperature as we are, notwithstanding a weaker greenhouse effect.  However, the air would probably be somewhat thinner than the Martian air because Mercury has a lower escape velocity, as smaller planets do, so gases leak away into space more easily:  a smaller, lighter planet has a shallower gravity well, so fast-moving gas molecules at the top of the atmosphere can escape it more easily.  Even so, if Mercury were at the same distance as Earth is from the Sun, its sky would be blue and have occasional clouds in it, and it might even have a little liquid water on its surface.  However, it wouldn't really have seasons because, unlike Earth and Mars, it hardly tilts at all.  What it might have, if its orbit were still the same shape, is simultaneous seasons in both northern and southern hemispheres, driven by its varying distance from the Sun.  It also has a fairly strong magnetic field, partly due to its large iron core, which would protect it from radiation somewhat, and at a greater distance from the Sun it would have a shorter day (its rotation currently takes about 59 of our days) because of the weaker tidal forces.  All of this, rather surprisingly, adds up to Mercury being a friendlier place for life in this scenario than Mars is.
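The similar-surface-gravity claim can be checked with Newton's formula g = GM/r², using rough published masses and radii:

```python
# Surface gravity g = G * M / r^2, with approximate published values.
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

bodies = {
    # name: (mass in kg, radius in m) -- rough figures
    "Mercury": (3.301e23, 2.4397e6),
    "Mars":    (6.417e23, 3.3895e6),
}

for name, (mass, radius) in bodies.items():
    g = G * mass / radius ** 2
    print(name, round(g, 2))  # both come out near 3.7 m/s^2
```

Mercury packs about half Mars's mass into a much smaller ball, and the two effects nearly cancel at the surface.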

Mercury, ironically, probably has no mercury on its surface, since during the day it's hotter than the metal's boiling point.  Strangely, however, Mercury has water ice at one pole, in the crater Chao Meng-Fu, because it's in permanent shadow.

I was going to say a lot more about Mercury but it's past my bedtime, so goodnight!

Saturday, 30 April 2016

Dreamed-Up Alphabets

The last entry on this blog mentioned Vai.  Sarada complained that this was an unwarranted irrelevance, but a while after writing it I realised that the example makes quite an interesting story in connection with autonomous learning and invention.

The Vai people live in Liberia and Sierra Leone.  Almost all languages spoken today in Africa use either the Latin or the Arabic script.  There are some exceptions, notably Amharic, spoken in Ethiopia and using the Ge'ez script, and the Berber languages of North West Africa, which use the Tifinagh scripts.  Coptic, the final stage of Ancient Egyptian, uses an alphabet derived mainly from Greek with a few extra letters from the Demotic script, but is now only a liturgical language and is no longer spoken.

Vai, by contrast, uses this syllabary:


Unlike alphabets, syllabaries use one sign per syllable, as the name suggests.  The history of the Vai script is quite remarkable.  It was first written down in the early nineteenth century by Momolu Duwalu Bukele and is said to have been revealed to him in a dream!  This could, of course, merely be a picturesque origin story, but it's entirely feasible that it happened.  The Bach flower remedies are another example of a system said to have been revealed in a dream, and they are quite involved and complex.  I'm inclined to trust the claim that the script did come to Bukele in a dream, because the doubts expressed about it seem to stem more from distrust of the people concerned than from any evidence.  It's also difficult to know whether to call the appearance of the Vai script discovery or invention:  did he think it up subconsciously or did he consciously invent it?  If the former, how much is that a revelation and how much is it subconscious invention?  It's similar to the issue of confabulation and false memories, which edges towards Mandela Effect territory.  Whatever else was the case, it's highly likely that Momolu Duwalu Bukele got the idea, consciously or otherwise, from another equally remarkable writing system.

Liberia has an unusual history, and forgive me if you know this because I have no idea what other people do and don't know.  Its history is evident in its flag:


which of course resembles a certain other flag.  In the early nineteenth century, the American Colonization Society established Liberia as an African homeland for free African Americans because they believed their presence in the American South would make slaves there rebel.  This policy was supported by Abraham Lincoln.  Later, other colonies were established in the area which did include freed slaves.  In 1847, the area became an independent republic based on the US constitution.  More recently, Liberia became known for being used as a flag of convenience and had the largest shipping registry in the world.

Due to its connections with North America, members of the Cherokee nation also emigrated there on occasion, and an early Vai inscription was in fact found on a Liberian house belonging to Austin Curtis, who was Cherokee.  This is significant because it so happens that the Cherokee language itself is one of the few Native American languages to have its own script.  Excluding Quechua with its quipu, the knotted strings used for I think accounting purposes, the only languages with their own script there which come to mind are Yucatec Maya, Nahuatl (which is arguably not a form of writing as such) and the Cree syllabary.  There may be others but I don't recall them.

Cherokee is unusual by virtue of the fact that its writing was invented by someone who was previously illiterate, namely Sequoyah.  Here it is in its modern form:



This is Sequoyah.  He was born in the late eighteenth century and invented the syllabary in the early nineteenth.  It was so successful that literacy among the Cherokee soon surpassed that of the European-American settlers around them.  Although he originally intended for the characters to be ideograms - one symbol per word - he changed his mind and settled on one symbol per syllable, as it is today.  The Vai script has a similar history in that it too used to have ideograms but has mainly dropped them, with one or two exceptions.  The Latin script has a few widely used ideograms today, including "@" and "&", and the numerals we use could also be seen in that way, although they're not strictly part of our script as such, being used by peoples all over the world, including the Cherokee.

Sequoyah developed his script by studying his copy of the Bible, which he, being illiterate, couldn't read.  The script is still used today by the Cherokee.

There are a couple of other examples of illiterate people creating scripts, about which I know far less.  One of them is Hmong, a language spoken in parts of China and Indochina.  This is written in a script called Pahawh Hmong:

Once again, this is a syllabary, invented in the twentieth century by one Shong Lue Yang, also known as the Mother Of Writing.  He was an illiterate farmer and basketmaker living hand to mouth in Vietnam, who probably did see writing at some point without being able to read it.  Starting in 1959, he received a series of visions in which divine twins taught him this writing and commanded him to pass it on to his people, which he proceeded to do.

Finally, although there may be others, there is the Nüshu script, a secret writing by otherwise illiterate women in Hunan province, China:


Nüshu came into existence many centuries ago, but nobody knows exactly when beyond some time between the years 900 and 1600.  Most of the population was illiterate at the time, but women learned to write this script, which they used for poetry, and again it's syllabic.  In a sense, like some other forms of communication such as Láadan, it's a specifically female mode of writing.  It was suppressed by the Japanese during their occupation of China because of the possibility of its being used for secret messages, and again later by the Maoists during and after the Cultural Revolution for the same reason.  The last native user of Nüshu died in 2004, although the script isn't lost, since it's known in academic circles.  I wish I knew more about it, and shortly will.  There's a website here.

The notable feature of all these scripts is that they're all syllabaries, in spite of the writing systems used around them where they arose.  Hmong and Nüshu arose surrounded by an ideographic script, namely Chinese characters, while Vai and Cherokee were surrounded by alphabets.  To me, this suggests that the "natural" form of human writing is neither alphabetic, ideographic nor even pictographic but for some reason syllabic - one symbol per syllable.  And this has happened independently at least four times.
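As a footnote, all four of these scripts have since been encoded in Unicode, so their characters can be handled by any modern computer.  A quick Python sketch, using the block start points from the Unicode standard (this listing is my addition, not part of the histories above), prints the first few characters of each:

```python
import unicodedata

# Start of each script's Unicode block, per the Unicode standard:
# Cherokee U+13A0-13FF, Vai U+A500-A63F,
# Pahawh Hmong U+16B00-16B8F, Nushu U+1B170-1B2FF.
blocks = {
    "Cherokee": 0x13A0,
    "Vai": 0xA500,
    "Pahawh Hmong": 0x16B00,
    "Nushu": 0x1B170,
}

for script, start in blocks.items():
    chars = " ".join(chr(start + i) for i in range(4))
    # Older Unicode databases may lack a name, hence the fallback.
    first = unicodedata.name(chr(start), "(unnamed)")
    print(f"{script}: {chars}  (first character: {first})")
```

Whether the characters display properly depends, of course, on having fonts for these scripts installed.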

Why is this here, on this blog?  Well, to me this is a supreme example of what can be achieved by supposedly uneducated, illiterate people without any formal instruction, at least from other human beings.  Also, at least two of these scripts arrived in the human psyche from dreamlike states, at least according to their origin stories.  There's a sense, then, in which we don't need to be taught to read and write, although on the whole, if we never were, we would presumably end up with hundreds or thousands of mutually unreadable forms of writing.  It also illustrates how we can learn, discover and invent massively useful things in our sleep, or at least in non-waking states of consciousness, even if it turns out that the information is from supernatural sources.  That's not a necessary supposition of course, and I prefer to think of these scripts as, in both those ways, marvellous instances of how amazing human beings are as a species.