-
[Dorothy Bishop] I'm going to talk today about evaluating alternative intervention approaches to dyslexia.
-
[slide with talk title]
-
The conventional approaches that you get really tend to rely on the fact that we've now got a lot of evidence
-
that most children with dyslexia have problems in what is called phonological awareness,
-
that is, they don't necessarily hear all the different sounds in speech,
-
and therefore have difficulty relating them to letters when they are trying to read.
-
And most of the interventions that are mainstream these days would focus on trying to train children to identify sounds in words and relate them to letters.
-
And this sort of intervention has been shown to be effective in a number of large-scale studies.
-
But nevertheless, it has to be fairly prolonged for some children, and there are children for whom,
-
even though they can learn this way to actually sound out words and read, they don't necessarily read fluently.
-
It's still an effort for them, and they don't sort of get to the degree of automaticity that you might expect,
-
And it's certainly the case that methods that work for many children don't work for all children,
-
and there is a hard core of children who remain very hard to treat.
-
It is for this reason that many parents do get very concerned about whether there is something else they should be doing
-
if they are finding that their child is either not getting intervention, or that the intervention doesn't seem to be working very well.
-
And there are a whole load of things out there that are on offer, and the problem for the parents, I think -
-
and/or indeed for adults who themselves, might want to have further intervention for dyslexia -
-
is that they want to know, "how do I distinguish something that might work for me",
-
from something that is just some sort of snake-oil merchant who is out there to make money.
-
And that's what I want to try and address today.
-
principally from the perspective of how you might evaluate scientific evidence that people put forward.
-
But perhaps before going onto that, it's worth going into some relatively commonsense things.
-
I would say that there are certain things that should ring alarm bells if people are advertising some sort of treatment for child dyslexia.
-
The first thing is if the intervention has been developed by somebody who has no academic track record,
-
no experience of doing research in this field, and hasn't published anything in this field,
-
if the intervention isn't endorsed by people in the mainstream dyslexia field,
-
that should also sound a note of caution.
-
Of course, the mainstream people aren't always right.
-
It's possible that somebody with no background will develop something marvelous.
-
But if that were the case, you would expect it to be pretty quickly picked up by people in the mainstream,
-
who are really, on the whole, pretty keen to find things that will work.
-
And you obviously want to look at whether somebody is asking for a lot of money for something that hasn't been proven.
-
And what is also, to my mind, a worrying sign, is if somebody promoting a treatment is relying heavily just on testimonials
-
from individuals who claimed to have been cured, rather than having any sort of proper scientific evaluation or kind of controls.
-
And it's worth noting that human beings have a tendency to be terribly impressed by testimonials,
-
and even myself, as somebody with scientific training, I find that if, you know, I've got headaches and somebody comes along and says:
-
"I was cured by such and such, and I went to my herbalist and it worked",
-
you know, you're often very tempted to be much more swayed by that sort of evidence than by a pageful of numbers and figures.
-
And this is just a human tendency: we are naturally built to take advice from other people and to rely on what they tell us.
-
But in the context of these sorts of interventions, that's really quite dangerous,
-
because, when somebody gives a testimonial, that's just one person, their own individual experience.
-
And the people you don't hear from tend to be the people who tried it, and it didn't work.
-
And you don't know how many of them there are: there may be thousands of them.
-
But they're not going to publicize the fact that they tried it and it didn't work.
-
And so, testimonials are often very much at odds with more scientific evaluations.
-
So what I want to turn to now is: when somebody says there is scientific evidence for what they're doing, how you should interpret that.
-
And that's jolly difficult even for scientists sometimes: there is disagreement - let alone for the general public.
-
But again, I think, there are some sort of general rules of thumb that you can go by
-
for telling that a treatment is likely to be effective.
-
And when I discuss this, I'm going to illustrate it by taking the example of the Dore treatment - that's DORE, named after Wynford Dore, its originator.
-
And I'm picking on this largely because it is a non-mainstream treatment that isn't widely accepted by the experts,
-
and yet it does claim that there is some scientific evidence to support it,
-
which has led the scientists to look at it quite critically and quite carefully,
-
which is what we would do with any scientific evidence that comes along:
-
once it's out in the public domain and published, people tend to go and look at it as carefully as they possibly can.
-
Now, the Dore method is interesting to us, because it does illustrate the case where there is disagreement
-
as to whether the evidence shows that it's effective or not.
-
And so, what I want to explain is why it is the case that despite this published evidence,
-
most of the experts are not impressed by the efficacy of the Dore treatment.
-
But the general points that I'll make would apply to any other treatment that was out there for which evidence was being produced.
-
So, first of all, what is the Dore method? Well, it's a method that has been proposed for curing problems
-
that are thought to originate in the part of the brain called the cerebellum, which is at the back of the brain,
-
and it was developed by Wynford Dore as a method for helping his dyslexic daughter.
-
He has written a book about the history of how this came about,
-
and he was a classic instance of a parent who was rather desperate to help his daughter who, for many years,
-
had been through the educational system and failed, and was getting increasingly depressed.
-
And he tried various things, he talked to various experts, and he ended up with this program that's been put forward,
-
which is an individualized program, where the child follows various sorts of exercises,
-
which are done for about ten minutes twice a day, over quite a long period of time,
-
varying, depending on the severity of the problem, from maybe 6 months to 2 years.
-
And the child is assessed at regular intervals and different exercises may be prescribed.
-
Now, the theory behind the Dore method is that dyslexia and other learning difficulties -
-
it's not just dyslexia it claims to help, but also attention deficit problems and hyperactivity -
-
are thought to arise within the cerebellum: the cerebellum just doesn't develop normally,
-
and the argument is that you can have different cerebellar impediments in different people,
-
and that's why you can get this range of different symptoms,
-
but that you can diagnose them by specific tests of mental and physical coordination.
-
And what you are then supposed to do is these exercises, which are not anywhere fully described in the public domain,
-
because they are commercially sensitive, but there are some examples given, and it's clear that what they do
-
is focused largely on training balance and hand-eye coordination in children.
-
So you might be asked to stand on a cushion on one leg, or to throw a bean bag from one hand to another
-
while you are doing that, or to stand on a wobble board and balance, or to follow something with your eyes in a particular way.
-
So, the idea is that these are all things that the cerebellum is involved in, and that by training up the cerebellum, you may improve its general abilities.
-
So, what is the evidence for this underlying theory?
-
Well, it's not a proven theory, but there is some support for it.
-
Certainly, people trying to look at what is going on in the brain in dyslexia have proposed many different theories
-
about what the underlying causes might be.
-
If you look at the brain of somebody with dyslexia in a brain scanner, it typically looks totally normal.
-
There's certainly no big holes in the head or anything like that, that you are going to see on a scanner.
-
But the argument is being made that there may be regions of the cerebellum that are perhaps slightly smaller than they should be
-
or not functioning quite as they should be.
-
And this theory has some support, although not everybody would agree with it
-
and there are certainly other equally plausible theories around at the moment.
-
The notion is that the cerebellum is important for getting things automated.
-
So when you learn to drive a car, first of all, it's very slow and effortful, and you have to think about everything you do.
-
By the time you are a skilled driver, it's no longer the case that you have to do that:
-
you just drive around without thinking about it.
-
You can do all sorts of other things while you are driving.
-
So, the argument is that with reading, most people, similarly, become very automatic in how they learn to read:
-
you do it without thinking about it, but for the dyslexic it remains effortful, because the cerebellum is not functioning normally
-
and it's the cerebellum that helps you get your skills automatized.
-
And in support of this, it has been argued that in many people with dyslexia, there are some associated problems with motor coordination,
-
with physical skills and so on, and that too could be a sign of a problem with the cerebellum.
-
Again, that's fairly controversial; it's not been found in all children, and the arguments go to and fro.
-
But this is not the sort of theory that is dismissed out of hand by the mainstream. People are debating it.
-
The difficult stumbling block, though, for the Dore approach to treatment comes with the idea that
-
if you train the motor skills - that is, the sort of coordination between different muscles and movements,
-
and between the eyes and hands - this will somehow have a knock-on effect on things like reading.
-
And indeed, David Reynolds and colleagues, who published the initial study on the treatment,
-
describe it as something of a leap of faith, because the cerebellum is actually known to be a very complicated organ,
-
with lots of different regions, which are fairly independent from one another.
-
So there is no real reason to suppose that if you train one part of the cerebellum, it will have somehow a generalized benefit.
-
And indeed, you could say: "Well, if that were true, then children who happened to do a lot of skateboarding,
-
or ping-pong, or perhaps ballet dancing - things that require balance and coordination -
-
should be protected against dyslexia." There is really not much evidence for that;
-
on the contrary, there are some very good sportsmen and gymnasts with dyslexia.
-
So it is hard to see the logic of saying "train these motor skills and somehow the whole of cerebellar function will improve".
-
But what does the published evidence look like? Because the theory might be, you know, questionable,
-
but basically, what the parents are going to say is, "What matters is, does it work?"
-
Well, there is a published study on the intervention, which claims that it shows that it really does work
-
if you compare children who have the intervention and children who don't.
-
And two papers have been reported - one from the initial phase of the study, and the other from a subsequent phase -
-
And they were reported in the journal Dyslexia which, in 2003, published the first paper,
-
which started with a sample of just under 300 children who were all attending a primary school.
-
And the researchers went in and screened all the children on the dyslexia screening test,
-
to pull out children who would be suitable for enrolment in the study.
-
But the first thing that is important to note is that these were not children who had a very high rate of diagnosis of dyslexia.
-
So, there were 35 in the group, and about a third of those came out as having a strong risk of dyslexia on this dyslexia screening test.
-
Another 21% came out with a mild risk, but about half of these children were not really at risk in either category:
-
they were just picked because their scores were relatively low compared to the other children.
-
And there were only a total of 6 children who had previously been diagnosed with dyslexia, out of the 35.
-
There were a couple with a diagnosis of dyspraxia and one with ADHD diagnosis.
-
So this is not really a sample consisting, on the whole, of children with severe problems.
-
There were only a few in there with major difficulties.
-
Nevertheless, the originators of the treatment would argue that even quite mild problems might be worth treating with this,
-
and so you could argue this study is nevertheless of value.
-
So what they did - they started out well in this study: they divided the children randomly into treated and untreated groups,
-
which, as I am going on to explain later, is an important part of a good study.
-
And if you look at the results that are described on the promotion materials of the DORE organization,
-
they are all in the book that Dore published, "Dyslexia: The Miracle Cure", where he described the results as stunning
-
and said that reading age increased threefold, comprehension age increased almost fivefold
-
and writing skills by what he described as "an extraordinary 17-fold".
-
Of course, everybody reading that thinks, "Wow, my child is going to take off like a rocket if we put him on this intervention."
-
But unfortunately, these figures are really a classic instance of how statistics can be manipulated in a very misleading way.
-
So, for a start, they were not based on any comparison between the control children and the treated children.
-
Instead, they just took all the children who were treated and looked at how they did on a group reading test
-
that had been administered by the school every year.
-
And the children had had this on two occasions prior to the intervention - so, 3 months before it started and a year before that -
-
and on two occasions after the intervention, after this whole long 4-year period.
-
And what the researchers did was to simply plot the average scores of the group over these four time points
-
and show that if you compared the amount of change from the first time point to the second,
-
which was before they had had any treatment, it was a certain amount
-
and if you then compared the second to the third time point - so the treatment had been going on between those two -
-
there was a different amount of change.
-
And then they divided one by the other and showed that there was this threefold improvement.
-
But it's a very, very misleading way of depicting these data, because if you look at them on a graph, here,
-
you can see that the only odd thing about the data - well, there are two odd things about the data:
-
one is that at most time points, these children are reading at absolutely normal levels.
-
So it's not clear why they are regarded as having risk for dyslexia;
-
and the one time point when they're not, is the time point 3 months before they are involved in the study,
-
where there is a bit of a drop. But it's really not an impressive demonstration of change
-
and this division of one time period by another is very misleading, because it just gives double weighting
-
to this one low period of three months before the treatment started.
-
And they did the same thing again with these other figures of massive increases that they talk about,
-
using data from the SATS tests administered by teachers, which are not really regarded as particularly precise or rigorous tests,
-
and really group children in a fairly global way at level 2, 3 or 4.
-
Level 2 is average for a 7-year-old, 3 is average for a 9-year-old, and 4 is average for an 11-year-old.
-
And to give you an idea of the sort of misleading nature of these massive changes they talk about,
-
on the writing test, where there is this incredible change that they talk about, of a 17-fold increase,
-
the score at age 8 - the average score - was 2.53, which is about what you'd expect from an 8-year-old.
-
At age 9, it was 2.56, which is a little bit better, but not much.
-
And then, they argue, the intervention came in, and at age 10, the children scored 2.95.
-
They are still rather below where they ought to be at the age of 10.
-
It looks as if on this particular writing assessment, the children were just rather creeping along.
-
But because the difference between 2.53 and 2.56 is less than the difference between 2.56 and 2.95,
-
they make a big computation of dividing one by the other, coming out with the number 17 - which is actually wrong: it's 13.
-
So there is a 13-fold change. But if you look at the overall numbers, this is really not such an impressive gain at all.
-
It's really a very misleading way of presenting the numbers.
-
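To see how a small absolute gain turns into a dramatic-sounding ratio, the arithmetic can be sketched in a few lines using the writing levels quoted above (an illustration only; the figures are the ones given in the talk):

```python
# SATs writing levels quoted in the talk (level 2 ~ average 7-year-old,
# level 3 ~ average 9-year-old, level 4 ~ average 11-year-old).
age8, age9, age10 = 2.53, 2.56, 2.95

pre_gain = age9 - age8    # change over the year before the intervention
post_gain = age10 - age9  # change over the year of the intervention

# Dividing one tiny year-on-year difference by another produces the
# headline ratio, even though the absolute improvement is modest.
print(f"pre-treatment gain:  {pre_gain:.2f} levels")
print(f"treatment-year gain: {post_gain:.2f} levels")
print(f"ratio: {post_gain / pre_gain:.0f}-fold")
```

A gain of 0.39 of a level, against 0.03 the year before, is what becomes "a 13-fold increase" - the ratio magnifies noise in the small baseline difference.
-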
So, most people would say this is really smoke and mirrors, in terms of using statistics in a way that isn't valid.
-
The other thing to note is that all these results that have been given so much publicity
-
in promoting the treatment - all these massive changes - don't involve the control group at all.
-
They've just talked about, "Well, we've got these children, before treatment they did this,
-
and after treatment they did that, and it has all gone up".
-
And of course, if scores do go up after treatment, it's not necessarily because the treatment works:
-
There are lots of other reasons you need to bear in mind.
-
The first of these is just that, on some things, you get better because you get older,
-
so that if you were to measure shoe sizes before the DORE treatment and after it, it would go up,
-
but it wouldn't mean that it made your feet grow bigger.
-
Now, clearly, that's a silly example in most cases, because people try to use measures that don't necessarily change with age,
-
or that are adjusted in some way for age.
-
But it's important to bear that in mind when people are talking about changes on things like balance -
-
the Dore program, for example, claims that balance improves dramatically after the program.
-
These are measures that have not been adjusted for age at all, and so, some of these changes could well be due to the fact
-
that the children are getting older and getting better at doing these things because of that.
-
Another uninteresting reason why scores may improve is that the children may be having some other sort of special help.
-
So, if a child is having reading difficulty, they may very well be getting some special help at the school, in addition to following this program.
-
And that may be what's causing the change, rather than the particular intervention you are interested in.
-
What's very well known, of course, is the placebo effect, which is a sort of concept coming from medicine,
-
which also says that you can get better just because you think you are going to get better, because you think somebody has done something effective.
-
And in the case of educational treatments, you can see effects where -
-
because the teachers and the parents and the children themselves are all full of expectations of how this is going to improve them -
-
there is more motivation: everybody gets positive attention and this itself can cause positive effects.
-
The fourth reason is often neglected, because it really doesn't affect things in medicine so much,
-
but in education, where you are using things like reading tests, it's actually rather important:
-
you can have practice effects. So you can get better at some things just because you've done them before.
-
And we've seen this quite a lot with language tests, for example, that we give to children,
-
where, the first time you test a child, they don't know what to expect, they don't know what's coming,
-
you ask them to do something that's unfamiliar and they are a little bit nervous, maybe.
-
You test them again on the same thing a month later: they are much, much better, simply because they've done it before
-
and they are calmer about it, they know what to expect, and so on.
-
So you can get practice effects that can make quite a difference, just because you know what to expect
-
and you are familiar with the whole situation of the test.
-
The fifth reason - and the last one, you'll be pleased to hear - why people may improve for no good reason
-
is the hardest to explain and it's something known as regression to the mean,
-
and it's just a statistical artefact, which has to do with, if you pick somebody because they're bad at something,
-
the odds are, when you test them on a second occasion, they'll be a little bit better.
-
The converse is also true: if you pick somebody who is very good, they tend to get a little bit worse when you test them a second time.
-
Why should that be? The reason this occurs is because our measures are not entirely perfect and accurate.
-
I'm showing a graph here, where we have a measure that is almost perfect,
-
and you test people on two occasions, and you will see that their scores at time 1 and time 2 are identical: we are assuming that there is no genuine change.
-
If you do that, then you don't get regression to the mean, because the measure is perfect
-
and if you test them a second time, they'll get exactly the same sort of score.
-
And what you can see on the right hand side of the graph here, is people divided up according to the average score they started with.
-
So we've put people into groups who were very poor to start with, who were medium, less good and so on.
-
And these are just fictitious data made up to illustrate the point.
-
So you just generate these numbers by saying, "We've got a measure that has this particular characteristic
-
that if you measure on one occasion, on another occasion it remains pretty much the same".
-
So then, you don't get regression to the mean, and people maintain their position across time.
-
So if you then see change, you can say "Well, it's genuine change."
-
But most of our measures are not like that, most of our measures are not perfectly correlated:
-
that means, you measure them at one time, and at another time, and the scores actually change because of all sorts of things:
-
Things like the particular test items that you're using, whether you are in a good mood, whether you've made a lucky guess in some items.
-
And what you can see is that if you do that, some people's scores go up with time and some people's go down.
-
But on average, if you start with a low score, the odds are, you come a little bit closer to the average when you are tested on another occasion.
-
If you start with a high score, you get a little bit worse.
-
And this is nothing to do with genuine change: it's just to do with the fact that our measures are imperfect.
-
And it has been argued that this is a major reason why all sorts of treatments appear to work but don't really work.
-
It's just that it looks as if you've seen a change, and you tend to attribute it to the treatment.
-
Now, this all sounds very depressing, because it means there are all sorts of reasons why we can see change,
-
and how do we distinguish whether we've got a genuine change due to our treatment?
-
But the fact is that you can control for most of these things if you do a study that has a control group.
-
That's why those who are trying to do scientific evaluations are really keen to include control groups in studies and argue that they are essential.
-
Because if you have another group of children who have been selected to be as similar as possible to your treated group,
-
and are given the same tests before and after the period when the treated group is treated,
-
you are actually controlling for the effects of maturation, the effects of any other intervention they might be having,
-
practice effects in particular, and also this dreadful regression to the mean.
-
All of those things can be then taken into account.
-
And in so far as they have effects, what you would expect to see is that you may see improvement in your control group
-
because of these spurious things that we don't really want to see.
-
And then you can ask, "Is there more improvement in the treated group?" - and it is that difference that is really critical.
-
Having a control group who have not been given any treatment doesn't, however, control for placebo effects.
-
So you've still got the problem that maybe your treated group will improve just because everybody is focusing on them with great excitement.
-
But you can actually control for that as well, and it's becoming increasingly popular in this field
-
to say that what you should have is a control group who are given some alternative treatment.
-
So, for example, if you are interested in a treatment that might improve reading,
-
you could give the children the standard educational treatment that they are getting anyway.
-
So if your claim is that you are doing better than a phonological-based treatment,
-
you could have a control group given that treatment and see if you are making really that much difference,
-
or you might prefer to say, "Well, let's treat something else, let's give children training in something completely different
-
that isn't focused on reading, but nevertheless could benefit them in other ways."
-
And then you can do that sort of comparison.
-
So what about the DORE study, because I mentioned at the outset, when talking about this study, that they did have a control group.
-
But so far, in talking about the results, I have only mentioned the dramatic changes that they reported,
-
which ignored the control group.
-
The interesting thing is that when you look at their control group, it illustrates perfectly the importance of having a control group.
-
So, on the dyslexia risk score, where a high score is bad,
-
they had a change in the treated group, from 0.74 to 0.34.
-
So you think: "Wow, that's great, these children's risks for dyslexia have really come down."
-
In the control group, the average score changed from 0.72 to 0.44.
-
Now, you could say: "Well, it's not so big a change."
-
The trouble is, with groups this size, you can't really tell whether that's meaningful.
-
But certainly, what is clear is that both groups improved on the dyslexia screening test,
-
even though the control group had not had the intervention.
-
So, it really illustrates the point very clearly that on a lot of these measures,
-
everybody gets better, even if they are not treated.
-
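The point can be made concrete with the risk scores just quoted: the estimate of any treatment effect is the difference between the two groups' changes, not the raw change in the treated group (a sketch using the figures as given in the talk):

```python
# Dyslexia-risk scores quoted above (higher = worse).
treated_before, treated_after = 0.74, 0.34
control_before, control_after = 0.72, 0.44

treated_change = treated_before - treated_after  # improvement with treatment
control_change = control_before - control_after  # improvement with NO treatment

# The raw change in the treated group (0.40) overstates the effect:
# the untreated controls improved by 0.28 on their own, so the
# apparent treatment effect is only the difference between the two.
effect = treated_change - control_change
print(f"treated group improved by {treated_change:.2f}")
print(f"control group improved by {control_change:.2f}")
print(f"apparent treatment effect: {effect:.2f}")
```

And with groups this small, a residual difference of 0.12 may well be within the noise, which is why the comparison needs a statistical test, not just eyeballing.
-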
Now, if we look at the more precise data they presented - average scores on the different subtests of the dyslexia screening test -
-
I won't talk about all of them, I have got a fuller presentation
-
where I do talk about all the different measures they use
-
and I don't want to be accused of deliberately hiding things,
-
but I think the tests that people would be most interested in are the literacy tests.
-
So, you undertake the DORE treatment because you want to get better at reading and writing, if you are a parent of a dyslexic child, at any rate.
-
So, looking at the results on those tests, what they found was that there were a total of 4 tests that had to do with literacy directly.
-
And on one of those, it looked as if the treated group did better than the untreated group.
-
But there is a problem with that, because on this reading test,
-
the control group were actually right on the average score for their age at the start of the study.
-
So, in a sense, you could argue, "Is there really room for improvement when their score is absolutely average?",
-
whereas it just so happened that the children who had treatment started a little bit lower and therefore showed more improvement.
-
And their improvement was not dramatic, one has to say as well.
-
Their score went up from 3 to 3.5, on a scale of 0 to 10.
-
On the other measures, again it illustrated that on two of them, everybody improved, regardless of whether they had the treatment.
-
And on the third one, nobody changed very much at all.
-
So, this is not dramatic evidence of improvement but you could argue: "Well, nevertheless there was one measure that looked a little bit promising."
-
But they then, in the second phase of the study, went on to give the control group the same treatment, and they published this in 2007.
-
So we no longer have a control group, as everybody has been treated:
-
one group early on, and the other group with a delayed time scale.
-
And they presented the data between time 1 at the start of the study and right at the end of the study, when everybody had had this treatment.
-
But when you look at the results there, it's clear that there really is, you know, no persistent improvement in reading.
-
In fact, the mean scores for the children having the delayed treatment on the reading test
-
have now really gone down, rather than up, at the end of treatment.
-
And the general impression, I would say, is that there is nothing very stunning going on here,
-
certainly nothing that matches the description that you get on the promotion materials for the intervention.
-
So, overall, I would argue that the evidence for gains associated with this treatment is really not at all compelling.
-
First of all, the claims that are made for stunning changes are all coming from analyses where they didn't incorporate the controls
-
and they just argued that any change you see over time must be due to the treatment,
-
without taking into account all these other factors.
-
And on reading measures, where there was control group data available,
-
there was an initial small gain in the treated group, but it wasn't sustained by the end of the study.
-
So, it really doesn't look terribly promising.
-
Now, this is why, in general, I think it's true to say this:
-
I don't know of anybody in the academic dyslexia community who is an advocate of the Dore treatment,
-
other than people who are directly associated with the Dore organization.
-
And so, the reason really is just that the evidence is not at all compelling,
-
although the study was small and you could argue a larger study should be done.
-
There is a real mismatch between the claims that are being made and the evidence that is available.
-
But the interesting thing is also why so many people seem to nevertheless regard this as an effective treatment.
-
If the testimonials are to be believed, there are many satisfied customers and happy parents who feel that their children have been helped.
-
I think there is quite an interesting set of reasons why this may be so.
-
And one is that there is a well known - in the psychological field - well known human tendency
-
to think that something you've put a lot of time and money into was worthwhile.
-
It's called cognitive dissonance, and it means that if you've actually put in the effort, you tend to feel that there was an effect.
-
You have to somehow resolve this sort of inconsistency, otherwise, in your mind.
-
And this was beautifully illustrated, not by the trial of the DORE treatment, but in another trial,
-
which was a very well-conducted trial of something called Sunflower therapy,
-
which is a rather holistic approach to intervention for dyslexia that involves kinesiology and physical manipulation,
-
massage, homeopathy, herbal remedies and neurolinguistic programming.
-
And there was a very rigorous study done for this, and what was interesting about it
-
was that, like so many of these things, they didn't really find a lot of evidence for any greater change in the treated group than in the control group,
-
although, to some extent, both groups were improving at school.
-
What they did find, though, is that the children themselves had higher self-esteem if they had undergone the Sunflower treatment,
-
but that also, 57% of the parents did think that Sunflower therapy was effective in treating their child.
-
So there is a clear mismatch between the objective evidence the study showed on the children's learning difficulties,
-
and what the parents actually thought.
-
It is possible that this could be related to the fact that the children's scores did improve,
-
but if you didn't know that the control children had also improved, you might attribute that to the therapy -
-
but also to the fact that people were again being given a lot of encouragement,
-
there was a lot invested in that treatment, and so there may well have been some sense of cognitive dissonance there.
-
There's also a strong human tendency to be impressed by certain kinds of explanation of dyslexia that sound more biological,
-
particularly those, such as the DORE treatment, that invoke neurology and claim to be doing something to the brain in treating dyslexia.
-
There was a beautiful study published in 2008, not on dyslexia,
-
but more generally on people's tendency to be impressed by scientific explanations.
-
What these researchers did was to give people explanations of psychological phenomena that are well known,
-
and they either gave them a good explanation or they gave them a not very good explanation
-
that was more like just a re-description of the effect, and asked people to judge whether this was a good explanation or not.
-
And what was fascinating about this study was that in general, people were quite good at doing this:
-
even if they had no familiarity or background in psychology, they could distinguish a good explanation from a bad one.
-
But what they found was that if they added some verbiage that just talked about the brain in various ways,
-
and said, "This result came about because brain scans showed it", or "because we looked at the frontal lobes",
-
people were much more impressed with the bad explanations.
-
So a good explanation didn't get any better when you added all this neuroscience waffle,
-
but if you added neuroscience waffle to a bad explanation, people thought it not so bad.
-
And so, there is a tendency to be very impressed by anything that brings the brain into an explanation.
-
And I think this is used by people who then try and add spurious neuroscience sometimes
-
to their accounts of their particular promising theory.
-
And we really shouldn't allow ourselves to be misled.
-
So I think, to sum up, there are a number of barriers to objective evaluation of interventions,
-
which, to some extent, are a function of the human condition,
-
that we are not naturally good at taking in lots of numbers and looking at graphs
-
and trying to sort of take into account alternative explanations.
-
We tend to be impressed when we hear other people tell us that something has worked
-
and it's hard - you have to almost guard yourself against the tendency to do that
-
and to look rather for the hard evidence, to look for the actual numerical data.
-
We have to be very careful when people start giving us explanations that have got a lot of neuroscience in them
-
and check out, is this real neuroscience or is it just put in there to impress us?
-
We have to be aware of the effect of cognitive dissonance and the tendency to believe some things
-
simply because we have invested time and money in it
-
and, most importantly, we have to bear in mind that there will be effects on children's performance
-
of maturation, of our expectations, of just getting practice at things,
-
and there are also these dreadful statistical artefacts that can make it look as if a change has occurred,
-
when it's really not particularly impressive.
-
But I think, if one bears these things in mind, the bottom line is really:
-
"Look for evidence from studies that have got adequate controls"
-
and if you do, you will often be struck, I think, by how far you can see improvements in children even if they haven't had the treatment:
-
lots of things get a lot better just with the passage of time.
-
But if you really want to demonstrate that there has been an effective treatment,
-
you do have to show an improvement relative to a control group,
-
rather than just, somebody started out not so good and is now a little bit better after the treatment.
-
I hope that that might give you some useful indicators when trying to look at new treatments that are out there and on offer,
-
and for a more detailed account of some of this work,
-
there is a PowerPoint presentation with notes on this topic on my website.