Evidence Has To Do With Trust


Evidence has to do with trust: evidence that God exists, evidence we need in order to believe.

To believe without evidence would be innocence. Is innocence the loss of trust? Innocence is oblivious, and oblivious is happy. I think an evidence complex is rather common in many adults, and in those who are always unsatisfied. It comes down to a real flaw in character, assuming that the plain enjoyment of life is an ideal characteristic. Forgive me if I sound like an imbecile. http://www.down-syndrome.org/editorials/2032/

The importance of evidence-based practice


Sue Buckley

This editorial discusses the ways in which evidence-based practice should be developed and evaluated, from first hypotheses to gold standard blind randomised control trials, but also acknowledges that parents, educators and therapists usually have to make decisions about how best to help children with Down syndrome in the absence of this evidence. Guidance is offered on the ways in which new therapies can be evaluated, arguing strongly for objective evaluations and the avoidance of unproven and scientifically implausible approaches.

Buckley SJ. The importance of evidence-based practice. Down Syndrome Research and Practice. 2009;12(3):165-167. doi:10.3104/editorials.2032

What is evidence-based practice?


In this issue we have several contributions that raise the question of how we decide whether a treatment, therapy or educational approach is evidence-based. I have spent my career advocating evidence-based practice, and I was recently challenged to explain what I mean. This is not just a research issue; it is an issue which confronts parents, physicians, teachers and therapists daily, and there is no simple answer.

Rigorous scientific evaluation


Researchers and physicians rightly seek the gold standard for an evidence-based approach. This is a randomised control trial, like the trial of the effects of giving vitamins and other supplements[1] described in the Research Highlights on page 175. They want to know that a treatment is both effective and safe. This requires, at a minimum, a control group who do not get the treatment and an experimental group who do. For effectiveness, we need to see the experimental group doing better than the control group. To ensure the groups are comparable at the start, ideally, children are allocated to one or other group on a random basis. Randomisation is important to try to ensure that any other factors which might influence outcomes are balanced out across treatment groups (for language and cognitive development these could be how stimulating the child's environment is, the number of brothers and sisters, the educational levels or wealth of parents, or the quality of local early intervention services). These other factors may also be measured and their influence investigated, if the study groups are large enough to allow this.
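The random allocation described above can be sketched in a few lines of code. This is a minimal illustration of the principle, not the procedure used in any particular trial; the function name and the fixed 50/50 split are assumptions for the example.

```python
import random

def randomise(participants, seed=None):
    """Shuffle participants and split them into two equal groups, so that
    unmeasured factors (family background, services, etc.) are balanced
    on average between treatment and control."""
    rng = random.Random(seed)  # a seed makes the example reproducible
    shuffled = list(participants)
    rng.shuffle(shuffled)
    midpoint = len(shuffled) // 2
    return shuffled[:midpoint], shuffled[midpoint:]

# Allocate 20 hypothetical participants (identified 0..19) at random.
treatment, control = randomise(range(20), seed=42)
```

Because the assignment is random, any confounding factor is as likely to end up in one group as the other, which is exactly what makes the later comparison of outcomes meaningful.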

Measuring effectiveness
Outcome measures need to be objective and robust. Ideally, the researchers measuring outcomes should not know which group each child is in. This was the case with the supplementation study[1]: the parents of the children and the research team were blind, in that they did not know which treatment each child was receiving. This is very important because, when any treatment begins, everyone wants to see progress, and just the additional attention a child is getting may improve their progress. If everyone knows that the research is evaluating a treatment or therapy which may improve spoken language, then both the treatment group and the comparison group are likely to pay much more attention to the children's language learning. In clinical trials, patients taking a placebo, which has no known benefits (though they do not know whether they are on the placebo or the drug being tested), will still report improvements. There have been recent examples of open (not blinded) pilot studies, where everyone knew which treatment group a person was in, that have shown positive effects, but when the treatment has then been subjected to a blind trial, no effects have been found. For example, the use of donepezil hydrochloride (Aricept) to improve language and cognitive outcomes in children with Down syndrome looked promising in an open trial[2], but the recent blinded clinical trials were terminated due to insufficient evidence of benefit (see http://www.clinicaltrials.gov/ct2/results?term=Down+syndrome).

Safety
Even if a treatment is effective and improves development or symptoms of illness, demonstrating safety is a longer-term issue. Trials for medicines in most countries go through rigorous steps. The research may start with laboratory research with animals suggesting that a treatment might work; any such animal studies then need to be replicated to ensure the evidence is reliable. The next step may be treatment of another animal species before human trials. Once we get to human trials, there are typically four required phases: three before the drug is licensed for use and one after. Phase 1 is a trial with a small number of volunteers (around 30) to test for safe dosages and any immediate side-effects. Phase 2 assesses effectiveness, safety and what dose to give for effect, usually in a trial with up to 200 people comparing the new treatment with an alternative treatment or a placebo. Phase 3 tests for effectiveness in a larger number (thousands) to gather information on more unusual side-effects. If these phases produce evidence of effectiveness and safety, a license may be granted for general use. In Phase 4, the initial use of the drug in clinical practice is monitored and evaluated, and data are collected systematically on side-effects. These clinical trials take time, and this can be very frustrating for those who might benefit if the treatment is effective. This is especially the case when the very first animal studies are reported in newspapers with headlines that imply cures or breakthroughs, which is often the case. In a careful review, Kathleen Gardiner describes the progress that has been made in research laboratories using the mouse models bred for Trisomy 21 research. Several of the studies she has described have been hailed, in the press and even at conferences, as having potentially dramatic effects on memory function and learning. However, most have not yet even been replicated in further mouse model studies, and replication in animal studies is the next step. Some of the effective treatments may not have safe equivalents for human use; if these can be found, then clinical trials need to begin. Treatments that seem to work therapeutically in mice may not have the same effects in humans. For these reasons, a group of scientists (researchers, practitioners and clinicians) and Down syndrome associations recently worked together to issue the caution on the protocol being recommended by the Changing Minds Foundation.

Education and therapy


The need for gold standard evaluations applies equally to therapies and teaching methods. The truth is that we have very few studies providing evidence of effectiveness or lack of harm in these areas. Everyone will know what we mean by safety when thinking of giving a child pills: what are the side-effects and the potential physical harm? Once we think about therapies and education programmes, they may directly harm the child by having a negative effect on their learning or development, but there is a range of other problems to consider. These include the time spent on the therapy and the effect of this on the other activities parents and children can be involved in, the effects on brothers, sisters and other family members, and the financial costs of some therapies. Financial costs will be relevant to families, to service providers and to schools. All this means that evaluations of therapies and teaching methods should be as rigorous as they are for medicines and supplements (unfortunately, supplements are considered as foods and are not rigorously controlled, though they are potentially just as harmful as medicines).

If no gold standard evidence exists?


However, as a parent or a teacher, I need to know how to teach my child to talk or to read now, so I still need to make choices even though hard scientific evidence is not available. How might we approach this? When asked to explain my approach to evidence-based practice in education and therapy, I state the following:
1. The gold standard for evidence-based practice is a randomised control trial of an intervention or teaching approach, as described above for medicines. At DSE International, we are about to embark on such a study to evaluate a reading and language intervention for 6-9 year olds with Down syndrome, over a 4 year period and at a cost of some 500,000, so you can see why this is difficult to do! Grants of this size are difficult to obtain and we are delighted to have obtained this money (see http://blogs.downsed.org/downsed/2008/10/downsed-wins-05.html). Where we have gold standard evidence, we are on firm ground, but very few teaching approaches in education have been subjected to this rigorous testing. We are looking for 270,000 for an evaluation of interventions to improve speech clarity, and we are working on a bid for evaluating memory training, recently shown to produce dramatic positive effects for other slow learners and children with ADHD. In order to make progress with good research, adequate funding is essential.

2. Smaller, less well-controlled studies may provide some degree of confidence in a method and are better than no evidence. We have done some of these in the past, for example with memory training, and shown positive effects, but much more needs to be done and work in this area is rarely replicated.

3. Where no evaluation studies exist, my approach is to look at what we know about how all children learn (for example, how they learn to read) from the scientific literature, and at what current best practice advice seems to be based on that work. In other words, does a new teaching approach being recommended make sense given what we already know about how children learn? Is it based on sound hypotheses?

4. For children with Down syndrome, we would then consider what we know about their learning strengths and weaknesses from research before suggesting how we might adapt the way we would teach typical children, to make learning more effective for children with Down syndrome. For example, as we know they tend to have a verbal working memory weakness, we adapt by using all the visual cues and supports for memory that we can think of.

If a therapy or teaching approach does not meet the above criteria, then we may be moving into the realms of quackery. My approach is supported by Stephen Barrett's definition of quackery on the Quackwatch site:
"All things considered, I find it most useful to define quackery as the promotion of unsubstantiated methods that lack a scientifically plausible rationale. Promotion usually involves a profit motive. Unsubstantiated means either unproven or disproven. Implausible means that it either clashes with well-established facts or makes so little sense that it is not worth testing". http://www.quackwatch.org/01QuackeryRelatedTopics/quackdef.html

Stephen notes the profit motive, and when talking with parents I point out that if they are paying for treatments that have not been objectively evaluated, they should ask why not. Proof that a treatment works would lead to increased profits, so why are the promoters not conducting rigorous evaluations? Unfortunately, it is not easy to change practices; even when no evidence exists to support them, unproven treatments often still flourish. They may even be taught in professional training programmes.

The perspectives of parents


A recent small-scale qualitative study has explored the views of parents about the use of complementary or alternative medicines[3]. The parent interviews illustrate a variety of reasons, including the belief that they are enhancing their children's health and development, the need to feel they are pursuing all avenues, and the wish to take charge of and advocate for their children. Some parents feel that the professionals they meet have negative attitudes and low expectations; others feel that the research and medical communities are not doing enough. In recent years, practically relevant research into brain function and into learning and development for children with Down syndrome has increased. It still takes time to go from research to practice but, hopefully, if we focus funds on the most promising areas, we will reduce some of the frustrations felt by families and move forward more quickly.

References
1. Ellis JM, Tan HK, Gilbert RE, Muller DPR, Henley W, Moy R, Pumphrey R, Ani C, Davies S, Edwards V, Green H, Salt A, Logan S. Supplementation with antioxidants and folinic acid for children with Down's syndrome: randomised control trial. British Medical Journal. 2008;336:594-597. doi:10.1136/bmj.39465.544028

2. Heller JH, Spiridigliozzi GA, Doraiswamy PM, Sullivan JA, Crissman BG, Kishnani PS. Donepezil effects on language in children with Down syndrome: results of the first 22-week pilot clinical trial. American Journal of Medical Genetics Part A. 2004;130A:325-326.

3. Prussing E, Sobo EJ, Walker E, Kurtin PS. Between desperation and disability rights: a narrative analysis of complementary/alternative medicine use by parents for children with Down syndrome. Social Science and Medicine. 2005;60:587-598.

http://allthingsanalytics.com/2011/12/06/prove-it-the-importance-of-evidence/

Prove it! The Importance of Evidence


About a month ago I read a blog entry from @alanlepo where he spoke about the importance of evidence; see Recommendations Done Right Via IBM Connections. It struck a chord, and I promised I would share my thoughts on the subject. So here they are :-)

Some years ago I made a slightly tongue-in-cheek prediction that Content is Dead. Well, to be more precise, I suggested that content will be replaced with a cloud of shared insights (the knowledge base) backed up with concrete evidence (content). As the breadth of knowledge increases and the trust network grows, there will be increasingly less need to go back to the evidence. Knowledge will be the cloud sitting on evidence, integrated seamlessly into applications.

OK, this may sound completely futuristic (I blame all the science fiction I watched as a child); however, as weird as it may sound, there is growing evidence that might support such a position. The volume, variety, and velocity of content is already overwhelming, and it's not going to get better any time soon. We already frequently rely on analytics to help us navigate the information and make sense of it all, and this is likely to become more prevalent with the increasing adoption of big data analytics.

Even today, often unbeknownst to ourselves, our perception of the world around us is being influenced by analytics algorithms, and nowadays more specifically by social analytics. At the simplest level, a search engine chooses what content we should read through its ranking algorithms. How many of us ever scroll to the second or third page of search results? But that's only the beginning, and it is the least concerning, since we still end up with a document that we can choose to read or not. It's when the content starts to become secondary that we really need to think about the importance of evidence. For example:

- Influence algorithms assign values that could potentially have a significant impact on someone's reputation (if we let that happen, which I sincerely hope we don't). If you were to ask them to show the evidence backing up their opinion, you would likely be given some mumbo jumbo about algorithms, reach, amplification, blah blah blah, followed by an argument about patent protection. Sounds a bit like the incomprehensible financial instruments that got us into our current economic crisis :-(

- Recommender systems, such as expertise location, similarly apply analytics to large volumes of content in order to identify the person best suited to help you. Again, what evidence do they present to help you understand whether their hypothesis is correct or not?

- Q&A systems are also increasingly using analytics to derive an answer from large volumes of content. In this case we are frequently not going to the original source(s) but just taking the answer at face value.

- The same goes for the new generation of decision support systems, driven by analytics of large volumes of data, which advise you on how best to treat a patient, handle a client problem, or improve brand sentiment.
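The fix argued for here — returning the evidence alongside the answer — can be illustrated with a toy expertise locator. This is a hypothetical sketch (the function, the scoring by term overlap, and the sample data are all invented for illustration), not how IBM Connections or any real recommender works; the point is only that the result carries its supporting evidence with it.

```python
def recommend_expert(question_terms, documents):
    """For each author, count how many of the question's terms appear in
    the documents they wrote, and return the top author together with the
    matching terms -- the 'evidence' a user could inspect and challenge."""
    scores = {}
    evidence = {}
    for author, text in documents.items():
        matched = [t for t in question_terms if t in text.lower().split()]
        scores[author] = len(matched)
        evidence[author] = matched
    best = max(scores, key=scores.get)
    return best, evidence[best]  # the recommendation AND why it was made

# Hypothetical corpus: one document per author.
docs = {
    "alice": "tuning garbage collection pauses in the JVM",
    "bob": "quarterly sales figures and brand sentiment",
}
expert, why = recommend_expert(["jvm", "garbage", "collection"], docs)
```

A system built this way can always answer the question "what evidence backs up your opinion?": instead of a bare score, the caller gets back the terms (or documents) that produced it.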

Now, I am an analytics person, so I am clearly NOT suggesting that we kill all analytics. I luv the stuff. BUT what I am suggesting is that we need to ensure that there is a greater level of transparency around the algorithms, and that the underlying evidence is made available in a digestible way. Thankfully, most of the respected analytics vendors are taking this provision of evidence extremely seriously and are actively integrating such traceability into their applications. My colleagues in IBM Research are building this into the social analytics machinery that underpins IBM Connections; see Recommending Strangers in the Enterprise. IBM Watson is another example of a significant breakthrough in analytics which places a premium on the gathering, tracking, and presenting of evidence as part of its comprehensive analysis system.

So, to wrap up this post, I would call out to my analytics brethren and ask that, as we increasingly integrate analytics into the fabric of society, we stop treating the consumers of that analytics as dummies and proactively look for ways through which we can make our algorithms transparent and share the evidence underpinning our analysis results.
