Evidence Has To Do With Trust
To believe without evidence would be innocence. Is innocence the loss of trust? Innocence is oblivious, and oblivious is happy. I think an evidence complex is rather common in many adults, and in those who are always unsatisfied. It comes down to a real flaw in character, assuming that the plain enjoyment of life is an ideal characteristic. Forgive me if I sound like an imbecile.

http://www.down-syndrome.org/editorials/2032/
Randomisation helps ensure that other factors which could influence outcomes are balanced out across treatment groups (for language and cognitive development these could be how stimulating the child's environment is, the number of brothers and sisters, the educational levels or wealth of parents, or the quality of local early intervention services). These other factors may also be measured and their influence actually investigated if the study groups are large enough to allow this.
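To make the balancing effect of randomisation concrete, here is a minimal Python sketch (my own illustration, not from the original article; all numbers are invented) that randomly allocates 200 hypothetical children to two groups and compares one background factor across them:

```python
import random
import statistics

# Illustrative sketch (invented numbers): random allocation tends to balance
# a background factor, e.g. parental education on an arbitrary scale, across
# the treatment and comparison groups.
random.seed(1)

children = [{"parent_education": random.gauss(12, 3)} for _ in range(200)]
random.shuffle(children)
treatment, comparison = children[:100], children[100:]

for name, group in (("treatment", treatment), ("comparison", comparison)):
    mean = statistics.mean(c["parent_education"] for c in group)
    print(f"{name}: mean parental education = {mean:.2f}")
```

With 100 children per arm the two group means come out very close, so any later difference in outcomes is unlikely to be driven by this factor; the same logic applies to factors nobody thought to measure.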
Measuring effectiveness
Outcome measures need to be objective and robust. Ideally, the researchers measuring outcomes should not know which group each child is in. This was the case with the supplementation study[1]: the parents of the children and the research team were blind, meaning they did not know which treatment each child was receiving. This is very important because, when any treatment begins, everyone wants to see progress, and just the additional attention a child is getting may improve their progress. If everyone knows that the research is evaluating a treatment or therapy which may improve spoken language, then both the treatment group and the comparison group are likely to pay much more attention to the children's language learning. In clinical trials, patients taking a placebo, which has no known benefits (though they do not know whether they are on the placebo or the drug being tested), will still report improvements.

There have been recent examples of open (not blinded) pilot studies, where everyone knew which treatment group a person was in, that have shown positive effects; but when, at the next stage, the treatment has been subjected to a blind trial, no effects have been found. For example, the use of donepezil hydrochloride (Aricept) to improve language and cognitive outcomes in children with Down syndrome looked promising in an open trial[2], but when the drug was recently subjected to blinded clinical trials, these were terminated due to insufficient evidence of benefit (see http://www.clinicaltrials.gov/ct2/results?term=Down+syndrome).
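As an illustrative aside (not part of the original article; the effect sizes are invented), a short Python simulation shows how a treatment with no true effect can look beneficial when unblinded assessors rate the treatment group slightly more generously:

```python
import random
import statistics

# Illustrative sketch (invented effect sizes): the treatment here has NO true
# effect, yet an unblinded assessor who rates treated children a few points
# more generously makes the treatment appear to work.
random.seed(42)

def trial(assessor_bias):
    # Both groups draw outcome scores from the same distribution.
    treated = [random.gauss(50, 10) + assessor_bias for _ in range(100)]
    control = [random.gauss(50, 10) for _ in range(100)]
    return statistics.mean(treated) - statistics.mean(control)

print(f"open (unblinded) trial difference: {trial(assessor_bias=3.0):+.2f}")
print(f"blinded trial difference:          {trial(assessor_bias=0.0):+.2f}")
```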
Safety
Even if a treatment is effective and improves development or symptoms of illness, demonstrating safety is a longer-term issue. Trials for medicines in most countries go through rigorous steps. The research may start with laboratory research with animals suggesting that a treatment might work. Any such animal studies then need to be replicated to ensure the evidence is reliable. The next step may be treatment of another animal species before human trials. Once we get to human trials, there are typically four required phases: three before the drug is licensed for use and one after. Phase 1 is a trial with a small number of volunteers (around 30) to test for safe dosages and any immediate side-effects. Phase 2 assesses effectiveness, safety and what dose to give for effect, usually in a trial with up to 200 people comparing the new treatment with an alternative treatment or a placebo. Phase 3 tests for effectiveness on a larger number (1,000s) to get information on more unusual side-effects. If these phases produce evidence of effectiveness and safety, a licence may be granted for general use. In Phase 4, the initial use of the drug in clinical practice is monitored and evaluated, and data is collected systematically on side-effects.
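For quick reference, the phase structure described above can be restated as a small data structure; this Python sketch simply captures the approximate figures from the text:

```python
# Restating the trial phases described above as data; participant counts
# are the approximate figures given in the text.
PHASES = [
    {"phase": 1, "participants": "~30 volunteers",
     "purpose": "safe dosages and immediate side-effects"},
    {"phase": 2, "participants": "up to 200",
     "purpose": "effectiveness, safety and dosing vs. alternative or placebo"},
    {"phase": 3, "participants": "1000s",
     "purpose": "effectiveness at scale; rarer side-effects"},
    {"phase": 4, "participants": "general clinical use",
     "purpose": "post-licensing monitoring of side-effects"},
]

for p in PHASES:
    print(f"Phase {p['phase']} ({p['participants']}): {p['purpose']}")
```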
These clinical trials take time, and this can be very frustrating for those who might benefit if the treatment is effective. This is especially the case when the very first animal studies are reported in newspapers with headlines that imply cures or breakthroughs, as is often the case. In a careful review, Kathleen Gardiner describes the progress that has been made in research laboratories using the mouse models bred for Trisomy 21 research. Several of the studies she describes have been hailed, in the press and even at conferences, as having potentially dramatic effects on memory function and learning. However, most have not yet even been replicated in further mouse model studies, and replication in animal studies is the necessary next step. Some of the effective treatments may not have safe equivalents for human use; if these can be found, then clinical trials need to begin. Treatments that seem to work therapeutically in mice may not have the same effects in humans. For these reasons, a group of scientists (researchers, practitioners and clinicians) and Down syndrome associations recently worked together to issue a caution about the protocol being recommended by the Changing Minds Foundation.
other slow learners and children with ADHD. In order to make progress with good research, adequate funding is essential.

2. Smaller, less well-controlled studies may provide some degree of confidence in a method and are better than no evidence. We have done some of these in the past, for example with memory training, and shown positive effects, but much more needs to be done and work in this area is rarely replicated.

3. Where no evaluation studies exist, my approach is to look at what we know about how all children learn (for example, how they learn to read) from the scientific literature, and at what current best practice advice seems to be based on that work. In other words, does a new teaching approach being recommended make sense given what we already know about how children learn? Is it based on sound hypotheses?

4. For children with Down syndrome, we would then consider what we know about their learning strengths and weaknesses from research before suggesting how we might adapt the way we would teach typical children, to make the learning more effective for children with Down syndrome. For example, as we know they tend to have a verbal working memory weakness, we adapt by using all the visual cues and supports for memory that we can think of.
If a therapy or teaching approach does not meet the above criteria, then we may be moving into the realms of quackery. My approach is supported by Stephen Barrett's definition of quackery on the Quackwatch site:
"All things considered, I find it most useful to define quackery as the promotion of unsubstantiated methods that lack a scientifically plausible rationale. Promotion usually involves a profit motive. Unsubstantiated means either unproven or disproven. Implausible means that it either clashes with well-established facts or makes so little sense that it is not worth testing". http://www.quackwatch.org/01QuackeryRelatedTopics/quackdef.html
Stephen notes the profit motive, and when talking with parents I point out that if they are paying for treatments that have not been objectively evaluated, they should ask why not. Proof that a treatment works would lead to increased profits, so why are the promoters not conducting rigorous evaluations? Unfortunately, it is not easy to change practices; even when no evidence exists to support them, unproven treatments often still flourish. They may even be taught in professional training programmes.
References
1. Ellis JM, Tan HK, Gilbert RE, Muller DPR, Henley W, Moy R, Pumphrey R, Ani C, Davies S, Edwards V, Green H, Salt A, Logan S. Supplementation with antioxidants and folinic acid for children with Down's syndrome: randomised controlled trial. British Medical Journal. 2008;336:594-597. doi:10.1136/bmj.39465.544028

2. Heller JH, Spiridigliozzi GA, Doraiswamy PM, Sullivan JA, Crissman BG, Kishnani PS. Donepezil effects on language in children with Down syndrome: results of the first 22-week pilot clinical trial. American Journal of Medical Genetics Part A. 2004;130A:325-326.

3. Prussing E, Sobo EJ, Walker E, Kurtin PS. Between desperation and disability rights: a narrative analysis of complementary/alternative medicine use by parents for children with Down syndrome. Social Science and Medicine. 2005;60:587-598.

http://allthingsanalytics.com/2011/12/06/prove-it-the-importance-of-evidence/
Influence algorithms assign values that could potentially have a significant impact on someone's reputation (if we let that happen, which I sincerely hope we don't). If you were to ask them to
show the evidence backing up their opinion, you would likely be given some mumbo jumbo about algorithms, reach, amplification, blah blah blah, followed by an argument about patent protection. Sounds a bit like the incomprehensible financial instruments that got us into our current economic crisis :-( Recommender systems, such as expertise location, similarly apply analytics to large volumes of content in order to identify the person best suited to help you. Again, what evidence do they present to help you understand whether their hypothesis is correct or not? Q&A systems are also increasingly using analytics to derive an answer from large volumes of content. In this case we are frequently not going to the original source(s) but just taking the answer at face value. The same goes for the new generation of decision support systems, driven by analytics of large volumes of data, which advise you on how best to treat a patient, handle a client problem, or improve brand sentiment.
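To make the point concrete, here is a minimal Python sketch (my own illustration, not from the post; every name, document, and scoring rule in it is invented) of an expertise recommender that ships its supporting evidence along with each recommendation, rather than just an opaque score:

```python
from dataclasses import dataclass, field

# Minimal sketch of an expertise recommender that returns its supporting
# evidence alongside the score. All names, documents, and the scoring rule
# below are invented for illustration.

@dataclass
class Recommendation:
    person: str
    score: float
    evidence: list = field(default_factory=list)  # documents that drove the score

def recommend_expert(query_terms, documents):
    """Score each author by how many query terms their documents match,
    keeping the matching documents as inspectable evidence."""
    results = {}
    for doc in documents:
        matched = [t for t in query_terms if t in doc["text"].lower()]
        if matched:
            rec = results.setdefault(doc["author"],
                                     Recommendation(doc["author"], 0.0))
            rec.score += len(matched)
            rec.evidence.append({"title": doc["title"], "matched_terms": matched})
    return sorted(results.values(), key=lambda r: r.score, reverse=True)

docs = [
    {"author": "Ana", "title": "Tuning Hadoop jobs", "text": "hadoop performance tuning"},
    {"author": "Ana", "title": "Hadoop at scale", "text": "scaling hadoop clusters"},
    {"author": "Bo", "title": "Team offsite notes", "text": "agenda and travel"},
]

for rec in recommend_expert(["hadoop"], docs):
    print(rec.person, rec.score, rec.evidence)  # the "why" ships with the "who"
```

The scoring here is deliberately trivial; the design point is that the evidence list travels with the result, so a consumer can inspect why a person was recommended instead of taking the number on faith.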
Now, I am an analytics person, so I am clearly NOT suggesting that we kill all analytics. I luv the stuff. BUT what I am suggesting is that we need to ensure that there is a greater level of transparency around the algorithms and that the underlying evidence is made available in a digestible way. Thankfully, most of the respected analytics vendors are taking this provision of evidence extremely seriously and are actively integrating such traceability into their applications. My colleagues in IBM Research are building this into the social analytics machinery that underpins IBM Connections; see Recommending Strangers in the Enterprise. IBM Watson is another example of a significant breakthrough in analytics which places a premium on the gathering, tracking, and presenting of evidence as part of its comprehensive analysis system. So, to wrap up this post, I would call out to my analytics brethren and ask that, as we increasingly integrate analytics into the fabric of society, we stop treating the consumers of those analytics as dummies and proactively look for ways through which we can make our algorithms transparent and share the evidence underpinning our analysis results.