What to do if we find non-significant results (some realities and facts):
What is non-significant?
A non-significant result shows that the effects or performance of the treatments are statistically indistinguishable. That is in itself useful information for the scientific and policy-planning community. Other aspects may also be discussed for comparison, such as resources: operational requirements, cost, manpower, time, etc.
Ø In any research, finding no statistically significant result in a particular comparison is as important as detecting a significant difference, and it should therefore be discussed. For example, a non-significant difference can be practical and useful in its application to society: it may mean that you can recommend a simpler and more economical practice based on the results of your research.
Ø A statistically non-significant result (for a relationship, a difference in means, or a difference in proportions) is one of the two possible outcomes of any study. Hence, if prior studies were significant and yours is not, your study refutes or does not concur with them, and this will call for newer studies.
The layman's meaning of "not statistically significant" is that the strength of the relationship, or the magnitude of the difference, observed in your SAMPLE would more likely NOT BE OBSERVED in the POPULATION your sample purports to represent. Therefore, if "difference" is the indicator of effectiveness, this means that while your intervention MAY BE EFFECTIVE in your SAMPLE, such an intervention WOULD MORE LIKELY NOT BE EFFECTIVE in your POPULATION. The difference observed in the sample cannot be made the basis for an inference of effectiveness about the population.
Ø These statistical tests were originally designed for normally distributed experimental measurements and reasonably large samples. Realistically, you cannot reach the desired sample size with many experimental approaches, and if you cannot fulfil this requirement, what will the test result tell you? There were good contributions on statistics for the life sciences in a recent Nature Methods issue.
You should take into account the possible sources of uncertainty in your experimental approach, and whether it is realistic to resolve small differences (e.g. between two differently treated types of cells) when many parameters cannot be 100% controlled.
Do not try to over-interpret your data, and do not speak about differences that are not clearly shown by the data. Do you have the right controls?
Be honest, even if this is hard. I know how much work went into your Ph.D., but it will not add any knowledge if you speak about something you cannot support with a sound data basis. What would be the consequence if the differences you talk about were only due to random events? There are many published papers with data that nobody was able to reproduce.
I share Marina's and Millie's opinion that you should first check whether your number of independent experiments is high enough. Afterwards, you may consider whether further experiments could support your hypothesis. And you should try to discuss possible sources of error.
Ø The significance of any comparison depends on two things: a) the magnitude of the difference between the means, and b) the standard error of the difference. A difference may be small, but if the standard error is also small, you will most likely have to declare the difference significant. On the other hand, a difference may be large but come with a large standard error, which would most likely lead you to report it as non-significant.
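The interplay between magnitude and standard error can be sketched with a simple two-sided z-test (an illustrative normal approximation rather than an exact t-test; the function name and numbers are made up for this example):

```python
from statistics import NormalDist

def z_test(diff, se, alpha=0.05):
    """Two-sided z-test on a difference, given its standard error."""
    z = diff / se
    p = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided p-value
    return z, p, p < alpha

# Small difference, small standard error: significant.
print(z_test(0.5, 0.1))
# Large difference, large standard error: not significant.
print(z_test(5.0, 4.0))
```

The first call gives z = 5.0 (significant); the second gives z = 1.25 with p ≈ 0.21 (not significant), even though the raw difference is ten times larger.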
This is a problem of sample size and of using a good technique to reduce experimental error. Two means may differ by a small amount that is clearly different from zero mathematically, but statistically it is equal to zero.
If you have realistic estimates of the variance and of the magnitude of difference that matters to you, then by setting the type I and type II error rates at specific levels, you can plan your experiment so that such a difference, if present, can be declared significant. But you may still face the problem that, for reasons of cost and other limitations of the experiment, you cannot meet that sample size.
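A minimal sketch of that planning step, using the standard normal-approximation formula for comparing two means, n ≈ 2·σ²·(z₁₋α/₂ + z₁₋β)²/δ² per group (function name and numbers are illustrative):

```python
from math import ceil
from statistics import NormalDist

def n_per_group(delta, sigma, alpha=0.05, power=0.80):
    """Approximate sample size per group to detect a mean difference
    of delta, given SD sigma, with a two-sided test (normal approx.)."""
    z = NormalDist().inv_cdf
    z_a = z(1 - alpha / 2)  # controls the type I error
    z_b = z(power)          # power = 1 - type II error
    return ceil(2 * (sigma * (z_a + z_b) / delta) ** 2)

# To detect a difference of 5 units when the SD is about 10:
print(n_per_group(delta=5, sigma=10))  # about 63 per group
```

Halving the detectable difference quadruples the required sample size, which is often exactly the cost problem described above.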
Ø If your results were higher but not significantly different, you can express them as percentages: the control group is assigned the maximum, i.e. 100%, and the other value is expressed relative to it with a simple rule: (C - T)/C * 100. This lets you compare the rate of change of a parameter against the control.
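That rule is one line of code (the function name and the sample values are made up for illustration):

```python
def percent_change_vs_control(control, treatment):
    """Change relative to the control, as a percentage: (C - T)/C * 100."""
    return (control - treatment) / control * 100

# A treatment value of 60 against a control of 80 is a 25% reduction.
print(percent_change_vs_control(80.0, 60.0))  # 25.0
```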
Ø We can talk about a trend, although trends without significance are often a source of criticism; non-significance means no detectable difference at a given significance level, and that is what statistics is for.
You can say that these are preliminary results and that you are working toward a higher "n".
However, which test are you using? What is the "n" in these results? Sometimes a crude comparison of means (e.g. by t-test) does not represent the changes in the distribution of your data. For other types of comparison, it is important that the n values are similar across samples. Perhaps you can try a non-parametric test such as the Kolmogorov-Smirnov (K-S) test.
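The two-sample K-S statistic is just the largest gap between the two empirical CDFs; here is a minimal standard-library sketch (the sample values are made up, and the 5% critical value uses the common large-sample approximation 1.358·√((n+m)/(n·m))):

```python
from math import sqrt

def ks_two_sample(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the largest
    vertical gap between the two empirical CDFs."""
    a, b = sorted(a), sorted(b)
    d = 0.0
    for x in sorted(set(a + b)):
        f_a = sum(v <= x for v in a) / len(a)  # empirical CDF of a at x
        f_b = sum(v <= x for v in b) / len(b)  # empirical CDF of b at x
        d = max(d, abs(f_a - f_b))
    return d

control = [1.1, 1.9, 2.4, 2.8, 3.3, 3.9]
treated = [2.0, 2.6, 3.1, 3.7, 4.2, 4.8]
d = ks_two_sample(control, treated)
crit = 1.358 * sqrt((len(control) + len(treated)) /
                    (len(control) * len(treated)))
print(d, crit, d > crit)  # D ≈ 0.33 < crit ≈ 0.78: not significant
```

With n = 6 per group the shift is visible but D stays below the critical value, which again illustrates how small samples can leave a real-looking difference non-significant. For routine work, a tested library implementation such as `scipy.stats.ks_2samp` is preferable to hand-rolled code.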