Some basic aspects of statistical methods and sample size determination in health science research
Statistical power depends on the significance criterion, the sample size and the standardized average treatment effect (effect size). Practically, a power analysis rests on the relationship between these variables: the larger the effect size posited, other things (significance criterion, sample size) being equal, the greater the power.
The key considerations are: the main objective of the study; the primary outcome variable; the study design used; the sampling technique used; and the summary statistics and the type of statistical analysis (estimation or testing of hypothesis) used for the main objective of the study. There is an abundance of literature available regarding the choice and principles of sample size estimation.
However, researchers must provide the relevant information in order to arrive at the correct number, and to do that a researcher should have a complete conceptual understanding of minimum sample size determination.

Conclusion

Statistical methods are applied in almost all stages of a research study, from the planning and design stage to the final reporting of findings.
Utmost care must be taken in choosing the right statistical method at each stage of a study. Sample size estimation is very important and choosing the right formula is crucial in all types of research studies.
The selection of appropriate sample size formula depends on the primary objective of the study, the type of outcome variable, study design used, the statistical analysis planned, number of groups in the study and sampling technique to be adopted.
The number of subjects to be recruited in a study depends on the power of the study, the precision of the estimated value, level of significance or confidence level, clinically significant difference and also other constraints such as money, manpower, availability of subjects and time in particular and feasibility in general.
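The dependence of subject numbers on power, significance level and the clinically significant difference can be illustrated with the standard normal-approximation formula for comparing two means. This is a minimal sketch; the standard deviation and difference used below are illustrative values, not figures from the text.

```python
import math

def n_per_group(sd, diff, z_alpha=1.96, z_beta=0.84):
    """Approximate sample size per group for comparing two means.
    Defaults: two-sided alpha = 0.05 (z = 1.96), power = 0.80 (z = 0.84)."""
    n = 2 * (z_alpha + z_beta) ** 2 * sd ** 2 / diff ** 2
    return math.ceil(n)  # always round up to a whole subject

# Illustrative values (not from the text): sd = 10 units,
# clinically important difference = 5 units.
print(n_per_group(sd=10, diff=5))  # 63
```

Note how halving the detectable difference would quadruple the required sample size, which is why the clinically significant difference must be chosen carefully.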
There are many statistical tests, and the choice of the appropriate analysis depends on the objective of the study, the type of outcome variable, the number of variables, the sample size, the number of groups in the study, whether the groups are related or not, and the distributional assumptions.
Note that the sample size needs to be rounded up to a whole number. Sixty-eight dogs per group is a lot of dogs, and using so many animals would be time-consuming.

An alternative: in the same journal, an investigator was working with male Beagles weighing … kg. These had a mean BP of … mm Hg. Assume, as before, that a 20 mm Hg difference between groups would be of clinical importance.
The table below summarises the situation. This poses a problem. Is there ever any case for using genetically heterogeneous animals if all it does is increase noise and reduce the power of the experiment, leading to false-negative results?
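The contrast between heterogeneous and uniform animals can be sketched with a standard normal-approximation sample size formula. The standard deviations below are illustrative placeholders, since the article's figures are not reproduced in this excerpt; only the 20 mm Hg difference comes from the text.

```python
import math

def n_per_group(sd, diff):
    """Normal-approximation sample size per group
    (two-sided alpha = 0.05, power = 0.80)."""
    return math.ceil(2 * (1.96 + 0.84) ** 2 * sd ** 2 / diff ** 2)

diff = 20  # clinically important BP difference, mm Hg (from the text)
# The SDs below are illustrative placeholders, not the article's figures.
print(n_per_group(sd=42, diff=diff))  # heterogeneous dogs -> 70 per group
print(n_per_group(sd=15, diff=diff))  # uniform male Beagles -> 9 per group
```

Because the required n scales with the square of the standard deviation, even a modest reduction in between-animal variability cuts the sample size dramatically.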
Alternative approaches

It would make no sense to go ahead and do the experiment simply using the heterogeneous dogs. But there are some obvious alternatives. If each dog could be given both anaesthetics (say, in random order on different days), then it would be possible to use small numbers of even quite heterogeneous dogs, assuming that there are no important breed differences in response. Technically, this would be a randomised block design, discussed later. As far as possible there should be equal numbers in each group.
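The gain from giving each dog both anaesthetics can be sketched with a normal-approximation comparison of a parallel-group design against a within-animal (crossover) design. The standard deviation and the within-dog correlation below are assumed values for illustration only.

```python
import math

Z = (1.96 + 0.84) ** 2  # two-sided alpha = 0.05, power = 0.80

def n_unpaired(sd, diff):
    """Dogs per group for a two-group parallel design."""
    return math.ceil(2 * Z * sd ** 2 / diff ** 2)

def n_paired(sd, diff, rho):
    """Total dogs when each dog receives both anaesthetics:
    the variance of within-dog differences is 2 * sd^2 * (1 - rho)."""
    return math.ceil(Z * 2 * sd ** 2 * (1 - rho) / diff ** 2)

# Illustrative numbers: sd = 42 mm Hg between heterogeneous dogs,
# rho = 0.8 correlation between repeated measures on the same dog.
print(n_unpaired(42, 20))         # 70 dogs per group, parallel design
print(n_paired(42, 20, rho=0.8))  # 14 dogs in total, crossover design
```

The stronger the within-dog correlation, the more the between-animal heterogeneity cancels out, which is why quite variable dogs can still give a powerful experiment in a crossover layout.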
This would indicate whether the two anaesthetics differ overall and whether breed differences need to be taken into account when choosing one of them. However, a power analysis is not always straightforward: if many characters are being measured, it may not be clear which one is the most important; there may be no estimate of the standard deviation if the character has not previously been measured; in fundamental research it may be impossible to specify an effect size likely to be of scientific importance; and a power analysis is difficult with complex experiments involving many treatment groups and possible interactions.
This depends on the law of diminishing returns. It needs an estimate of E, the number of error degrees of freedom in an analysis of variance (ANOVA). There may be a case for E being higher if it leads to a more balanced design, if the likely cost of a Type II error is high, if the procedures are very mild, or if it is an in-vitro experiment with no ethical implications.
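A minimal sketch of the resource-equation calculation of E, assuming the usual randomised block ANOVA decomposition; the function name and the example numbers are illustrative, not from the text.

```python
def error_df(n_total, n_treatments, n_blocks=1):
    """Error degrees of freedom E from the resource equation for a
    (randomised block) ANOVA: E = (N - 1) - (T - 1) - (B - 1)."""
    return (n_total - 1) - (n_treatments - 1) - (n_blocks - 1)

# e.g. 16 dogs split between 2 anaesthetics, no blocking:
print(error_df(16, 2))  # 14
# e.g. 20 dogs, 2 anaesthetics, blocked into 2 breeds:
print(error_df(20, 2, n_blocks=2))  # 17
```

Once E stops growing appreciably with each added animal, further subjects contribute little to the precision of the standard deviation estimate, which is the "diminishing returns" referred to above.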
It is based on the need to obtain an adequate estimate of the standard deviation.

The most common aim is probably that of determining some difference between two groups, and it is this scenario that will be used as the basis for the remainder of the present review. However, the ideas underlying the methods described are equally applicable to all settings.

Power

The difference between two groups in a study will usually be explored in terms of an estimate of effect, an appropriate confidence interval and a P value.
The confidence interval indicates the likely range of values for the true effect in the population, while the P value indicates how likely it is that an effect as large as that observed would have arisen by chance if there were in fact no difference between the groups.
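As a concrete illustration of an estimate of effect with its confidence interval, the large-sample 95% CI for a difference in two means can be computed as below. All summary figures are hypothetical.

```python
import math

def ci_diff_means(m1, s1, n1, m2, s2, n2, z=1.96):
    """Approximate 95% CI for a difference in means (large samples)."""
    d = m1 - m2
    se = math.sqrt(s1 ** 2 / n1 + s2 ** 2 / n2)
    return d - z * se, d + z * se

# Hypothetical summary data: group 1 mean 120 (SD 15, n 50),
# group 2 mean 112 (SD 15, n 50).
lo, hi = ci_diff_means(120, 15, 50, 112, 15, 50)
print(round(lo, 1), round(hi, 1))  # 2.1 13.9
```

Here the interval excludes zero, so the data are consistent with a genuine difference; a wider interval from a smaller study might well straddle zero even if the true effect were the same.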
A related quantity is the statistical power of the study. Put simply, this is the probability of correctly identifying a difference between the two groups in the study sample when one genuinely exists in the populations from which the samples were drawn.
The ideal study for the researcher is one in which the power is high. This means that the study has a high chance of detecting a difference between groups if one exists; consequently, if the study demonstrates no difference between groups the researcher can be reasonably confident in concluding that none exists in reality.
The power of a study depends on several factors (see below), but as a general rule higher power is achieved by increasing the sample size. It is important to be aware of this because all too often studies are reported that are simply too small to have adequate power to detect the hypothesized effect. In other words, even when a difference exists in reality, too few study subjects may have been recruited. The result is that P values are higher and confidence intervals wider than would be the case in a larger study, and the erroneous conclusion may be drawn that there is no difference between the groups.
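The relationship between sample size and power described above can be sketched with a normal-approximation power calculation for a two-sample comparison of means (`NormalDist` is from Python's standard library; the SD and true difference are illustrative values).

```python
from math import sqrt
from statistics import NormalDist

def power_two_means(sd, diff, n_per_group, alpha_z=1.96):
    """Approximate power of a two-sided two-sample z-test."""
    se = sd * sqrt(2 / n_per_group)
    return NormalDist().cdf(diff / se - alpha_z)

# Illustrative: sd = 10, true difference = 5; power rises with n.
for n in (10, 30, 63):
    print(n, round(power_two_means(10, 5, n), 2))
```

With only 10 subjects per group the chance of detecting this difference is around one in five, which illustrates how an underpowered study can wrongly conclude that no difference exists.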