2. M&A lecture 2

winniesmith2's version from 2018-02-14 11:43

Section 1

Question Answer
Is there such a thing as a perfect measurement? No, a certain degree of error exists in all measurement scenarios.
The aim is for tests to provide data accurate within acceptable limits of measurement error.
Without validity or reliability there can be little faith in the accuracy of the data collected: the data are of dubious value, and any decisions made on the basis of the data are highly questionable.
What is validity The degree of truthfulness of a test score (does the test score accurately measure what it purports to measure?)
What is reliability The consistency or repeatability of an observation
What is accuracy Conforming to fact; the ability of a measurement to match the actual (true) value of the quantity being measured
When the attribute of interest is difficult to measure, we may accept measurements with low but acceptable validity
When an attribute is easier to measure, a higher degree of validity should be expected

Section 2

Question Answer
What is content validity The simplest evidence of validity, also termed face validity or logical validity. Depends on professional judgement using logic and comparison: does the test measure what it is designed to measure?
What is criterion validity (gold standard) Based on having a ‘true’ criterion measure available
What is a criterion The measure perceived as the most truthful measure of what you are attempting to measure; a measure you have a high level of confidence in, in terms of its accuracy
Accuracy of pedometers studies on slides 15-21
Criterion validity can be categorised as concurrent validity and predictive validity
Concurrent validity involves ('here and now') the same individuals completing both the new test and an established (criterion) test at the same time; the results of the two tests are then correlated. The higher the correlation, the stronger the evidence for the validity of the new measure.
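The correlation step above can be sketched in plain Python. This is an illustrative example only; the participant data and variable names are invented, not taken from the lecture.

```python
# Hypothetical concurrent-validity check: correlate a new test's output
# with a criterion measure taken at the same time (Pearson's r).
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Invented data: each participant measured by both tests simultaneously.
new_test  = [102, 98, 110, 95, 120, 101]
criterion = [100, 97, 112, 96, 118, 103]

r = pearson_r(new_test, criterion)
print(round(r, 3))  # close to 1 -> strong evidence of concurrent validity
```

An r near 1 supports the new measure; an r near 0 suggests it does not track the criterion.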
Convergent validity is the extent to which the output of an instrument is associated with that of another instrument intended to measure the same exposure of interest. Often used in the absence of a criterion measure.
Validity of two self-report measures of sitting time pages 26-30
Construct validity refers to the degree to which results of a test accurately measure an attribute or trait that cannot be directly measured (e.g. anxiety). It can also refer to relationships between a particular measure (e.g. physical activity level) and theoretical concepts (constructs) relating to that measure (e.g. BMI: less active people tend to have higher BMIs).

Section 3

Question Answer
What is reliability “the consistency or repeatability of an observation; the degree to which repeated measurements of the same trait are reproducible under the same conditions.”
What 3 things must reliability have Consistency, dependability, stability.
Factors affecting reliability Individual variability, instrument inaccuracy, choice of instrument, cheating, testing conditions (e.g. time of day), the way the measurements are taken, experiences before the measurements are taken
Score/output is obtained by the sum of the 'true score' and the error score.
The true score is without error and 100% reliable; however, the true score is usually impossible to measure.
The error score results from anything that causes your observed score to be different to the true score.
Reliability is usually scored on a scale of 0 to 1. If the observed score is made up of only true-score variation, reliability is 1; if the observed score contains no true-score variation (only error), reliability is 0.
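The observed = true + error model above can be illustrated with a small simulation. This is a hedged sketch: the population parameters (means and spreads) are invented for illustration, and reliability is taken as the share of observed-score variance that comes from true-score variance.

```python
# Sketch of the observed = true + error model: reliability is the proportion
# of observed-score variance attributable to true-score variance.
import random

random.seed(1)

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# Simulate 1000 participants: a stable true score plus random measurement error.
true_scores = [random.gauss(50, 10) for _ in range(1000)]
errors      = [random.gauss(0, 5) for _ in range(1000)]
observed    = [t + e for t, e in zip(true_scores, errors)]

reliability = variance(true_scores) / variance(observed)
print(round(reliability, 2))  # ~0.8 in expectation: mostly true-score variance
```

Shrinking the error spread pushes the ratio towards 1; inflating it pushes the ratio towards 0, matching the two extremes described above.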
2 broad types of reliability coefficients: • Interclass reliability – based on 2 trials • Intraclass reliability – based on more than 2 trials (internal consistency)

Section 4

Question Answer
What are the 3 interclass reliability methods, based on the assumption that there are no significant differences between trials: Test-retest reliability, equivalence reliability, split-halves reliability.
What is test-retest reliability The simplest way of determining whether a test is reliable. The test is administered to the same group of participants on two separate occasions, and a correlation coefficient is calculated between the two sets of observations
What is equivalence reliability The same group of participants take two different tests designed to measure the same construct, correlation coefficients calculated between the 2 test scores
What is split-halves reliability Calculation of the correlation coefficient between two halves of a test
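The split-halves calculation can be sketched as follows. Note one assumption beyond the notes: after correlating the two halves, the Spearman-Brown correction is commonly applied to estimate reliability for the full-length test; the item data here are invented.

```python
# Illustrative split-halves sketch: correlate odd- and even-numbered item
# scores per participant, then apply the Spearman-Brown correction
# (a conventional extra step, assumed here rather than stated in the notes).
import math

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Rows = participants, columns = item scores on an invented 10-item test.
items = [
    [4, 5, 4, 5, 3, 4, 5, 4, 4, 5],
    [2, 3, 2, 2, 3, 2, 2, 3, 2, 2],
    [5, 4, 5, 5, 4, 5, 4, 5, 5, 4],
    [3, 3, 4, 3, 3, 3, 4, 3, 3, 3],
    [1, 2, 1, 2, 1, 1, 2, 1, 2, 1],
]
odd_half  = [sum(row[0::2]) for row in items]   # items 1, 3, 5, ...
even_half = [sum(row[1::2]) for row in items]   # items 2, 4, 6, ...

r_half = pearson_r(odd_half, even_half)
r_full = 2 * r_half / (1 + r_half)  # Spearman-Brown prophecy formula
print(round(r_half, 2), round(r_full, 2))
```

The corrected value is always at least as large as the half-test correlation, because a longer test is more reliable than either half alone.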
Interclass reliability allows you to calculate the reliability coefficient between two sets of measures/scores.
Intraclass reliability allows you to calculate the consistency between measurements or scores taken on more than 2 occasions
Intraclass reliability models include Cronbach’s alpha and ANOVA reliabilities. The total variance in scores is partitioned into 3 sources: people (observed score variance between participants), trials (variance across trials) and people-by-trials (the interaction: participants do not all change in the same way across trials).
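Cronbach's alpha, one of the intraclass coefficients named above, can be computed from item variances and the variance of participants' totals. A minimal sketch with invented scores:

```python
# Minimal Cronbach's alpha sketch:
# alpha = k/(k-1) * (1 - sum(item variances) / variance of total scores).
def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)  # sample variance

# Rows = participants, columns = trials/items (invented data).
scores = [
    [4, 5, 4],
    [2, 3, 2],
    [5, 4, 5],
    [3, 3, 4],
    [1, 2, 1],
]
k = len(scores[0])                                  # number of trials/items
item_vars = [variance([row[i] for row in scores]) for i in range(k)]
total_var = variance([sum(row) for row in scores])

alpha = (k / (k - 1)) * (1 - sum(item_vars) / total_var)
print(round(alpha, 2))  # high alpha -> scores are consistent across trials
```

When participants' rank order is stable across trials (low people-by-trials interaction), total-score variance dominates the item variances and alpha approaches 1.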
Variance is a measure of spread: the average of the squared deviations of scores from their mean.
Standard deviation is a quantity expressing by how much the members of a group differ from the mean value for the group; it is the square root of the variance.

Section 5

Question Answer
What is objectivity The type of reliability that concerns the administration of tests; the third criterion for a good test (along with reliability and validity). The behaviour of the administrator can affect the test results: if the same test is administered to the same participants by different administrators, variation in test scores could be attributable to the test administrators
Inter-rater reliability is sometimes referred to as the objectivity coefficient
Critical review (methods section): analyse whether the measures used are objective, valid and reliable, and pick these points out. The paper will not be perfect (otherwise she wouldn't have chosen it), so find the flaws in the paper using these methods.