Nominal - categories, e.g. male vs. female; frequencies, percentages (non-parametric)
Ordinal - e.g. Likert scale / first, second, third (non-parametric)
Interval - discrete or continuous, parametric (e.g. temperature)
Ratio - like interval data, but the zero point reflects absence of the characteristic
Discrete - e.g. adult / non-adult
Continuous - e.g. angry to super angry
Test Statistic = Systematic Variance / Unsystematic Variance
We are comparing the amount of variance created by an experimental effect against the amount of variance due to random factors (such as differences in motivation or intelligence).
t-value
What is the probability that our samples are from the same population? You basically compare the means of two or more samples.
Its denominator is a measure of unsystematic variance, i.e. variance not caused by the experiment.
r-value (Effect Size)
It is simply an objective and standardized measure of the magnitude of the observed effect.
Pearson Correlation Coefficient
r = .1 (weak effect) 1% of variance between variables is explained
r = .3 (medium effect). 9% of variance between variables is explained
r = .5 (strong effect). 25% of variance between variables is explained
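A minimal sketch of computing r (and the variance explained, r²) with scipy; the data points below are made up for illustration:

```python
import numpy as np
from scipy import stats

x = np.array([2.0, 4.0, 5.0, 7.0, 9.0])   # hypothetical predictor scores
y = np.array([1.5, 3.8, 4.2, 6.9, 8.1])   # hypothetical outcome scores

r, p = stats.pearsonr(x, y)
print(f"r = {r:.2f}, r^2 = {r**2:.2f} (share of variance explained), p = {p:.3f}")
```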
p-value
Significance - the chance of error (being wrong); in other words, the chance of a finding being due to error.
The chance of rejecting the null hypothesis when it is actually true.
In business this is the accepted threshold:
p < .05
z-value
Z-values are standard scores. A z-value states the position of a raw score in relation to the mean of the distribution, using the standard deviation as the unit of measurement.
z = (raw score - mean) / standard deviation
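A minimal sketch of standardizing a set of hypothetical raw scores with numpy:

```python
import numpy as np

scores = np.array([12, 15, 18, 20, 25], dtype=float)  # hypothetical raw scores
z = (scores - scores.mean()) / scores.std(ddof=1)     # (raw score - mean) / SD
print(z)  # each score's position relative to the mean, in SD units
```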
Standard Error
The standard deviation (or variability) of sample means. The higher the SE, the more the sample means differ from each other.
The lower it is, the more accurately the sample mean reflects the entire population.
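A minimal sketch, assuming the usual single-sample estimate SE = s / sqrt(n); the sample values are made up:

```python
import numpy as np

sample = np.array([5.1, 4.8, 6.0, 5.5, 4.9, 5.3])  # hypothetical sample
se = sample.std(ddof=1) / np.sqrt(len(sample))     # SE = s / sqrt(n)
print(se)
```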
Mean: sum / n
Median: the middle value of the ordered sample
Mode: the most frequently occurring value
Standard Deviation
Average distance of the values from the mean
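A minimal sketch of these four descriptive statistics using Python's statistics module on a made-up sample:

```python
import statistics as st

data = [2, 3, 3, 5, 7, 10]  # hypothetical sample
print(st.mean(data))    # sum / n -> 5.0
print(st.median(data))  # middle value of the ordered sample -> 4.0
print(st.mode(data))    # most frequently occurring value -> 3
print(st.stdev(data))   # sample standard deviation (spread around the mean)
```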
Variance Extracted
Summary measure of convergence among a set of items representing a latent construct.
It is the average % of variation explained among items
Type 1 Error (False Positive)
Accepting effects that are in reality untrue
Type 2 Error (False Negative)
Rejecting effects that are in reality true
Construct Validity (relationship between the measurement instrument and the construct)
Discriminant, convergent, and nomological validity
Discriminant Validity
E.g. how well do the items of the innovation construct differentiate from the items of another construct, such as strategy?
Convergent Validity
How well do the items for the innovation construct converge?
If they do not converge, they are likely not measuring the same phenomenon.
- Cronbach Alpha, cut-off value > .70 (see the sketch after this list)
- Composite reliability, cut-off value > .60
- AVE Average variance extracted, cut-off value AVE > .50
(AVE = average squared factor loading)
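A minimal sketch of Cronbach's alpha from its textbook formula, alpha = k/(k-1) * (1 - sum of item variances / variance of the total score); the item scores are made up:

```python
import numpy as np

# Hypothetical item scores: rows = respondents, columns = items of one construct
items = np.array([
    [4, 5, 4],
    [3, 4, 3],
    [5, 5, 4],
    [2, 3, 2],
    [4, 4, 5],
], dtype=float)

k = items.shape[1]                              # number of items
sum_item_var = items.var(axis=0, ddof=1).sum()  # sum of item variances
total_var = items.sum(axis=1).var(ddof=1)       # variance of the summed scale
alpha = (k / (k - 1)) * (1 - sum_item_var / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")        # compare against the .70 cut-off
```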
Indicator reliability / validity
- significant factor loadings of items >.70, t-values > 1.645
Multicollinearity
Phenomenon in which two or more predictor variables in a multiple regression model are highly correlated, meaning that one can be linearly predicted from the others with a substantial degree of accuracy.
Detection:
Variance inflation factors (VIF) measure how much the variance of the estimated regression coefficients is inflated compared to when the predictor variables are not linearly related.
VIF is used to describe how much multicollinearity (correlation between predictors) exists in a regression analysis. Multicollinearity is problematic because it can increase the variance of the regression coefficients, making them unstable and difficult to interpret.
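A minimal sketch of a VIF check using statsmodels' variance_inflation_factor; the predictors are simulated so that x1 and x2 are nearly collinear:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(0)
x1 = rng.normal(size=100)
x2 = 0.9 * x1 + rng.normal(scale=0.1, size=100)  # nearly collinear with x1
x3 = rng.normal(size=100)
X = sm.add_constant(pd.DataFrame({"x1": x1, "x2": x2, "x3": x3}))

for i, name in enumerate(X.columns):
    if name == "const":
        continue
    print(name, variance_inflation_factor(X.values, i))  # VIF > 10 is a common red flag
```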
Parametric Tests
Kolmogorov-Smirnov Test
If p > .05, the distribution is probably normal.
Levene Test
Tests the hypothesis that the variances of two samples are equal.
If p > .05, the variances are more or less equal.
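A minimal sketch of both checks with scipy.stats (kstest and levene) on simulated samples:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
a = rng.normal(loc=5.0, scale=1.0, size=50)  # hypothetical sample 1
b = rng.normal(loc=5.5, scale=1.2, size=50)  # hypothetical sample 2

# Kolmogorov-Smirnov test against the standard normal (standardize first)
ks_stat, ks_p = stats.kstest((a - a.mean()) / a.std(ddof=1), "norm")
print("KS p =", ks_p)       # p > .05 -> distribution is probably normal

# Levene test for equality of variances across the two samples
lev_stat, lev_p = stats.levene(a, b)
print("Levene p =", lev_p)  # p > .05 -> variances are more or less equal
```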
ANOVA
Main Effect
A “main effect” is the effect of one of your independent variables on the dependent variable, ignoring the effects of all other independent variables
Interaction Effect
A statistical interaction occurs when the effect of one independent variable on the dependent variable changes depending on the level of another independent variable
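A minimal sketch of a two-way ANOVA with statsmodels' formula API, showing both main effects and the interaction; the data frame is simulated with a built-in treatment effect:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(2)
df = pd.DataFrame({
    "treatment": np.repeat(["A", "B"], 40),
    "gender": np.tile(np.repeat(["m", "f"], 20), 2),
})
# Hypothetical outcome: a main effect of treatment plus random noise
df["score"] = rng.normal(size=80) + (df["treatment"] == "B") * 1.0

model = ols("score ~ C(treatment) * C(gender)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # rows: each main effect and the interaction
```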
Independent T-Test of two samples
Taken from https://statistics.laerd.com/statistical-guides/independent-t-test-statistical-guide.php
Introduction
The independent t-test, also called the two-sample t-test, independent-samples t-test, or Student's t-test, is an inferential statistical test that determines whether there is a statistically significant difference between the means of two unrelated groups.
Null and alternative hypotheses for the independent t-test
The null hypothesis for the independent t-test is that the population means from the two unrelated groups are equal:
H0: µ1 = µ2
In most cases, we are looking to see if we can show that we can reject the null hypothesis and accept the alternative hypothesis, which is that the population means are not equal:
HA: µ1 ≠ µ2
To do this, we need to set a significance level (also called alpha) that allows us to either reject or fail to reject the null hypothesis. Most commonly, this value is set at 0.05.
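A minimal sketch of this test with scipy.stats.ttest_ind on two made-up groups:

```python
import numpy as np
from scipy import stats

group1 = np.array([5.1, 4.9, 6.2, 5.8, 5.5])  # hypothetical group 1
group2 = np.array([4.2, 4.0, 4.8, 4.4, 4.6])  # hypothetical group 2

t, p = stats.ttest_ind(group1, group2)  # assumes equal variances (cf. Levene test)
print(f"t = {t:.2f}, p = {p:.3f}")      # p < .05 -> reject H0: µ1 = µ2
```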
The concept of falsification is based on Popper's falsification theory: you cannot know scientific laws with absolute certainty; you can only falsify them --> hence the null hypothesis.