52% versus 55%, and 61% versus 68% in public places. Not a lot of difference; it could even be within the margin of error.
The sample size was in the tens of thousands (about 39,000 total cases, according to the original EUSEM article), so it would be extremely surprising if there were no real difference. You could plausibly say it’s within the margin of error if only a few hundred cases had been examined, but we’re talking about tens of thousands here.
Important to note, though, that the data only covered Canada and the US.
Another important caveat is that we’re assuming the data collection process was not flawed or biased, which may be a legitimate concern. But that’s a separate issue entirely.
Having a larger sample size doesn’t necessarily decrease the margin of error. It’s impossible to say whether the difference is statistically significant without actually crunching the numbers.
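To actually crunch those numbers, here’s a minimal two-proportion z-test sketch in Python. The even 19,500/19,500 split between men and women is my assumption purely for illustration; only the ~39K total and the 68%/61% public-place rates come from the thread above.

```python
# Minimal two-proportion z-test sketch. The even split is an assumption
# for illustration; only ~39K total cases and the 68%/61% rates are
# taken from the discussion above.
from math import sqrt, erf

n_men, n_women = 19_500, 19_500   # assumed even split of ~39K cases
p_men, p_women = 0.68, 0.61       # public-place bystander CPR rates

# Pooled proportion under the null hypothesis of no true difference
pooled = (p_men * n_men + p_women * n_women) / (n_men + n_women)
se = sqrt(pooled * (1 - pooled) * (1 / n_men + 1 / n_women))

z = (p_men - p_women) / se
p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-tailed

print(f"z = {z:.1f}, two-tailed p ~ {p_value:.3g}")
# z comes out around 14, so p is effectively zero (below float
# precision): far outside any plausible sampling margin of error.
```

Under those assumptions, the gap can’t be sampling noise; whether the sampling itself was sound is the separate question raised above.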
Meh… Even without seeing the data collection methodology or the analysis, I’m calling shenanigans. That’s an almost nonexistent difference. How do we know whether the cases where women didn’t get support were primarily in women-only spaces (say, a women’s gym, a yoga class, etc.)?
Someone’s using this slight difference to push a narrative.
What do you mean by “margin of error”?
https://en.m.wikipedia.org/wiki/Margin_of_error
This isn’t a poll. These aren’t self-reported numbers. Those are real-life numbers.
It is still a sample, and therefore subject to a margin of error. Unless you think this data accounts for all CPR given anywhere, to anyone, ever.
For example, if they’d sampled only one man and one woman, and the man had received CPR while the woman had not, the “study” would show that 100% of men and 0% of women receive CPR. Staggering “real-life numbers”!
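To put rough numbers on that intuition, here’s a small sketch of how the 95% margin of error on an observed proportion shrinks with sample size, using the standard normal approximation. The 65% rate and the group sizes are illustrative values, not from the study.

```python
# Sketch: 95% margin of error on an observed proportion vs. sample size,
# using the usual normal approximation 1.96 * sqrt(p * (1 - p) / n).
# The rate and the sample sizes below are made up for illustration.
from math import sqrt

p = 0.65  # an observed CPR rate for one group (illustrative value)

for n in (1, 100, 10_000, 20_000):
    moe = 1.96 * sqrt(p * (1 - p) / n)  # half-width of the 95% CI
    print(f"n = {n:>6}: {p:.0%} +/- {moe:.1%}")

# n = 1 gives +/- ~93 points (the approximation is meaningless there,
# which is the 100%-vs-0% absurdity above); n = 20,000 gives ~0.7 points.
```

So at tens of thousands of cases, sampling error alone is far too small to explain a several-point gap; any remaining error would have to come from how the cases were collected.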
All of science is just a sample. Population trends can be observed in smaller subsets.
I’m aware. My point is that “real-life numbers” still have margins of error. The person to whom I’m responding implied that “real-life numbers” aren’t subject to a margin of error.
Pretty much all data has margins of error, including “real-life data”. The margin of error just often doesn’t matter.
But is it a poll?
It doesn’t matter; a margin of error exists regardless of the data source.
To add to your point with a very clear example: if I did a study of the average age of people in a country but mainly checked the ages of people living in retirement homes, the error in my estimate would be huge even if I collected ages from hundreds of thousands of people.
In more general terms: there can be systematic errors due to methodology that no increase in the number of samples will remove.
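A quick simulation of that retirement-home scenario makes the point concrete; every number here is invented for illustration.

```python
# Simulated version of the retirement-home example. All numbers are
# invented: a population with true mean age ~45, sampled with a heavy
# bias toward the 70+ group.
import random

random.seed(0)
population = [random.uniform(0, 90) for _ in range(1_000_000)]
elderly = [age for age in population if age >= 70]

def biased_sample(n):
    """Draw n people, with 90% of draws restricted to the 70+ group."""
    return [random.choice(elderly if random.random() < 0.9 else population)
            for _ in range(n)]

for n in (1_000, 100_000):
    estimate = sum(biased_sample(n)) / n
    print(f"n = {n:>7}: estimated mean age ~ {estimate:.1f}")

# Both estimates land near ~76 rather than ~45. Increasing n makes the
# wrong answer more precise; it never makes it correct.
```

The larger sample only narrows the spread around the biased value, which is exactly the kind of systematic error you describe.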
Thank you, that’s an important point to make. There’s a common belief that big samples are automatically more trustworthy than small ones, but that is far from the truth.
The methodology is what’s vital to the data’s significance.