# MATH 225N Discussion Confidence Intervals

I took a data set of n = 35 height values measured in inches, and I used our Week 6 Excel spreadsheet calculator ( please see the Week 6 Files area after first clicking Files along the left of the screen ) to calculate 4 confidence intervals with the same sample mean, the same sample size, and the same population standard deviation.

The 4 confidence levels were ( respectively ) 99%, 95%, 90%, and 85%.

Notice that the 4 margins of error were ( respectively ) 1.57 inches, 1.19 inches, 1.00 inches, and 0.88 inches.

If this is not a coincidence, that is, if this trend / pattern holds up in general ( at least for the confidence intervals for one population mean that we study and learn about in this course ), then how would you put that pattern / trend into words?

In other words, all other things being equal ( the same or "fixed" ), what happens to the margin of error when the confidence level is increased? ( Or decreased? )
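To see the pattern concretely, here is a minimal Python sketch (standard library only). The `margin_of_error` helper is my own naming, and the population standard deviation of 3.5998 inches comes from later in the thread; the sketch reproduces the four margins of error above:

```python
from math import sqrt
from statistics import NormalDist

def margin_of_error(conf_level, sigma, n):
    """z-based margin of error: critical z times sigma / sqrt(n)."""
    z = NormalDist().inv_cdf((1 + conf_level) / 2)  # two-sided critical value
    return z * sigma / sqrt(n)

sigma, n = 3.5998, 35  # population SD and sample size from the thread
for level in (0.99, 0.95, 0.90, 0.85):
    me = margin_of_error(level, sigma, n)
    print(f"{level:.0%} confidence: margin of error = {me:.2f} inches")
```

Running this prints margins of error of 1.57, 1.19, 1.00, and 0.88 inches for the four confidence levels, matching the spreadsheet values: the only thing changing is the critical z value, which shrinks as the confidence level drops.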

Thanks, Friends, and please see the attachment ( use the leftmost tab along the bottom of the spreadsheet, the z confidence interval for sample sizes larger than 30 ) 😉

The original height data used are attached to the next Post.

That is where the sample mean and population standard deviation came from.

I used the population standard deviation rather than the sample standard deviation because the sample size of n = 35 was larger than 30.


I tried another experiment with confidence intervals.

I calculated 4 95% confidence intervals all using the same sample mean of 65.8857 inches and the same population standard deviation of 3.5998 inches.

The only difference among the 4 95% confidence intervals is that I based the calculations of the confidence interval endpoints ( left endpoint and right endpoint ) on 4 different sample sizes: n = 35, then n = 70, then n = 105, then n = 140.

The 4 ( respective ) margins of error were 1.19 inches, 0.84 inches, 0.69 inches, and 0.60 inches.

If this is not a coincidence, that is, if the confidence interval for one population mean that we study and learn about in this course always behaves according to this trend / pattern, how would you put this trend / pattern into words?

That is, all other things being equal ( the same or "fixed" ), what happens to the margin of error as the sample size is increased?

Another way of thinking about this is: if you use a larger sample size, you are incorporating "more information" into the analysis, so what is the "reward" for doing that in this case?
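The four margins of error in this experiment can be reproduced with another short standard-library Python sketch (the 3.5998-inch population standard deviation is the one given in the post; the variable names are my own):

```python
from math import sqrt
from statistics import NormalDist

Z_95 = NormalDist().inv_cdf(0.975)  # two-sided 95% critical z, about 1.96

sigma = 3.5998  # population standard deviation from the post
for n in (35, 70, 105, 140):
    me = Z_95 * sigma / sqrt(n)
    print(f"n = {n:>3}: margin of error = {me:.2f} inches")
```

This prints 1.19, 0.84, 0.69, and 0.60 inches, matching the spreadsheets. Because the sample size enters the formula as a square root in the denominator, quadrupling n (35 to 140) halves the margin of error (1.19 to 0.60), rather than quartering it.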

😉

The n = 35 spreadsheet is attached to this Post, and the spreadsheets for n = 70, n = 105, and n = 140 ( respectively ) are attached to 3 further Posts just below.

Thanks, Friends, and please see the attachments ( use the leftmost tab along the bottom of each spreadsheet, the z confidence interval for sample sizes larger than 30 ) 😉

Be sure to focus on just the 95% confidence interval and its margin of error in each of the 4 spreadsheets.

Thanks Friends and Very Best Wishes and Good Luck during this Week 6 !!

This Week is a busy one because we have to complete the Week 6 Knewton Quiz and also get off to a great start on the Week 7 lab turn-in assignment in the SAME Week !!

Good Luck and THANK YOU so much for your hard work !!

Intuition tells me that a bigger sample increases your odds of getting closer to the true value; therefore, the error is reduced.

The law of large numbers says that as your samples get larger, the mean of the sample gets closer to the true mean and the standard deviation gets smaller (Holmes et al., 2018). I'm not clear yet how that relates to the margin of error; as I said before, they seem kind of similar to me. Please clarify that some more.

Thanks,

Holmes, A., Illowsky, B., & Dean, S. (2018). *Introductory business statistics*. OpenStax.

The margin of error gets larger with the confidence level because if we want to be more confident that the population mean we are looking for is in the interval, we need to make the interval wider.

I’m still a little confused about the relationship between the margin of error and the standard deviation, though, if there is one.

This gets me too. What I have read is that they seem to be the same, but what I took from everything I looked up, and that was multiple sites, is that the standard deviation and the margin of error work together? Not sure if that is right, but it is what I took from my research of the two. Per Glen (2020): "The margin of error is the range of values below and above the sample statistic in a confidence interval. The confidence interval is a way to show what the uncertainty is with a certain statistic (i.e. from a poll or survey). With Standard Deviation, you must know the population parameters to calculate it. The margin of error can be calculated in two ways, depending on whether you have parameters from a population or statistics from a sample: Margin of error = Critical value x Standard deviation for the population. Margin of error = Critical value x Standard error of the sample" (p. 1).

### Reference:

Glen, S. (2020). Margin of error: Definition, how to calculate in easy steps. *Statistics How To*. https://www.statisticshowto.com/probability-and-statistics/hypothesis-testing/margin-of-error/

This Week 6 our confidence intervals are

( point estimate – margin of error , point estimate + margin of error )

which, by the way, means that the entire width / length of the confidence interval is two times the margin of error.

Also the midpoint of the confidence interval is the point estimate here in Week 6 .

Meaning that if we add the confidence interval endpoints together and divide by two, we get the point estimate back.

The point estimate for the confidence interval in the mean context is the sample mean xbar

The point estimate for the confidence interval in the proportion context is the sample proportion phat

It is important to note that within Knewton they sometimes use the symbol p' for the sample proportion rather than the symbol phat.

The formula for the margin of error contains several elements / pieces / ingredients that don’t change no matter what confidence level is used to calculate the confidence interval ( to calculate the confidence interval endpoints ) .

The only piece / element / ingredient in the formula for the margin of error that changes with different confidence levels is the critical value.

In some cases this Week 6 we are talking about a critical z value here and in some cases this Week 6 we are talking about a critical t value here.

But focusing on the cases where we use a critical z value during this Week 6 , the critical z value changes from 1.645 to 1.96 to 2.575 as the confidence level goes from 90% to 95% to 99%

So in calculating the margin of error for the confidence interval for each of these confidence levels, we multiply some fixed quantity times an ever increasing number ( ever increasing critical z value ) as the confidence level goes higher and higher.

That is why “all other things being equal,” the margin of error increases with increasing confidence level.
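Those three critical z values can be computed directly from the standard normal distribution; this sketch (standard library only, my own framing) shows where they come from:

```python
from statistics import NormalDist

# Two-sided critical z value: the (1 + confidence level) / 2 quantile of N(0, 1),
# so that the middle "confidence level" share of the area sits between -z and +z.
crit = {level: NormalDist().inv_cdf((1 + level) / 2) for level in (0.90, 0.95, 0.99)}

for level, z in crit.items():
    print(f"{level:.0%} confidence: critical z = {z:.3f}")
```

This prints 1.645, 1.960, and 2.576; note that many textbook tables round the 99% value to 2.575, which is the figure used above, and the tiny difference does not change the margins of error at two decimal places.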

**All this stuff mentioned here recently would be a really good answer to one of the important questions near the end of the Week 7 lab assignment !!**

Thanks Barb and Elaine and Everyone have a Terrific day and a Fantastic Week 6 !!

One of the things I track is door-to-electrocardiogram time (D2EKGT) and door-to-balloon time (D2BT) for ST elevation myocardial infarction (STEMI) patients. EKG, taken from the German spelling, is often used because ECG can be confused with echocardiograms. Studies have shown that getting the EKG done in 10 minutes or less translates to quicker reperfusion times. One such study was done at the National Taiwan University Hospital in 2014 (Lee et al., 2019) and showed that the D2EKGT was the most critical interval of delay in getting patients to the cath lab and reperfused. Therefore, it is something my department keeps a watchful eye on with every STEMI.

In that study, they created an EKG station in the triage area so they wouldn't have to transport the patients to another area to do an EKG. Patients who arrived after the new EKG station was created were the intervention group, and their D2EKG and D2B times were compared to a somewhat equal number of patients that came to the emergency department in the months before the EKG station was put in the triage area; those were the control group (Lee et al., 2019). The required sample size was 62 patients in each group (intervention and control) with 80% power and a Type I error of 0.05 (Lee et al., 2019). This isn't covered until Chapter 9 in our text, and since I found Chapter 8 confounding enough, I will worry about what that means later. However, the following is something easy to understand. One example in the study compared the D2EKGT of walk-in patients and how it affected the D2BT before and after the change in the triage area (Lee et al., 2019): 93.3% of walk-in patients got their EKGs in <10 mins after the change vs. 79.8% before the EKG station was set up in the triage area. This translated to 91.1% of those patients having a D2BT <90 mins vs. 76.2% before (Lee et al., 2019).

If I wanted to reproduce that in my hospital, I think I'd have to apply the confidence interval for the population mean. As per this week's online lesson (Chamberlain University, 2021), interpreting a confidence interval is based on repeated sampling. A 95% confidence interval means that if I had 100 different samples, each with a different mean, 95 of the resulting intervals would contain the population value and 5 would not. I hope my understanding is correct that they mean sets of 100 values, so you can get varying means. For the 93.3% of walk-in patients who got their EKG in <10 minutes in the Taiwan study, to replicate that with a 95% confidence interval, and assuming 100 patients, the calculated lower limit would be 89 patients and the upper limit would be 98 patients. The Excel calculations show fractions, but as per our lesson, we cannot go below a minimum and we cannot sample a fraction of a person, so we round up, not down (Chamberlain University, 2021).

Lowering the confidence level to 90% changes the lower and upper limits to 90 and 97, respectively. Raising the confidence level to 99% changes the lower and upper limits to 87 and 99. The 90% level gives us a narrower interval range, but we'd rather have the higher confidence of 95%, right? That is the "trade off" described on page 340 of our text and demonstrated by the curve in Figure 8.6 (Holmes et al., 2018). Increasing the confidence level to 99% makes the interval even wider.

So, if my understanding is correct, and I wanted to replicate the success seen in Taiwan with 95% confidence, I’d need 89 to 98 patients of 100 to get their D2EKG times to <10 minutes. Is that correct?
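Elaine's limits can be checked with a hedged sketch of the normal-approximation (Wald) confidence interval for a proportion; I am assuming her Excel sheet uses this same formula, with a sample proportion of 0.933 and n = 100 patients:

```python
from math import sqrt
from statistics import NormalDist

p_hat, n = 0.933, 100  # 93.3% of walk-in patients, assumed sample of 100
z = NormalDist().inv_cdf(0.975)  # two-sided 95% critical z, about 1.96

se = sqrt(p_hat * (1 - p_hat) / n)  # standard error of the sample proportion
lower, upper = p_hat - z * se, p_hat + z * se

print(f"95% CI for the proportion: ({lower:.3f}, {upper:.3f})")
```

This gives roughly (0.884, 0.982), which corresponds to the 89-to-98-patients-out-of-100 range in the post once the fractional counts are rounded as the lesson describes.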

Thanks,

Elaine

Chamberlain University. (2021). MATH225. *Week 6: Confidence intervals* (Online lesson). Downers Grove, IL: Adtalem.

Holmes, A., Illowsky, B., & Dean, S. (2018). *Introductory business statistics*. OpenStax.

Lee, C., Meng, S., Lee, M., Chen, H., Wang, C., Wang, H., Liao, M., Hsieh, M., Huang, Y., Huang, E., & Wu, C. (2019). The impact of door-to-electrocardiogram time on door-to-balloon time after achieving the guideline-recommended target rate. *PLoS ONE, 14*(9), 1-14. https://chamberlainuniversity.idm.oclc.org/login?url=https://search.ebscohost.com/login.aspx?direct=true&db=a9h&AN=138516728&site=eds-live&scope=site

You mentioned a couple of things in your Post related to hypothesis testing, so I wanted to address them here, since next Week 7 we will all find ourselves immersed in studying and learning about hypothesis testing, which is also called significance testing or null hypothesis significance testing. 😉

At a certain part of the hypothesis testing process we make what is called the “decision.”

The decision is one of two things: either “Do Not Reject the null hypothesis” OR “Reject the null hypothesis.”

When we reject the null hypothesis, we say that “the sample data were statistically significant” or that in the study “statistical significance was achieved.”

Incorrect decisions occur in hypothesis testing in the so-called real world even when quantitative studies are set up and conducted perfectly, and even when the data and results are analyzed perfectly ( more about all that in Week 7 ).

So we codify these possibilities for incorrect decisions by giving them names: one incorrect decision is called a Type I Error, and the other is called a Type II Error.

The probability of a Type I Error occurring is alpha.

The probability of a Type II Error occurring is beta.

"Statistical Power" is the probability of rejecting a false null hypothesis, which is 1 minus beta.

So when you wrote in your Post that the probability of a Type I Error was 0.05, you were writing that the probability of rejecting a true null hypothesis is 0.05.

Rejecting a true null hypothesis is an incorrect decision.

The probability of a Type I Error is called the alpha significance level.

And when you wrote in your Post that the "statistical power" was 0.80, what that means is that the probability of rejecting a false null hypothesis was 0.80.

Rejecting a false null hypothesis is a correct decision.

By the way, a Type I Error is also called a "false positive."

And a Type II Error is also called a "false negative."

Thanks Elaine for your Terrific Post here and for your hard work and effort and Super results all throughout the course !

I really appreciate it very much !

Have a Wonderful day Elaine and Everyone !!