Step 1: IQ scores on the Stanford-Binet IQ test are normally distributed with a mean of 100 and a standard deviation of 15. Suppose you obtain 100 different simple random samples of size 20 from the population of all adult humans and construct a 95% confidence interval for the mean from each sample. The question asks how many of these intervals you would expect to include 100.
Step 2: By the definition of a 95% confidence level, each interval has probability 0.95 of containing the true population mean. By linearity of expectation, the expected number of intervals that include the mean is therefore the total number of intervals multiplied by the confidence level expressed as a proportion.
Step 3: Let \(I\) denote the total number of intervals and \(P\) the confidence level as a proportion. The expected number of intervals that include the mean is \(E = I \times P\).
Step 4: Substituting the given values \(I = 100\) and \(P = 0.95\) into the formula gives \(E = 100 \times 0.95 = 95\).
Step 5: Final Answer: One would expect \(\boxed{95}\) of the 100 intervals to include the population mean of 100.
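As an optional sanity check (not part of the original solution), a short simulation can illustrate the result. The sketch below assumes the population standard deviation is known, so each interval is a z-interval \(\bar{x} \pm z_{0.975}\,\sigma/\sqrt{n}\); the random seed and variable names are choices made for this example, and the coverage count will vary from run to run but should be close to 95.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)      # seed chosen only for reproducibility

mu, sigma = 100, 15                 # Stanford-Binet mean and standard deviation
n, num_intervals = 20, 100          # sample size and number of samples from the problem
confidence = 0.95

z = stats.norm.ppf(1 - (1 - confidence) / 2)   # about 1.96
margin = z * sigma / np.sqrt(n)                # margin of error with sigma known

covered = 0
for _ in range(num_intervals):
    sample = rng.normal(mu, sigma, size=n)
    xbar = sample.mean()
    # Count the interval if it contains the true mean
    if xbar - margin <= mu <= xbar + margin:
        covered += 1

print(f"Intervals covering mu = {mu}: {covered} out of {num_intervals}")
print(f"Expected number: {num_intervals * confidence:.0f}")
```

A typical run reports a count near 95, matching the expected value \(E = 100 \times 0.95\).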