**Moderators:** MelissaB, kgudger, Ray Trent, Moderators

13 posts
• Page **1** of **1**

Hello Science Buddies,

My question has five parts and it begins on your "Sample Size: How Many Survey Participants Do I Need?" page. The last line above the table states, "For the most part though, the 1/√ N approach is sufficient."

1) Is the symbol before the variable a square root sign (maybe called a radical)?

2) If so, have I completed these problems correctly?

I located the square root of the variable and then divided one by the result. (Parentheses show the numbers each percentage applies to.)

1/√63 = 0.126 or 12.6% Margin of Error (Year 1 & 2 = rate of words per minute 105, 121, --, --)

1/√47 = 0.146 or 14.6% Margin of Error (Year 2 = rate of words per minute 104, 123, --, 93)

1/√16 = 0.25 or 25% Margin of Error (Year 1 = rate of words per minute 106, 119, 93, --)
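The three calculations above can be checked with a short script (a sketch; `margin_of_error` is just a helper name, and the trial counts are the ones from the data above):

```python
import math

# Rule-of-thumb margin of error ~ 1/sqrt(N) for a sample of N trials
def margin_of_error(n):
    return 1 / math.sqrt(n)

for n in (63, 47, 16):
    moe = margin_of_error(n)
    print(f"N = {n:2d}: 1/sqrt(N) = {moe:.3f} ({moe * 100:.1f}%)")
```

Running it reproduces the 12.6%, 14.6%, and 25% figures.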

3) How should an experiment address unusable trials? (For instance, several of the tests were interrupted, in a couple the background noise interfered with the audio recording, one reader had no college education, and two readers were not wearing their glasses during the experiment, so the student was unable to use those trials.) She noted it in the log notes and on the data chart (47/54 trials) in her log book, but I didn't know if it should be recorded elsewhere.

4) Generally speaking, what Margin of Error would be considered appropriate for a 6th grade student project at a competitive level?

5) Over two years, four different reading methods were tested on three different adult populations. The first two methods were repeated each year, giving larger overall trial sizes. The last method had 47 trials, which seems a decent number, but the third method only had 16 trials. Next year, she is planning on testing all four methods on two different child populations (she is forced to combine two of the populations) and had planned on comparing the results to prior years. Should the third method be included in the comparison, or should it be dropped because the trial number is so small? Or does it matter at all, because even with 63 trials the Margin of Error means the results are too close to state with reasonable certainty that method two provides the best results? (Her dependent variables were TIME and ERROR--and the Error rates were even closer!)

| Method | 1 | 2 | 3 | 4 |
| --- | --- | --- | --- | --- |
| Year One | 106 | 119 | 93 | -- |
| Year Two | 104 | 123 | -- | 93 |
| Trial #s | 63 | 63 | 16 | 47 |

Thank you so much for your help!

Laurie


Hi Laurie,

1) yes

2) yes

3) If you want to be on the safe side, you should exclude trials that are clearly flawed. If this gives you very little data, you can report both the results with the flawed trials and those without to see if there is much of a difference. This often results in some hand-waving, which people don't like so much.

4) The grade level isn't that important, although one might argue that having a 6th grader do 500 trials is a bit much. The more trials the better, so the student should do as much as she can and report as much as she can, explaining any issues that come up. The trials with N = 47 and 63 are good; the margin of error is narrow enough that the student might make some credible conclusions. N = 16 gives too wide a margin, but it can still be reported in comparison to the others, and the student might claim that it reflects a similar trend, albeit roughly.

5) If I read it correctly, in years one and two there were 63 trials for each of methods 1 and 2. In year one, method 1 was 106 and method 2 was 119. 119 - 106 = 13, which is right about at the edge of the margin of error (106 +/- 13% and 119 +/- 13%). It would be dicey to argue that there was a statistically significant difference. In year two, 123 - 104 = 19 looks better (104 +/- 13% and 123 +/- 13%) and would support the argument of a slight but significant difference. The missing data for year one method 4 and year two method 3 make those comparisons impossible, if I am understanding things correctly. Have you tried doing a Student's t-Test? It gives a p-value, where p < 0.1 is passable and p < 0.05 is good.
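As a sketch of what such a test computes, here is the Welch variant of the two-sample t statistic (it does not assume equal variances) applied to made-up words-per-minute samples, since the raw per-trial data isn't in the thread:

```python
import math
import statistics

def welch_t(sample_a, sample_b):
    """Welch's two-sample t statistic: difference of means divided by
    the combined standard error of the two samples."""
    ma, mb = statistics.mean(sample_a), statistics.mean(sample_b)
    va, vb = statistics.variance(sample_a), statistics.variance(sample_b)
    na, nb = len(sample_a), len(sample_b)
    return (ma - mb) / math.sqrt(va / na + vb / nb)

# Hypothetical per-trial readings for two methods (NOT the project's real data)
method_1 = [101, 108, 99, 112, 104, 110, 106, 103]
method_2 = [118, 125, 121, 130, 115, 122, 119, 124]

t = welch_t(method_2, method_1)
# As a rough rule, |t| above ~2 for samples this size suggests p < 0.05
print(f"t = {t:.2f}")
```

A statistics package (or a t-table) turns the t statistic into the p-value; the point of the sketch is only to show that the inputs are just means, variances, and sample sizes.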

I get the impression that the student did find a slight but significant difference. Comparing differently structured groups across time (i.e. combining groups) may lead to some hand-waving, but if the combined groups are similar and you get a larger value of N, you will probably get more significant results, which could be argued as an improvement in the study. If the student could get N above 100, that would be fantastic. This is as far as I can get with what I can glean from your message. It's pretty common in science that clear statistical significance is hard to obtain with the resources at hand, so don't let that get you down.

Good luck, and keep up the good work!


Heinz Hemken

Mentor

Science Buddies Expert Forum


Hello Heinz Hemken,

Thank you for walking me through my questions. I am clear on 1 - 4.

5. Additional info: The tests completed in both years were applied using the same controls, except that during the 1st year she used a set of three methods and in the 2nd year she changed one method in the set--which left her with a small trial number of 16 for one of the reading methods. The numbers you received are the averaged numbers showing the results across ALL stress levels (no stress, moderate stress, & mild stress) within a reading method. Test 2, Method 2 was her target group. The differences between methods on Test 2 were remarkably consistent between years: 29 & 30, and 69 & 69. (The other test results were also very similar.)

Year 2

Test 1, no stress: (method 1)145, (method 2)157, (method 4)128

Test 2, moderate stress: (method 1)29, (method 2)69, (method 4)40

Test 3, mild stress: (method 1)137, (method 2)143, (method 4)111

I don't know what a Student's t-Test is or how to do one. Will you please work it out on one of the tests? Then I will do so on the other two and check back with you to see that I understand the process at a basic level.

Thank you for your patience.

Laurie


Laurie,

Forgive me for invoking the "exercise left for the reader" trick. Here are some tips on how to do a Student's t-Test; it isn't all that hard:

http://en.wikipedia.org/wiki/Student%27 ... d_examples

http://www.ehow.co.uk/how_8167949_calcu ... ttest.html

http://wn.com/Student%27s_t-test

http://wiki.services.openoffice.org/wik ... T_function (OpenOffice is a free suite of programs much like MS Office; you can download it at OpenOffice.org)

http://www.lulu.com/items/volume_66/386 ... atpdf2.pdf (free statistics textbook! see p. 118, it shows how to do it in OpenOffice)

Please let me know if you still have problems.

Good luck!


Heinz Hemken

Mentor

Science Buddies Expert Forum


Hello Heinz,

My 6th grade students learned mode, median, range, mean, and very basic percentages in 5th grade--they will not be able to compute a Student's t-test. However, for my own knowledge, I've looked over the sites and think I may be able to work it out later this afternoon.

Question: The average numbers I had given you had additional steps applied. First, I had her find the mean. Second, I had her divide the number of words per passage by the average number of seconds to find the average number of words read per second. Third, I had her multiply the answer by 60 to find the average number of words read per minute. (There is probably a better way, but it is the only way I could think of to neutralize the difference in the number of words per passage.)

EXAMPLE:

Step 2:

23 = avg number of words in the passage

47.8 = avg number of seconds it took to read the passage

23 ÷ 47.8 = 0.48 words read per second

Step 3:

0.48 × 60 = 28.8, so 29 = avg number of words read per minute
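The two steps above can be folded into one small helper (a sketch; `words_per_minute` is just an illustrative name). Note that carrying the unrounded intermediate value gives 28.9 rather than 28.8:

```python
def words_per_minute(words_in_passage, avg_seconds):
    """Normalize reading speed to words per minute, so passages of
    different lengths can be compared on the same scale."""
    words_per_second = words_in_passage / avg_seconds
    return words_per_second * 60

# The worked example from above: a 23-word passage read in 47.8 s on average
wpm = words_per_minute(23, 47.8)
print(round(wpm, 1))  # 28.9
```

This is the same normalization idea: divide out the passage length, then rescale to a common unit.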

I am out of my field looking for the p-value, so my questions are becoming vague and foolish. If I work with the mean without allowing for the different lengths of the passages (23, 33, 22 words), won't that affect the outcome? I will no longer be comparing the "same thing," as each result is for the time it took to read a different-length passage. How do I handle this?

Laurie


Laurie,

It is OK to calculate the number of words per minute; this standardizes the data and makes it comparable across test cases and categories.

As to the different lengths of passages, what should have been done was to randomly assign passages to test subjects so that all groups had roughly the same mix of test passages and any skew due to a specific passage would have been present in all test groups. Since this is a 6th grade experiment, I think you should ignore it. If the test groups each had their own passage to read, then in some sense you are comparing apples to oranges unless the passages are all very similar. If that is the case and someone asks about it, you'll have to wave your hands a bit and say that in the future you'll randomize text assignment blah blah blah, but you don't think it was very important since vocabulary and sentence structure are similar (or whatever) across the passages. That assumes I am actually understanding things correctly, which may not be the case.

If Student's t-Test is too advanced, then you should probably stick with the margin of error measurement you were already using. I think it's more important that the students use and understand a simple technique instead of using a more sophisticated black box that they do not understand. The object of the game is for them to learn and understand a little secret of the universe that will incrementally increase their insight about the world. Keep it simple, don't go overboard.

If you want, send the raw data and I'll put it in a spreadsheet so that I can see if I understand what's going on or not. Do you already have it in a spreadsheet?

Thanks!


Heinz Hemken

Mentor

Science Buddies Expert Forum


Sorry, submitted 2x, please go to next message re: Zero Affect.

Last edited by Laurie on Thu Mar 22, 2012 6:45 pm, edited 1 time in total.


Hello Science Buddies,

We requested and received constructive criticism from the judges at the county science fair; however, we aren't sure what "Talk more about the zero affect" means or how it relates to the team's project. Their objective is to determine whether the use of a slant board and blocking methods improves reading fluency in 162 students aged 7-14. Please let me know what other info you require to answer the question.

They need the answer before April 30th.

Thank you.

Laurie


Laurie,

I googled "zero effect" and apparently it refers to whether your confidence interval includes the "zero effect" case (http://yatani.jp/HCIstats/EffectSize). I suggest you try some focused googling, e.g.:

https://www.google.com/#hl=en&output=se ... =psy-ab&q="the+zero+effect"+"confidence+interval"

It may be that they want you to talk about whether having the case with no experimental difference within your confidence interval points to a lack of significant difference or not. Search for "zero effect", for example:

http://www.ec.gc.ca/inre-nwri/default.a ... 8&toc=show

www.rbsd.de/PDF/ConfCurves.pdf

The second one is fairly meaty, but you might get the drift from the paragraph with "zero effect" in it. Basically it seems to say that although the zero point may fall within the confidence interval, the width of the interval (and perhaps the fact that the zero point was near one of its ends: "because the interval [0.49, 1.04] included the zero effect of HR = 1") means that there can be, but doesn't necessarily have to be, a measurable difference between the experimental groups. You might argue something similar if your zero-effect point is towards one end of the confidence interval. If your zero-effect point is right in the middle of the confidence interval, then it would be pretty hard to argue that there was a statistical difference.

I hope this helps a bit. I'm not a statistician, so you may want to get a few more opinions.

Heinz


Heinz Hemken

Mentor

Science Buddies Expert Forum


Dear Heinz,

I read the first article, and it is beyond the 6th graders' math in this project. They only worked with the basics: range, outliers, mode, median, and mean. Their margin-of-error knowledge comes from reading the page and using the chart on Science Buddies--not much more.

They used averages on their graphs and had planned on adding a highlighted section showing the margin of error when comparing the data from the Slant Board and the Table. (Median results were close to or the same as the mean.) It will overlap on 2 out of 8 points.

I would rather have them discuss their project knowledgeably than try to memorize something they don't understand--unless you have a suggestion on how to teach this fully in a few weeks.

Thank you,

Laurie


Laurie,

I included those articles for your benefit only. The students have no reason to see them, I'm sure they would be frightening to them. My summary recommendation is:

If the zero point falls within the confidence interval and the zero point is near one of the ends of the confidence interval, it means that there can be but doesn't necessarily have to be a measurable difference between the experimental groups. If your zero effect point is right in the middle of the confidence interval, then it would be pretty hard to argue that there was a statistical difference. If the zero point is outside of the confidence interval, then that means there was a significant difference between the test case and the control.
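Those rules of thumb can be written down as a tiny checker (a sketch; the function name, the "near the edge" threshold, and the example numbers are mine, not from the project):

```python
def zero_effect_check(diff, margin, zero_point=0.0):
    """Classify a confidence interval [diff - margin, diff + margin]
    relative to the zero-effect point, per the rules of thumb above."""
    low, high = diff - margin, diff + margin
    if zero_point < low or zero_point > high:
        return "significant difference (zero effect outside the interval)"
    mid = (low + high) / 2
    # "Near the edge" taken here as more than half the margin from the center
    if abs(zero_point - mid) > margin / 2:
        return "inconclusive (zero effect near the edge of the interval)"
    return "no clear difference (zero effect near the middle of the interval)"

# Hypothetical example: methods differ by 17 wpm with a +/- 13 wpm margin
print(zero_effect_check(diff=17, margin=13))
```

With those example numbers the interval is [4, 30], which excludes zero, so the checker reports a significant difference.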

I suspect your students can understand these as rules of thumb, especially in combination with the diagram in this page: http://www.ec.gc.ca/inre-nwri/default.a ... 8&toc=show

I understand and agree that your students should definitely not end up regurgitating all sorts of weird stuff they don't understand. I think they should be able to learn a few rules of thumb, though; you're in a much better position than I am to determine where the sweet spot is. It might help you to make a diagram like the one in the web page with your data. It's been a while since I've looked at your data, so I don't know how feasible it is. Please let me know if you have difficulty doing so.

Cheers,

Heinz


Heinz Hemken

Mentor

Science Buddies Expert Forum


Dear Heinz,

The second paragraph in your response was clear, but I do not know how to figure it out.

Sadly, I must admit that after reading all three papers, I do not understand the math. I do not even recognize many of the symbols. I may understand a small portion of the text: using a 95% confidence level and a two-tailed curve, it is a mathematical way to determine whether the difference noted in the results is meaningful or not. It also said that smaller trial sizes can use a single tail and larger ones can use two tails.

I can't help my students. This is depressing.

Laurie


Laurie,

Try to draw a diagram of your confidence intervals like the one in http://www.ec.gc.ca/inre-nwri/default.a ... 8&toc=show

Make sure your zero effect point is the vertical line, and see where it crosses your horizontal confidence interval lines. I think that's all you or your students need to do. Don't get distracted by all the more complicated stuff. The judges are almost certainly only looking for a reasonable rule-of-thumb interpretation of the data, not some heavy-duty mathematical treatment.

If you don't think you can draw the diagram, let me know.

Thanks!

Heinz


Heinz Hemken

Mentor

Science Buddies Expert Forum

