Graphs!!!Help!



sciencefairlover3
Posts: 14
Joined: Sat Mar 06, 2010 9:17 am
Occupation: Student
Project Question: n/a
Project Due Date: n/a
Project Status: Not applicable

Graphs!!!Help!

Post by sciencefairlover3 »

Hi! I have a few questions about my graphs. I have bar graphs, and someone told me to add error bars. In Microsoft Excel I have options like error bars with SD, 5%, etc. What would you advise? Why do I need error bars?

One of my judges also mentioned that I need to explain the p-value better, so I learned about H0 and H1 (the null hypothesis and the alternative hypothesis). I had a p-value of 0.02--does that really show that there is a significant difference between the control and experimental groups? What does 0.02 mean? Someone told me there could be a 2% chance of it being a random number--what was she talking about? I had my confidence level set at 95% for my Student's t-test, which I performed in Microsoft Excel. What is the difference between a two-tailed and a one-tailed test? My program did not specify which kind of t-test it was--is there a difference? Are there any other statistical tests I should perform besides the ones I already did? Is there a book I could read about statistics that would not be over my head?

Is it true that standard deviation tells you how close, or how consistent, your colony counts (in my case) are to the mean? The smaller the SD, the closer your colony counts are to the mean, and the bigger the SD, the more inconsistent they are? Do I have that right? Is there anything else I should know about SD? Is that all it tells us?

Thanks so much. I am in 9th grade and I need help with graphs :) I have always done the standard graphs--bars, tables, pies, line graphs--but never statistical analysis, and I am trying to advance to Intel, so I want to make sure I understand everything that I put on my board.
MelissaB
Moderator
Posts: 1055
Joined: Mon Oct 16, 2006 11:47 am

Re: Graphs!!!Help!

Post by MelissaB »

Okay! I am going to try to answer all of your questions, but there are a lot so I apologize in advance if I miss any :).

Typically, for error bars, you either put +/- one standard deviation, or one standard error. It sounds like you have already calculated standard deviation, so you should go ahead and use that. It's a bit tricky to do so in Excel, though! What you will need to do is make another column next to your column where you have the averages that you graphed. Then (do you have Office 2003 or 2007? They are very different) you need to tell Excel to use those values for error bars. Let us know which Excel version you have and we will try to help you more on this.
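If you want to double-check the numbers Excel gives you, here is a minimal sketch in Python (the colony counts are made up for illustration; Excel's STDEV computes the same sample standard deviation):

Code: Select all

import statistics

# Hypothetical colony counts from five plates per group
control = [32, 41, 28, 35, 39]
experimental = [18, 25, 14, 22, 20]

for name, counts in [("control", control), ("experimental", experimental)]:
    avg = statistics.mean(counts)  # the value you graph as the bar height
    sd = statistics.stdev(counts)  # sample SD, same as Excel's STDEV
    print(f"{name}: mean = {avg:.1f}, SD = {sd:.2f}")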

When you show averages, it's also good to show measurements of how your samples varied--e.g. standard deviation or standard error. Here's the reason: Let's say you had two sets of measurements, one with an average of 10 and one with an average of 11. That looks different, right? But now pretend that the standard deviations for both of them are 5. If you put error bars, it is clear that the measurements overlapped quite a bit and that there's probably no real difference. But, if the standard deviation was 0.1, then you have a very significant difference!

Speaking of which, let's talk about significance and p-values. Generally, scientists say that they have found a difference if their p-value is less than 0.05, so your p-value of 0.02 would be considered 'significant' and you can say that you have a statistically significant difference. But you are probably asking what that actually -means-, right?

The goal of statistics is to determine what the probability is that your results were due to random chance alone. I'm not sure exactly what you did for your project, but you mention colonies, so let's say you counted bacterial colonies under two different disinfectants. Let's pretend that the two disinfectants work equally well. However, when you swipe the countertop (or whatever) and then transfer that to an agar plate, it's going to give you slightly different results every time. There's a very small chance that even if the disinfectants work equally well, you might (just by chance) find that there were fewer colonies with one than the other. Of course, the more plates you swipe, the smaller and smaller this chance will get.

The p-value (short for probability value) measures this chance. So, your friend is sort of right--in your case, if there were really no difference, a difference as large as the one you measured would come up by chance only about 2% of the time. Most scientists use a cut-off of a 5% chance (or, as you say, a 95% confidence level), although in cases where it matters much more (such as human health), something may not be considered significant unless the result would occur by chance alone less than 1% or even 0.1% of the time!
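If you ever want to see this with your own eyes, here is a little simulation sketch (not something you need for your board--it assumes Python with NumPy and SciPy, and the colony numbers are invented): two "disinfectants" that really work equally well still come out "significantly different" about 5% of the time.

Code: Select all

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
trials = 10_000
false_positives = 0

# Both groups are drawn from the SAME distribution,
# i.e. the two disinfectants really work equally well.
for _ in range(trials):
    a = rng.normal(loc=30, scale=8, size=10)  # colony counts, group A
    b = rng.normal(loc=30, scale=8, size=10)  # colony counts, group B
    _, p = stats.ttest_ind(a, b)
    if p < 0.05:
        false_positives += 1

# With a 0.05 cut-off, about 5% of the tests come out "significant"
# even though there is no real difference.
print(false_positives / trials)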

Does this make sense?

Excel automatically does a two-tailed test, which is the test you want (it's more conservative). The difference between a one-tailed test and a two-tailed test involves some mathematics, but the essence is that with a one-tailed test you're only testing whether group A is bigger than group B (or only whether it is smaller), whereas with a two-tailed test you're testing both directions at once (i.e., is A different from B?). Even though many hypotheses in science are actually directional like the first example (we expect something to be better or worse than something else), in practice scientists use two-tailed tests nearly all the time because, mathematically, they are more conservative (there is less of a chance of declaring a difference that is really just random chance!).
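If you're curious how the two kinds of tests compare in practice, here is a short sketch (assuming Python with a recent SciPy--the alternative keyword needs SciPy 1.6 or newer--and made-up colony counts):

Code: Select all

import numpy as np
from scipy import stats

# Made-up colony counts for two groups
a = np.array([30, 35, 28, 40, 33])
b = np.array([22, 27, 25, 30, 24])

# Two-tailed: is a different from b, in either direction? (the default)
_, p_two = stats.ttest_ind(a, b)
# One-tailed: is the mean of a greater than the mean of b?
_, p_one = stats.ttest_ind(a, b, alternative="greater")

# When the effect is in the hypothesized direction,
# the one-tailed p is half the two-tailed p.
print(p_two, p_one)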

Can you tell us more about what you did? If you just measured numbers of colonies on two different groups of plates, then a t-test is exactly what you need to do. If you measured more than two groups, though, then you should consider an ANOVA...which, unfortunately, Excel won't do unless you have the Analysis ToolPak add-in installed.

I am afraid I don't know of a good stats reference book that's aimed at the high school level, sorry :(.

Okay, last set of questions: Standard deviation is a measurement of the variability in your samples. So, yes, the bigger it is, the more inconsistent the numbers of colonies were from plate to plate. One simple rule of thumb is that, if your data are roughly bell-shaped (normally distributed), about 95% of your measurements will be within two standard deviations on either side of the mean. So, it tells you something about how variable your sample is.

That was a lot--I hope it all makes sense, but if it doesn't, please post back and I will try to clarify anything you still have questions on.

Re: Graphs!!!Help!

Post by sciencefairlover3 »

Dear MelissaB, thank you so much for helping me by answering all my questions. I do have some more questions for you about stats :) I am not a huge fan of math, and science is math all over, but I do love analyzing my data and looking at the results!

Okay, yesterday I watched a video about hypothesis testing, and it talked about Type I and Type II errors. I tested two different groups, the control group and the experimental group (treated)--now, in order to do a t-test, do I need to state an H0 and an H1, a null and an alternative hypothesis? My p-value on day 5 was 0.02, and for my other group it was 0.32. I had two sets of experimental and control groups, since I was using two different bacteria in my tests. Did I commit a Type I or a Type II error? What is probability--is it a number that can be measured? I am having a hard time understanding the random chance in the p-value :( What does 0.02 mean? I know you said there is a 2% chance of it being totally random. Also, what does my confidence level mean? Is there a certain percentage at which I can be sure that my results are not random? I had my confidence level set at 0.05 (or 95%) according to Excel. Why choose that confidence level? Are there others out there? What does that level tell me? :)


I have Microsoft Excel 2007 (student version). I have an average colony count graph and an average percent reduction graph. Can error bars be placed on both graphs? On which graphs should error bars be placed? How can you tell, by looking at the error bar, how varied the data are around the mean? Is there a rule in stats? I already have the SD, but it is in my tables, not shown as error bars--which kind of error bar should I choose in Excel? Does this have a lot to do with the p-value? Because after reading this, it seems like they are brother and sister :):)

(But, if the standard deviation was 0.1, then you have a very significant difference!) What do you mean by that? What is a confidence interval? Is it the foundation for error bars, or for standard deviation itself? Does it have to do with the 95% level of confidence in the p-value? Can you explain that to me?

Thanks again for helping me :) I cannot tell you what it means to me. My science fair is in 3 weeks, and I am still trying to study and understand statistics ;) It is truly a different way of looking at data :)

Re: Graphs!!!Help!

Post by sciencefairlover3 »

Hi! I have been looking at my error bar options, and there are three different types (standard error bars, standard deviation with percent, and standard deviation). What does each of them mean? I am really going nuts about stats, and I am not sure whether, when I talk to the judges, I should know about all three different types of error bars if I only use one? Thanks again
Craig_Bridge
Former Expert
Posts: 1297
Joined: Mon Oct 16, 2006 11:47 am

Re: Graphs!!!Help!

Post by Craig_Bridge »

I personally have found that students find statistics hard, primarily because it introduces a lot of jargon that is confusing if you try to understand it all at once. Unfortunately, you've got three different things with at least six different pieces of jargon to wade through.

Standard error in statistics is usually defined as the estimated standard deviation of a series of measurements. Nice--define something we don't understand with other things we don't understand yet! Effectively, you come up with the standard error by taking the average of a series of identical experiments and ASSUMING that the average IS THE CORRECT or EXPECTED answer (value). You then calculate how different each of the individual trials was from that GUESSTIMATED EXPECTED VALUE (the average), in a form called the standard deviation: the sum of the squares of the individual "deviations" (the differences between each measurement and the expected value, i.e., the average), with a divisor that "normalizes" it so it doesn't simply grow with the number of samples. The standard error is then the standard deviation divided by the square root of the number of samples. See http://en.wikipedia.org/wiki/Standard_e ... tatistics) for a place to start if you want to try and figure out more of this.

The "Standard Deviation with percent[ages]" is often called the "Relative Standard Deviation". From an engineer's perspective, measuring anything with absolute accuracy is impossible. If you are attempting to measure to within one inch in a mile, that is 1 part of 63360 or to less than 0.00157 percent. In other words, you are looking for at least 5 significant digits in the measurement. That would be impossible without some very highly technical measurement apparatus design. This form (Relative Standard Deviation) simply divides the previously caluculated standard deviation (or "normalizes" it) by the expected value and multiplies by 100 to convert it into a percentage. See http://en.wikipedia.org/wiki/Relative_s ... _deviation for a place to start if you want to try and figure out more of the specifics.

In your case, "standard error bars" are a Microsoft Excel feature, so to say exactly what each option does we would really need the precise version information ("Help > About") from YOUR SPECIFIC Microsoft Excel program, and then do some research to figure out which settings control which "error bar" variation it presents. Error bars in general are a graphical representation of the expected error, derived by ASSUMING some variance (oops, I introduced yet another term--typically the assumption is one standard deviation). Plotting the error bars for different things then gives a crude indication of whether there is any statistical significance to the differences, or whether they are just measurement deviations. There are far better statistical significance tests that provide a sounder basis for declaring statistical significance, insignificance, or indeterminacy.
-Craig

Re: Graphs!!!Help!

Post by MelissaB »

Hi,

Okay, let's start by backing up a bit--I'm somewhat confused that you got two very different p-values for 'different groups'. Can you start by telling us exactly what you counted/measured/are trying to graph? This will help me explain things to you.

Let's start with statistical hypotheses. The good news is, for t-tests they're pretty simple. The null hypothesis is usually that there is no difference between the two groups, and the alternative hypothesis is that there IS a difference between the two groups. Simple, right?

Once you have your hypotheses, it's a bit easier to define a type I and a type II error. Type I error is the probability of rejecting the null hypothesis even if it is true. So, in this case, it is the probability of saying that there IS a difference, even though there IS NOT a difference. Type II error is the probability of failing to reject the null hypothesis even though it is false. So, in this case, it is the probability of NOT FINDING a difference even though there REALLY IS a difference.

What do I mean by 'really is/is not' a difference? Here we need to back up a bit. Let's say you're comparing the bacterial colonies that result from washing a cutting board with a disinfectant to washing it with just water. If you did this experiment an infinite number of times, you would get the 'true' result. Or, to put it another way, if you measured the height of every single man and every single woman on earth, you would be able to measure the exact average difference in height between men and women. But, it's not practical to repeat an experiment infinity times or to measure every single person. So, we take a SAMPLE of all of the men and women on the planet, preferably randomly (though you will realize right away that 'random' can be very hard to actually achieve!).

Let's take the men and women example for a moment. We know that men are taller, on average, than women--this is the REAL result, what we are trying to estimate with our sample. Let's hypothetically say that the real difference in height is 4 inches. Now, let's say we take a random sample of 5 men and 5 women. Maybe, just by chance, you got a couple of really short men and a couple of really tall women in there, and when you do the t-test, it is not significant at the usual cut-off of 0.05. This means you have committed a type II error--you say there is no difference even though there is! On the other hand, if we pretend for a moment that men and women have the same height, we could also randomly just get some tall men and some short women in our sample...and then we would say there is a difference, even though there isn't--a type I error. The p-value tells you how likely your result would be if there were really no difference, so comparing it to a cut-off is how you limit your chance of a type I error.

So, how can you reduce error? Well, you can start by having a LARGE sample. If you measured 500 men and 500 women instead of 5 men and 5 women, randomly selecting one or two short men wouldn't make much of a difference, would it? It would also help if the difference between men and women were very large--let's say a foot instead of four inches. Of course, you cannot change this difference--but the smaller the difference you expect, the larger your sample should be.
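If you're curious, here is a little simulation sketch of exactly this point (the heights and the 4-inch difference are the invented numbers from above; it assumes Python with NumPy and SciPy): men really are taller, and we count how often a t-test actually detects it.

Code: Select all

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def detection_rate(n, trials=5_000):
    """Fraction of experiments where the t-test finds the (real) 4-inch difference."""
    hits = 0
    for _ in range(trials):
        men = rng.normal(70, 3, size=n)    # heights in inches, SD of 3
        women = rng.normal(66, 3, size=n)  # really 4 inches shorter on average
        _, p = stats.ttest_ind(men, women)
        if p < 0.05:
            hits += 1
    return hits / trials

# Small samples miss the real difference often (type II errors);
# larger samples almost never do.
print(detection_rate(5))   # roughly 0.5
print(detection_rate(50))  # very close to 1.0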

Okay, back to random chance. Let's go back to our sample of 5 men and 5 women. If you took samples of 5 men and 5 women over and over and over, you would find that a certain percentage of the samples showed that men were shorter than women, a certain percentage showed no difference in height, and a certain percentage showed that men were taller than women. If there were really no difference in height, these percentages would be different from the percentages you would get if there were a difference, right? For example, if there were really no difference in height, we might expect a 25% chance of a result that men were shorter than women, a 50% chance of a result of no difference, and a 25% chance of a result that men were taller than women. On the other hand, if there were a 5-inch difference, we might get the result that men were shorter than women only 5% of the time, a no-difference result 30% of the time, and a result that men were taller than women the other 65% of the time (note: I am just making these numbers up, I didn't look up the actual probabilities!). Remember, we're talking about the results of many, many samples--just as, if you were to roll a six-sided die many times, you would expect to get a 1 about 1/6 of the time.

What statistics does is compare the result from your sample to the hypothetical distribution of samples under the null hypothesis. As long as there is variation in the trait you are measuring, there will always be a random chance of getting an 'extreme' sample even if there is no difference. This is a type I error--declaring a 'significant' result even though there is no difference--and your p-value is the probability of getting a result at least as extreme as yours if there were really no difference. It may get very, very small: 0.00000000001, but it will never, ever be zero. Even if there were no difference in height between men and women, there would always be a very, very small probability that you randomly chose the women's basketball team and the men's horse jockey team to sample height from.

As I said before, scientists usually use a 5% chance of a type I error as a cut-off, but some people do use different cut-offs. It is entirely arbitrary that we have decided as a group to use a 5% chance; we could just as easily use a 10% chance or a 2.3456% chance. But, unless you have a good reason for using something else, you should probably go ahead and use what we call an alpha-value (more jargon, I know, I'm sorry) of 0.05 as a cut-off.

I am going to stop this here and address your questions about the standard deviation in another post, because this one is getting really, really long. I hope I have not just confused you further...with statistics, I really like to be able to draw pictures to help you understand things, but on the board I cannot :(.

Re: Graphs!!!Help!

Post by MelissaB »

Now let's talk about standard deviation--Craig has already talked a little bit about this, but I'll give you a concrete example.

Let's say you have two samples, one with a mean of 10 and one with a mean of 11.

Here is one set of samples that gives a mean of 10: 1, 19, 10, 5, 15, and a mean of 11: 2, 20, 11, 6, 16.
Here's another set: 9.8, 9.9, 10, 10.1, 10.2; and 10.9, 10.8, 11.0, 11.1, 11.2.

I encourage you to plot these numbers quickly in Excel or by hand so you can see the difference in variation! The second set of samples varies a lot less than the first sample, right? Which do you think is more likely to represent a 'real' difference, the first group or the second group?

The standard deviations for the first set of samples are 7.28; for the second set they are 0.158. Approximately 95% of your points should be within 2 standard deviations of your mean if you have calculated the standard deviation properly. This is essentially where the 95% confidence interval comes from (if you ignore some jargon).
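If you would like to check these numbers yourself, here is a quick sketch using Python's built-in statistics module (Excel's AVERAGE and STDEV give the same values):

Code: Select all

import statistics

set1_a = [1, 19, 10, 5, 15]              # mean 10, very spread out
set1_b = [2, 20, 11, 6, 16]              # mean 11, very spread out
set2_a = [9.8, 9.9, 10, 10.1, 10.2]      # mean 10, tightly clustered
set2_b = [10.9, 10.8, 11.0, 11.1, 11.2]  # mean 11, tightly clustered

for data in (set1_a, set1_b, set2_a, set2_b):
    print(statistics.mean(data), round(statistics.stdev(data), 3))
# The first two samples give SD = 7.28; the last two give SD = 0.158.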

I'm not sure how you have your graphs set up in Excel, so it's a little difficult to tell you what to do. But, assuming you have the averages for the groups in one column or row, make a column or row next to that with the standard deviation for that average. Then graph the average. Once you have the average, click on the chart somewhere and then go to the 'layout' tab up at the top. Click on the 'error bars' button and then click on 'more error bars options' at the bottom of the menu. Under the 'vertical error bars' tab, down at the bottom there will be a tab called 'custom'. Click on it, then click on 'specify value' and highlight the row or column with your standard deviations in it. Presto, your graph will have error bars!
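If you ever want to make the same figure outside Excel, a bar chart with custom error bars looks something like this in Python (a sketch assuming matplotlib is installed; the means and SDs are placeholders for your own values):

Code: Select all

import matplotlib.pyplot as plt

groups = ["Control", "Experimental"]
means = [30.0, 18.5]  # your column of averages
sds = [4.2, 3.1]      # your column of standard deviations

# yerr draws the error bars; capsize adds the little caps on their ends
plt.bar(groups, means, yerr=sds, capsize=6)
plt.ylabel("Average colony count")
plt.title("Mean +/- 1 standard deviation")
plt.savefig("colonies.png")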

You should have error bars on any graph where you show the mean of multiple measurements. Excel lets you put an error bar on anything (you saw the options for 5% or whatever), but it does not actually *mean* anything unless it is showing the standard deviation or standard error that you have calculated.

I also suggest you start looking at the Wikipedia entries for the various bits of statistical jargon that you are learning. For example, here: http://en.wikipedia.org/wiki/P-value. They are saying the same things we are saying, but with pictures :). So, take a look at those and then post back if you still have questions.

Re: Graphs!!!Help!

Post by sciencefairlover3 »

Melissa, thank you so much!
I have done some studying, and many things are coming together thanks to the explanations provided.
I have successfully generated bar graphs and followed your instructions about error bars. Do I display both minus and plus error bars, or just plus? I have also read that you can tell whether there is a difference between groups by looking at the bars: if they overlap, it means there was no real difference, but if they do not overlap, then there is a difference. Am I correct?

I have used an herb oil and tested it for antibacterial effects on two separate days, day 1 and day 5 (after the bacteria were exposed to the oil for 5 days). I had one control group with ten samples and one experimental group with ten samples. I calculated the SD, mean, SEM, mode, and range for each group, and then I performed a t-test using Excel. I counted colonies for both of my groups on day 1 and on day 5, and did separate t-tests for day 1 and day 5, trying to see whether there was a difference between control and experimental on each day. My p-value for day 1 is 0.125, and my p-value for day 5 is 0.026. I would appreciate it if you could explain the right way to present these p-values to the judges--what do they really mean in my case? From what I have read, I understand that on day 1 there was no difference between the two groups, so I am accepting the null; on day 5 there was a difference--the control performed better than the experimental--so I am rejecting the null and accepting the alternative. Do I need to tell the judges about forming the null and alternative hypotheses? What I am looking for is how my presentation of the statistics should sound to them. Should I display my values on the board?
Please advise.
Thank you so much again.

Re: Graphs!!!Help!

Post by MelissaB »

You're very welcome.

You should display both positive and negative error bars on your graphs. And yes, again ignoring a lot of jargon, if they overlap there's not likely to be a statistical difference, whereas if they don't overlap, there's likely to be a difference between the samples.

Okay, now I understand what you've done. That helps :). I am a little confused, though--your control performed better than your experimental on day 5? So, your control group had fewer colonies? Or more colonies?

Here's what I would say about day one: On day one, I performed a t-test and there was no statistically significant difference between the two groups; in other words, I cannot reject the hypothesis that they had the same number of colonies.

Let me know more about your results on day 5 and I'll help you out with that :).

You should definitely show your p-value on your board! Most people put it in a little box in a corner of their graph, something simple like "t-test, p = 0.026". Sometimes people also put asterisks over groups that are statistically different from one another (so you would have an asterisk over day 5 but not day 1), but I'm not sure that everyone looking at your board will know what that means, so I would go with the actual test name and the p-value. (I just realized it can't really go in a corner, since you probably have both day 1 and day 5 on the same graph--so put each p-value directly over the day it corresponds to.)

Re: Graphs!!!Help!

Post by sciencefairlover3 »

Melissa, thank you so much again. I am so relieved that, finally, I am starting to understand the math behind my project. On day 5, my control group had fewer colonies than the experimental group. It was quite disappointing. I made a dilution of the herb oil, 1/16, and it did not work, but at 100% it worked great and I had no colonies--which meant I did not have any data. I might try to improve on it next year. But yes, the control did better, so it means there is a difference, but in favor of the control? Is that right?
Looking forward to hearing from you.

Re: Graphs!!!Help!

Post by MelissaB »

Yes, you are right--it sounds as if the diluted herb oil actually encouraged bacterial growth and that there was a statistically significant increase in colony growth following application of the diluted herb oil compared to the control. But that's okay! Some of the most important discoveries in science were accidents--the discovery of antibiotics, for example!

However, why do you say you have no data from the non-diluted herb oil? If you had no colonies, that's a result! Or, do you think something else happened, like you didn't manage to transfer any colonies to the plates?

I am glad you are understanding the math! I find statistics to be really fun, kind of like reading a mystery...is my p-value going to be less than 0.05? OR ISN'T IT?!?!

Anyway, joking aside, let me know if you have any more questions.

Re: Graphs!!!Help!

Post by sciencefairlover3 »

Melissa,
my scientist said I would not be able to do any statistics with 0 colonies. That is why I decided to dilute my oil, to get at least some growth. Do you think I was wrong in doing it? With the 100% concentration I had no growth at all, which meant I had nothing to show the judges statistically. Please correct me if I am on the wrong path. I feel really awful now that I might not have done the right thing when I went and did the experiment again with the dilution. What do you do in the case where you have 100% success?
Thanks a million.

Re: Graphs!!!Help!

Post by MelissaB »

Well, then I've got some good news :). First, though--don't worry about doing something wrong! By diluting the herb oil, you just did a second experiment, and I think the judges will really like that.

By scientist, do you mean a mentor? Or a science teacher? Zeros are hard to deal with statistically, but not impossible. Check it out; enter a column of zeros in Excel next to your control data and do a t-test on it. The standard deviation of that sample will be zero since there IS no variance, but it is still defined.
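Here is a quick way to convince yourself (a sketch assuming Python with NumPy and SciPy; the control counts are made up)--a t-test against a column of zeros is perfectly well defined:

Code: Select all

import numpy as np
from scipy import stats

control = np.array([34, 41, 29, 38, 33, 36, 40, 31, 35, 37])  # made-up counts
treated = np.zeros(10)  # 100% herb oil: no colonies on any plate

t, p = stats.ttest_ind(control, treated)
print(t, p)  # a huge t and a tiny p: the difference is very significant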

I would suggest presenting both on your board--give the 100% first and then say you wanted to see if it worked as well diluted, and then present that test :).

The only exception is if your science teacher has specifically said not to do that...if so, you should listen to them.

Re: Graphs!!!Help!

Post by sciencefairlover3 »

Melissa,
I just wanted to express my many thanks for all the help you have provided! I wish I had found the site earlier :( But I am really happy I found it before the state competition, in time to make all the necessary corrections.
I have one more question; it is about the p-value. You mentioned earlier the connection between probability and the p-value. I have been reading that if the p-value is so-and-so, there is a chance of the same results happening again. I am confused about this. If my p-value is 0.02 between the control and experimental groups (the control did better), I know for sure that there is a statistical difference between the two groups, but what about this probability factor? That is the one thing I cannot understand.
I am pasting what you said earlier about the p-value. Please explain to me what it means that my difference could be due to chance and not real. I hope I am not confusing you. Thank you very much.
The p-value (short for probability value) measures this chance. So, your friend is sort of right--in your case, if there were really no difference, a difference as large as the one you measured would come up by chance only about 2% of the time. Most scientists use a cut-off of a 5% chance (or, as you say, a 95% confidence level), although in cases where it matters much more (such as human health), something may not be considered significant unless the result would occur by chance alone less than 1% or even 0.1% of the time!

Re: Graphs!!!Help!

Post by MelissaB »

I'm sorry I'm not being clear. This is one of the toughest points in statistics for students to understand, and really hard to teach without being in front of a chalkboard!

Let's say you're flipping a coin 10 times. On average, you'll get 5 heads and 5 tails, right? But if you repeat the experiment 100 times (i.e., flip a coin 10 x 100 times), at least one of those times you'll probably get all 10 heads or all 10 tails. The percentage of times you got a particular result, if you repeated the experiment an infinite number of times, is the probability of that result. In fact, in this experiment, the probability of getting all 10 heads is 1/2 multiplied by itself 10 times, or about 0.00098. This means that you have only about a 0.098% chance of getting all heads (and the same chance of getting all tails)--you would expect that result only about once every thousand repetitions of the 10 flips!
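You can check this number, or estimate it by brute force, with a couple of lines (a sketch in Python; the number of repetitions is arbitrary):

Code: Select all

import random

# Exact probability of 10 heads in 10 fair flips
print(0.5 ** 10)  # 0.0009765625, i.e. about 0.098%

# Brute-force check: repeat the 10-flip experiment many times
trials = 100_000
all_heads = sum(
    all(random.random() < 0.5 for _ in range(10))
    for _ in range(trials)
)
print(all_heads / trials)  # hovers around 0.001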

Now let's consider your experiment. Let's say there was no difference between the control and the experiment. On average, if you repeated the experiment a bunch of times, the number of colonies would be the same (just like you would get 5 heads and 5 tails). However, if you did the experiment a number of times, you would find lots of results where one of the two groups has an average of 1-10 colonies more, other results where one group has an average of 20-30 colonies more, etc. You might even, even if there were no difference between the control and the experiment, have no colonies on one group of plates and lots of colonies on the other. HOWEVER, this is very very unlikely--it's like flipping a coin 100 times and having all heads come up.

But, rather than just saying that something is unlikely, we actually calculate the probability of it happening. So, the p-value is the probability that you would get your results (for example, no colonies on the experimental plates, lots on the control) *if there was no difference between the experiment and controls*.

So, what a 0.02 p-value tells you is that, if there were really no difference between your control and experimental groups, a result like yours would come up only about two percent of the time. It's like flipping a coin 100 times and having 98 heads and 2 tails come up...you'd probably think the coin was rigged if you did that, right? That's exactly what we do as scientists: if a result would come up by chance less than five percent of the time when there is really no difference between the two groups, we call it 'statistically significantly different', which is just a bit of jargon saying that we believe the result is not due to chance alone.

Does this make more sense? If not, maybe I can find someone else to try to explain it in different words.