Saturday, November 23, 2019
Bell Curve and Normal Distribution Definition
The term bell curve describes the mathematical concept called normal distribution, sometimes referred to as Gaussian distribution. "Bell curve" refers to the shape created when a line is plotted using the data points for an item that meets the criteria of normal distribution. The center contains the greatest number of occurrences of a value and is therefore the highest point on the arc of the line. This point is referred to as the mean, but in simple terms it is the value with the highest number of occurrences (in statistical terms, the mode).

Normal Distribution
The important thing to note about a normal distribution is that the curve is concentrated in the center and decreases on either side. This is significant because the data has less of a tendency to produce unusually extreme values, called outliers, than other distributions do. The bell curve also signifies that the data is symmetrical. This means that, once you have measured the amount of deviation contained in the data, you can form reasonable expectations about the probability that an outcome will lie within a range to the left or right of the center. This deviation is measured in standard deviations.

A bell curve graph depends on two factors: the mean and the standard deviation. The mean identifies the position of the center, and the standard deviation determines the height and width of the bell. For example, a large standard deviation creates a bell that is short and wide, while a small standard deviation creates a tall and narrow curve.

Bell Curve Probability and Standard Deviation
To understand the probability factors of a normal distribution, you need to understand the following rules:

1. The total area under the curve is equal to 1 (100 percent).
2. About 68 percent of the area under the curve falls within one standard deviation of the mean.
3. About 95 percent of the area under the curve falls within two standard deviations.
4. About 99.7 percent of the area under the curve falls within three standard deviations.

Items 2, 3, and 4 are sometimes referred to as the empirical rule or the 68-95-99.7 rule. Once you determine that the data is normally distributed (bell curved) and calculate the mean and standard deviation, you can determine the probability that a single data point will fall within a given range of possibilities.

Bell Curve Example
A good example of a bell curve or normal distribution is the roll of two dice. The distribution is centered around the number seven, and the probability decreases as you move away from the center. Here is the percent chance of the various outcomes when you roll two dice:

Two: 2.78 percent
Three: 5.56 percent
Four: 8.33 percent
Five: 11.11 percent
Six: 13.89 percent
Seven: 16.67 percent
Eight: 13.89 percent
Nine: 11.11 percent
Ten: 8.33 percent
Eleven: 5.56 percent
Twelve: 2.78 percent

Normal distributions have many convenient properties, so in many cases, especially in physics and astronomy, random variations with unknown distributions are often assumed to be normal to allow for probability calculations. Although this can be a dangerous assumption, it is often a good approximation due to a surprising result known as the central limit theorem. This theorem states that the mean of any set of variates drawn from any distribution with a finite mean and variance tends toward the normal distribution as the number of variates grows. Many common attributes, such as test scores or height, follow roughly normal distributions, with few members at the high and low ends and many in the middle.
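To see where the 68-95-99.7 figures come from, here is a minimal Python sketch (not from the original article) that derives them from the normal cumulative distribution function, using only the standard library.

```python
# A minimal sketch of the 68-95-99.7 rule, using only the Python standard library.
from math import erf, sqrt

def normal_cdf(x: float, mean: float = 0.0, std_dev: float = 1.0) -> float:
    """P(X <= x) for a normal distribution with the given mean and standard deviation."""
    return 0.5 * (1.0 + erf((x - mean) / (std_dev * sqrt(2.0))))

def prob_within(k: float, mean: float = 0.0, std_dev: float = 1.0) -> float:
    """Probability that a value falls within k standard deviations of the mean."""
    return normal_cdf(mean + k * std_dev, mean, std_dev) - normal_cdf(mean - k * std_dev, mean, std_dev)

for k in (1, 2, 3):
    print(f"within {k} standard deviation(s): {prob_within(k):.1%}")
# within 1 standard deviation(s): 68.3%
# within 2 standard deviation(s): 95.4%
# within 3 standard deviation(s): 99.7%
```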
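The two-dice percentages listed above can also be reproduced by brute force. The short sketch below (illustrative, not part of the original post) enumerates all 36 equally likely outcomes and counts each total.

```python
# Enumerate every (die1, die2) pair and tally the totals to get the distribution.
from collections import Counter
from itertools import product

totals = Counter(a + b for a, b in product(range(1, 7), repeat=2))

for total in sorted(totals):
    print(f"{total:2d}: {totals[total] / 36:.2%}")
#  2: 2.78%
#  3: 5.56%
# ...
#  7: 16.67%
# ...
# 12: 2.78%
```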
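The central limit theorem can also be seen empirically. The rough simulation below (the sample count, rolls per sample, and bin width are arbitrary choices for illustration) averages batches of dice rolls and prints a crude text histogram: even though a single roll is uniformly distributed, the averages bunch into a bell shape around 3.5.

```python
# A rough central-limit-theorem demo: averages of many uniform dice rolls look normal.
import random

random.seed(0)
SAMPLES = 10_000
ROLLS_PER_SAMPLE = 30

means = [sum(random.randint(1, 6) for _ in range(ROLLS_PER_SAMPLE)) / ROLLS_PER_SAMPLE
         for _ in range(SAMPLES)]

# Crude text histogram: frequency of sample means in 0.25-wide bins.
bins = {}
for m in means:
    center = round(m * 4) / 4
    bins[center] = bins.get(center, 0) + 1
for center in sorted(bins):
    print(f"{center:4.2f} | {'#' * (bins[center] // 100)}")
```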
When You Shouldn't Use the Bell Curve
Some types of data don't follow a normal distribution pattern, and these data sets shouldn't be forced to fit a bell curve. A classic example is student grades, which often have two modes. Other types of data that don't follow the curve include income, population growth, and mechanical failures.
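As a rough illustration (the grade clusters below are invented numbers, not data from the post), the following sketch shows why a bell-curve summary misleads for bimodal data: the mean lands in the valley between the two modes, so far fewer values sit near it than a normal model would predict.

```python
# Hypothetical bimodal "grades": one group centered near 55, another near 90.
import random
from statistics import mean, pstdev

random.seed(1)
grades = [random.gauss(55, 5) for _ in range(500)] + [random.gauss(90, 5) for _ in range(500)]

mu, sigma = mean(grades), pstdev(grades)
# Share of grades within half a standard deviation of the mean.
near_mean = sum(abs(g - mu) <= 0.5 * sigma for g in grades) / len(grades)
print(f"mean = {mu:.1f}, std dev = {sigma:.1f}")
print(f"within half a std dev of the mean: {near_mean:.0%} (a normal fit predicts about 38%)")
```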