Goodness-of-Fit Test Using Python Code – Guide – Analytics India Magazine

An excellent data scientist knows how to handle raw data correctly and never makes improper assumptions while performing data analytics or machine learning modeling. This is one of the secrets with which a data scientist succeeds. For instance, the ANOVA test begins with the assumption that the data is normally distributed. Maximum Likelihood Estimation makes an a-priori assumption about the data distribution and tries to find the most likely parameters. What if the assumptions about the data distribution in the above cases are incorrect? We might jump to wrong conclusions and proceed with further data analysis or machine learning modeling in the wrong direction. With unexpected results, we might try to fine-tune the hyperparameters of the model to improve performance, while the real error lies in the assumption about the data distribution.

One of the traditional statistical approaches, the Goodness-of-Fit test, gives a way to validate our theoretical assumptions about data distributions. This article discusses the Goodness-of-Fit test with some common data distributions using Python code. Let's dive in with examples.

Import the necessary libraries and modules to set up the Python environment.

 # set up the environment
 import numpy as np
 import pandas as pd
 import matplotlib.pyplot as plt
 import seaborn as sns
 from scipy import stats

Uniform Distribution

Let us assume we have a die in our hand. A die has six faces and six distinct possible outcomes, ranging from 1 to 6, if we toss it once. An unbiased die has equal probabilities for all possible outcomes. To test whether the die in our hand is unbiased, we toss it 90 times (more trials ensure that the results are statistically significant) and note down the counts of the outcomes.

 dice = pd.read_csv(path)
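
The CSV file itself is not reproduced here. As a stand-in, the same dataframe can be built directly with hypothetical counts; the values below are illustrative (not the article's dataset), chosen only so that they total 90 tosses:

```python
import pandas as pd

# hypothetical counts of each face over 90 tosses (a stand-in for the CSV file)
dice = pd.DataFrame({
    'face': [1, 2, 3, 4, 5, 6],
    'observed': [19, 11, 17, 13, 16, 14],
})
print(dice)
```

Any set of six non-negative counts summing to 90 would serve the same purpose for following along.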


Since each face of the die is assumed to have equal probability, the outcomes must be uniformly distributed. Hence, we can express the null hypothesis at a 5% level of significance as follows:

The die is unbiased and its outcomes follow the uniform distribution.

Under an ideal uniform distribution, the expected frequencies can be derived by giving equal weightage to each outcome.

 # Total frequency
 total_freq = dice['observed'].sum()
 print('Total Frequency : ', total_freq)
 # Expected frequency
 expected_freq = total_freq / 6
 print('Expected Frequency : ', expected_freq)


 # add the expected frequency column to the dataframe
 dice['expected'] = expected_freq


Let us visualize the data distribution.

 plt.figure(figsize=(6, 6))
 # plot observed frequency
 plt.subplot(211)
['face'], dice['observed'])
 plt.ylabel('Observed Frequency')
 # plot expected frequency
 plt.subplot(212)
['face'], dice['expected'])
 plt.ylabel('Expected Frequency')
 plt.xlabel('Face of die')


Uniform distribution

This is the right time for us to discuss how the Goodness-of-Fit test works. Under ideal conditions, the observed frequencies should be identical to the expected frequencies. But here, the observed frequencies differ a little from the expected ones. The Goodness-of-Fit test evaluates whether this variation is acceptably small. In other words, it tests how well the observed data fits the expected distribution.

This closeness of fit (goodness of fit) is calculated with a statistic called Chi-Square. Mathematically, it is expressed as:

Chi-Square = Σᵢ (Oᵢ − Eᵢ)² / Eᵢ, where Oᵢ is the observed frequency and Eᵢ is the expected frequency of the i-th outcome.

The more the deviation between the observed and expected frequencies, the larger the value of Chi-Square. If the observed frequencies match the expected frequencies exactly, its value will be zero. Therefore, a value close to zero denotes a closer fit.

We can define a helper function to calculate the Chi-Square value.

 # a helper function to calculate the Chi-Square value
 def Chi_Square(obs_freq, exp_freq):
   count = len(obs_freq)
   chi_sq = 0
   for i in range(count):
     x = (obs_freq[i] - exp_freq[i]) ** 2
     x = x / exp_freq[i]
     chi_sq += x
   return chi_sq

The Chi-Square value for our example is calculated as follows.

 # calculate using the helper function
 Chi_Square(dice['observed'], dice['expected'])


Output: 2.8

It should be noted that SciPy's stats module can calculate the same value, as below.

 # calculate using the stats module of the SciPy library
 stats.chisquare(dice['observed'], dice['expected'])


Output: a Chi-Square statistic of 2.8, along with the associated p-value.

To conclude on the null hypothesis, we have to compare the calculated Chi-Square value with the critical Chi-Square value. The critical Chi-Square value can be calculated using SciPy's stats module; it takes as arguments (1 - level of significance, degrees of freedom). The degrees of freedom for Chi-Square are calculated as:

DOF = Number of outcomes - p - 1

Here, p refers to the number of parameters that the distribution has. For the uniform distribution, p = 0; for the Poisson distribution, p = 1; for the normal distribution, p = 2.

The critical Chi-Square value is determined using the code below.

 # critical Chi-Square - percent point function
 p = 0
 DOF = len(dice['observed']) - p - 1
 stats.chi2.ppf(0.95, DOF)


Output: 11.07

If the calculated Chi-Square value is greater than or equal to the critical value, the null hypothesis should be rejected. On the other hand, if the calculated Chi-Square value is less than the critical value, the null hypothesis should not be rejected.
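This decision rule can be wrapped in a small helper. The function name and return strings below are illustrative, not part of the article's code:

```python
from scipy import stats

def gof_decision(chi_sq_value, n_outcomes, n_params, alpha=0.05):
    """Compare a calculated Chi-Square value against the critical value."""
    dof = n_outcomes - n_params - 1
    critical = stats.chi2.ppf(1 - alpha, dof)
    # reject the null hypothesis only when the statistic reaches the critical value
    return 'reject' if chi_sq_value >= critical else 'fail to reject'

print(gof_decision(2.8, n_outcomes=6, n_params=0))  # the die example
```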

Here, for our problem, the calculated value of 2.8 is much less than the critical value of 11.07. Hence, we cannot reject the null hypothesis, i.e., the observed distribution follows the uniform distribution.

An important condition imposed by the Goodness-of-Fit test is that the expected frequency of every outcome should be greater than or equal to 5. If any outcome has an expected frequency of less than 5, it should be combined (added) with an adjacent outcome so that the frequency becomes significant.
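As a sketch of that combining rule, adjacent bins can be merged until every expected frequency reaches 5. The helper and the frequencies below are made up for demonstration, not taken from the article's datasets:

```python
def combine_small_bins(observed, expected, min_expected=5):
    """Merge each bin with a low expected frequency into an adjacent bin."""
    obs, exp = list(observed), list(expected)
    i = 0
    while i < len(exp):
        if exp[i] < min_expected and len(exp) > 1:
            # fold this bin into its right neighbour (or left, at the end)
            j = i + 1 if i + 1 < len(exp) else i - 1
            obs[j] += obs[i]
            exp[j] += exp[i]
            del obs[i], exp[i]
        else:
            i += 1
    return obs, exp

obs, exp = combine_small_bins([2, 3, 10, 9], [3, 2, 10, 9])
print(obs, exp)  # the two small leading bins are merged into one
```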

Normal Distribution

A bulb manufacturer wishes to know whether the life of its bulbs follows the normal distribution. Forty bulbs are randomly sampled, and their lives, in months, are observed.

 data = pd.read_csv(path)
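
Again, the CSV file is not reproduced here. Assuming it has a single 'life' column, a hypothetical sample of 40 bulb lives can be simulated as a stand-in (the mean, spread and seed below are arbitrary choices, not the article's data):

```python
import numpy as np
import pandas as pd

# simulate 40 hypothetical bulb lives in months (a stand-in for the CSV file)
rng = np.random.default_rng(42)
data = pd.DataFrame({'life': rng.normal(loc=36, scale=6, size=40)})
print(data['life'].describe())
```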


We can visualize the data using Seaborn's histplot method.

 sns.histplot(data=data, x='life', bins=8)


visualization of distribution

With the naked eye, the data cannot be confirmed to be normally distributed. We know that a random variable that follows the normal distribution is continuous. Hence, we can easily define bin intervals such that each bin has at least 5 as its expected frequency. Here, in our problem, there are 40 sample bulbs. To have 5 expected samples in each bin, we should have exactly 40/5 = 8 bins in total.

Find the bin intervals that give 5 expected frequencies per bin.

 # mean and standard deviation of the given data
 mean = np.mean(data['life'])
 std = np.std(data['life'])
 bins = 8
 interval = []
 for i in range(1, 9):
   val = stats.norm.ppf(i / bins, mean, std)
   interval.append(val)


The normal distribution ranges from negative infinity to positive infinity. Include negative infinity in the above list (the last ppf value, at probability 1, is already positive infinity).

 interval.insert(0, -np.inf)


normal distribution intervals

To calculate the observed frequencies, we can simply count the number of outcomes falling in these intervals. First, create a data frame with the 8 intervals as below.

 df = pd.DataFrame({'lower_limit': interval[:-1], 'upper_limit': interval[1:]})



Create two columns, one each for the observed and expected frequencies. Use Pandas' apply method to calculate the observed frequency in each interval.

 life_values = list(sorted(data['life']))
 df['obs_freq'] = df.apply(lambda x: sum([i > x['lower_limit'] and i <= x['upper_limit'] for i in life_values]), axis=1)
 df['exp_freq'] = 5
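
Equivalently, the same counts can be obtained in one step with Pandas' cut, which bins values into (lower, upper] intervals just like the lambda above. The toy values and boundaries below are made up purely to illustrate the counting:

```python
import numpy as np
import pandas as pd

# toy data and bin boundaries to illustrate the counting
values = pd.Series([1.2, 2.5, 3.1, 3.9, 5.5])
boundaries = [-np.inf, 2, 4, np.inf]

# pd.cut assigns each value to a (lower, upper] interval;
# value_counts(sort=False) keeps the bins in interval order
counts = pd.cut(values, bins=boundaries).value_counts(sort=False)
print(list(counts.values))  # [1, 3, 1]
```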



We are now ready to perform the Goodness-of-Fit test. We can state our null hypothesis at a 5% level of significance as:

The bulb life follows the normal distribution.

Calculate the actual Chi-Square value using the chisquare method available in SciPy's stats module.

stats.chisquare(df['obs_freq'], df['exp_freq'])


Output: a Chi-Square statistic of 6.4, along with the associated p-value.

Calculate the critical Chi-Square value using the chi2.ppf method available in SciPy's stats module.

 p = 2    # number of parameters
 DOF = len(df['obs_freq']) - p - 1
 stats.chi2.ppf(0.95, DOF)


Output: 11.07

It is observed that the calculated Chi-Square value of 6.4 is less than the critical value of 11.07. Hence, the null hypothesis cannot be rejected. In other words, the life of the bulbs is normally distributed.

Find the Colab Notebook with the above code implementation here.

Find the CSV datasets used above here

Wrapping Up

The Goodness-of-Fit test is a handy approach for arriving at a statistical decision about the data distribution. It can be applied to any kind of distribution and random variable (whether continuous or discrete). This article discussed two practical examples from two different distributions. In both cases, the assumed distribution held true as per the Goodness-of-Fit test. If the assumption fails, the assumed distribution should be changed suitably and the Goodness-of-Fit test performed again.
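As a final sketch of how the same recipe carries over to another distribution mentioned earlier, here is a Poisson variant. The counts, the assumed rate and the binning are all illustrative, not part of the article:

```python
import numpy as np
from scipy import stats

# hypothetical counts of 0, 1, 2, and 3+ events per hour over 200 hours
observed = np.array([72, 75, 37, 16])
total = observed.sum()
lam = 1.0  # assumed Poisson rate

# expected frequencies under Poisson(lam); the last bin collects 3+ events
probs = [stats.poisson.pmf(k, lam) for k in range(3)]
probs.append(1 - sum(probs))
expected = total * np.array(probs)

chi_sq, _ = stats.chisquare(observed, expected)
dof = len(observed) - 1 - 1  # p = 1 estimated parameter (lam)
critical = stats.chi2.ppf(0.95, dof)
print(chi_sq < critical)  # True means we cannot reject the Poisson assumption
```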

It is your turn to find the true distribution of your data!


