Food for thought…


Why is sampling such an issue in particle sizing?

Why can’t I find a small amount of large material in my sample, especially by microscopy or image analysis?

Why do I get sample-to-sample variation, especially at the large end of the particle size distribution?

Take an example. We have 20 mL of sample containing 2 large particles of 100 µm and 1 million particles of 1 µm.

The material is expensive, so we want to supply only 1 mL to a company for particle size analysis.

Is this reasonable?

No. It is better to save the material and not submit it for particle size (or other) analysis. Even if we divided the sample into 2 × 10 mL portions, consider what would happen. The first 10 mL portion would only be representative of the whole if one large particle ended up in the first 10 mL and the other in the second 10 mL. What’s the likelihood of this happening? (See the sketch below for a rough estimate.) Segregation and other issues give a good probability that we’ll have 2 large particles in the first portion and zero in the second, or zero large particles in the first and 2 in the second. With a spinning riffler we are more likely to be able to divide the sample such that one large particle is in the first 10 mL and the second is in the second 10 mL.
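For a rough feel, if we assume each large particle lands independently in either half with equal probability (an idealized, well-mixed picture; real segregation makes matters worse), the binomial arithmetic is simple. A minimal sketch, with the function name ours and purely illustrative:

```python
from math import comb

def split_probabilities(n_large=2, n_portions=2):
    """Probability of each possible count of large particles in the
    first portion, assuming each particle independently lands in any
    portion with equal probability (an idealized, well-mixed sample)."""
    p = 1 / n_portions
    return {k: comb(n_large, k) * p**k * (1 - p)**(n_large - k)
            for k in range(n_large + 1)}

print(split_probabilities())
# {0: 0.25, 1: 0.5, 2: 0.25}
```

Even with perfect mixing, the "representative" one-and-one split happens only half the time; the other half, one 10 mL portion carries all of the coarse mass and the other carries none.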

The table below shows the huge effect that these small numbers of large particles have on the volume or mass distribution.

Now 1 particle of 100 µm has the same mass as 1 million 1 µm particles, because mass scales with the cube of the diameter: (100/1)³ = 10⁶.

For all intents and purposes, the number mean of the sample, and of any sub-samples, is 1 µm. The mass, volume, or value ($) distribution, however, has the following possibilities:

[Table: Food for thought — possible mass/volume distributions of the sub-samples]
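Those possibilities can be reproduced from the numbers already given. A minimal sketch, assuming the sample is split into two 10 mL portions each carrying half the fine particles (all names are ours):

```python
# Volume (and hence mass, at constant density) of a sphere scales as
# d cubed; the pi/6 factor cancels in any fraction, so we can work in
# units of one 1 µm particle.
V_SMALL = 1.0      # volume of a 1 µm particle (relative units)
V_LARGE = 100**3   # volume of a 100 µm particle = 10**6 x V_SMALL

def large_mass_fraction(n_large, n_small):
    """Fraction of total mass contributed by the large particles."""
    large = n_large * V_LARGE
    return large / (large + n_small * V_SMALL)

# Whole 20 mL sample: 2 large + 1,000,000 small particles.
print(f"whole sample: {large_mass_fraction(2, 1_000_000):.1%}")  # 66.7%

# Possible 10 mL portions (500,000 small particles each):
for n_large in (0, 1, 2):
    print(f"{n_large} large in portion: "
          f"{large_mass_fraction(n_large, 500_000):.1%}")
# 0 large -> 0.0%, 1 large -> 66.7%, 2 large -> 80.0%
```

Only the one-and-one split reproduces the 66.7% coarse mass fraction of the whole; the other outcomes swing the coarse fraction anywhere between 0% and 80%.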

However, if we divided the sample into 20 × 1 mL portions, none of the samples could be representative of the whole. Most (at least 18) would contain neither of the 2 large particles and would therefore be unrepresentative. The sample or samples that did contain the large particles (the 2 particles may even find themselves in one vial) would not be representative of the whole either; a quick simulation below makes the point. This type of scenario (a small sample taken from a large sample mass) is standard practice in most industries.
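Here is a quick Monte Carlo of that 20-vial split, assuming each large particle lands in a uniformly random vial (again an idealized, well-mixed assumption; the names are ours):

```python
import random
from collections import Counter

def simulate_vials(n_trials=100_000, n_vials=20, n_large=2, seed=1):
    """Tally how the 2 large particles distribute over 20 x 1 mL vials."""
    rng = random.Random(seed)
    outcomes = Counter()
    for _ in range(n_trials):
        vials = [rng.randrange(n_vials) for _ in range(n_large)]
        if vials[0] == vials[1]:
            outcomes["both in same vial"] += 1
        else:
            outcomes["in different vials"] += 1
    return outcomes

print(simulate_vials())
# Roughly 5% of trials put both large particles in one vial and
# roughly 95% put them in two different vials; in every trial at
# least 18 of the 20 vials contain no large particle at all, so no
# single 1 mL vial can be representative of the whole.
```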

It’s the largest particles (and the small numbers thereof) that dominate the mass statistics. Mass or volume is usually equivalent to value ($), as most products are sold or processed on the basis of mass or volume, not number. The standard error (an estimate of how close the sampled mean is to the true population mean) is proportional to 1/√n, where n is the number of particles sampled; the closer we get to the top end of the distribution (the largest particle is the x100), the smaller the number of particles and hence the greater the unreliability of any prediction in this top-end region.
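To put numbers on that 1/√n relationship using this example’s counts (a rough illustration of counting statistics, not a full sampling-error model):

```python
from math import sqrt

# Relative standard error of a count-based estimate scales as 1/sqrt(n).
for label, n in [("1 µm particles in a 10 mL portion", 500_000),
                 ("100 µm particles in the whole sample", 2)]:
    print(f"{label}: n = {n}, 1/sqrt(n) ~ {1 / sqrt(n):.1%}")
# 1 µm particles in a 10 mL portion: n = 500000, 1/sqrt(n) ~ 0.1%
# 100 µm particles in the whole sample: n = 2, 1/sqrt(n) ~ 70.7%
```

The fine end of the distribution is pinned down to a fraction of a percent, while the coarse end, which carries two-thirds of the mass, is uncertain to around 70%.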

The importance of the largest particles in sampling statistics has been recognized since the 1880s, and the webinars on Brunton, Reed, and Richards indicate the impact these fascinating individuals had on sampling theory, where those small numbers of large particles become very important.