In pursuit of perfect data: DLS correlation function

Welcome to the first installment of the blog series following on from our C&EN webinar on the qualitative and quantitative analyses of proteins.

In this series, we are addressing some of the interesting questions posed by audience members.  Today we will take a closer look at the correlation functions resulting from dynamic light scattering (DLS) measurements.  Specifically, we will look at why the correlation function at t=0 (the intercept of the cumulants and distribution fits) is lower than 1, and what values define acceptable data.

In the DLS experiment we monitor the fluctuations in scattering intensity over time.  Essentially, the correlator within the Zetasizer instrument compares the similarity of the initial signal and the signal at another point in time, t.  If the signals can be superimposed exactly, then we have perfect correlation, i.e. a correlation function of 1.  To achieve perfect correlation we would need a system with zero noise and a measurement at exactly t=0, both of which are impossible from a practical standpoint.  In practice, then, a value of exactly 1 is never quite achievable.
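
To make the idea concrete, here is a minimal sketch of a software correlator in Python, assuming a recorded intensity trace sampled at fixed time intervals.  The hardware correlator in the instrument builds this up in real time over many lag channels, but the quantity it estimates is the same normalized autocorrelation; the function name and arguments below are illustrative.

```python
import numpy as np

def correlation_function(intensity, max_lag):
    """Normalized intensity autocorrelation, g2(tau) - 1, from an intensity trace.

    `intensity` is a 1-D array of scattering-intensity samples taken at fixed
    time intervals; `max_lag` is the number of lag channels to evaluate.
    A value of 1 corresponds to the 'perfect correlation' described above,
    which would require a noise-free signal extrapolated to tau = 0.
    """
    intensity = np.asarray(intensity, dtype=float)
    mean_sq = intensity.mean() ** 2
    g2_minus_1 = np.empty(max_lag)
    for lag in range(1, max_lag + 1):
        # <I(t) * I(t + tau)> / <I(t)>^2 - 1, estimated from overlapping samples
        g2_minus_1[lag - 1] = np.mean(intensity[:-lag] * intensity[lag:]) / mean_sq - 1.0
    return g2_minus_1
```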

Now that we have established that it is experimentally very difficult to achieve perfect correlation, what values of the correlation function at t=0 are acceptable, and how can we ensure that we get the best possible data from an experiment?  Ideally, the intercept should be as close to 1 as possible: values greater than 0.8 indicate excellent data quality, while those below 0.8 are of decreasing quality but can still provide perfectly acceptable results.  To assess the limit of acceptable data, it is important to understand that the two main factors influencing the correlation of signals are baseline noise and signal intensity.
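
To give a feel for where that intercept number comes from, the sketch below performs a simple second-order cumulants-style fit, ln(g2 - 1) = ln(beta) - 2*Gamma*tau + mu2*tau^2, and reports the extrapolated intercept beta alongside the 0.8 guideline discussed above.  This is an illustrative fit on synthetic data, not the Zetasizer's own analysis; the function name and example parameters are assumptions.

```python
import numpy as np

def cumulants_intercept(tau, g2_minus_1):
    """Second-order cumulants-style fit: ln(g2 - 1) = ln(beta) - 2*Gamma*tau + mu2*tau**2.

    Returns the extrapolated intercept beta at tau = 0 and the decay rate Gamma.
    Only points where g2 - 1 is positive are used, since the log is undefined otherwise.
    """
    tau = np.asarray(tau, dtype=float)
    y = np.asarray(g2_minus_1, dtype=float)
    keep = y > 0
    mu2, minus_two_gamma, log_beta = np.polyfit(tau[keep], np.log(y[keep]), 2)
    return np.exp(log_beta), -minus_two_gamma / 2.0

# Synthetic, noise-free correlation function with beta = 0.9 and Gamma = 500 s^-1
tau = np.linspace(1e-6, 1e-3, 200)
beta, gamma = cumulants_intercept(tau, 0.9 * np.exp(-2 * 500 * tau))
print(f"intercept = {beta:.2f} ({'excellent' if beta > 0.8 else 'check data quality'})")
```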

At intercept values close to 1, the signal intensity from the analyte particles is large and the background noise is negligible.  As the signal intensity decreases relative to the noise, or the noise increases, the intercept values tend towards 0.  So, does this render these data unusable?  In short, not always!  It is possible to envisage a scenario where even a low intercept could result in good size measurements.  For example, samples with low signal – those at low concentrations or containing very small molecules – can be measured in systems where the background noise is very low by increasing the correlation time.  A longer correlation time means more data, which in turn improves the signal-to-noise ratio.  Therefore, reliable results can still be generated even with intercepts as low as 0.1.
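
As a rough numerical illustration of that signal-to-noise point (a toy model, not a simulation of any real detector), the sketch below estimates g2 - 1 at a short lag from a weak, slowly fluctuating "signal" buried in white noise.  Repeating the estimate with ten times more samples, i.e. a longer correlation time, gives a noticeably less scattered result even though the intercept itself stays low; all parameter values here are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def toy_trace(n_samples, mean=10.0, signal_amp=1.0, noise_amp=2.0, decay=0.99):
    """Toy intensity trace: a slowly decorrelating AR(1) 'signal' plus white detector noise."""
    signal = np.zeros(n_samples)
    for i in range(1, n_samples):
        signal[i] = decay * signal[i - 1] + np.sqrt(1 - decay**2) * rng.standard_normal()
    return mean + signal_amp * signal + noise_amp * rng.standard_normal(n_samples)

def g2_minus_1_at(intensity, lag=1):
    """Single-lag estimate of g2(tau) - 1."""
    return np.mean(intensity[:-lag] * intensity[lag:]) / np.mean(intensity) ** 2 - 1.0

# A longer acquisition (more samples) gives a smaller spread of the estimate across repeats
for n_samples in (5_000, 50_000):
    estimates = [g2_minus_1_at(toy_trace(n_samples)) for _ in range(20)]
    print(f"{n_samples} samples: mean {np.mean(estimates):.4f}, spread {np.std(estimates):.4f}")
```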

In summary, the key to optimal data is improving the analyte signal (e.g. by increasing the concentration), reducing background noise and correlating for a sufficient amount of time.  Common sources of noise include optical flare, variations in laser power and dirt in the dispersant or on the cell windows, all of which will reduce the intercept value and data quality.  Some of these are minimized through detector design, and the remainder can be addressed by following good laboratory practice, such as cleaning the cell, filtering buffers and so on.  In the specific case where the sample absorbs the laser light or fluoresces, the sample itself can reduce the intercept value.  Changing the laser wavelength or using a narrow band filter can improve measurements of such samples.

Coming soon…

Watch out for the next installment in this blog series where we will shed light on the differences between hydrodynamic radius (Rh) and radius of gyration (Rg).