Mountains

Monday, April 4, 2011

Paging Jodi

I have been building a little box that uses industrial temperature controllers to measure temperatures using thermocouples (Type K: Chromel and Alumel). Thermocouples use the energy difference between the electrons in two dissimilar wires to measure the temperature. The gap gets bigger as temperature increases, and thus there is a corresponding increase in measured voltage. This is called the thermoelectric, née Seebeck, effect. The problem is that the voltage difference is in the microvolt range at room temperature (your average DVM is probably only good down to millivolts). That makes the measurement hardware tricky and expensive.
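Just to put a number on "microvolt range", here's a back-of-the-envelope sketch, assuming a roughly constant Seebeck coefficient of about 41 µV/°C for Type K near room temperature (the real conversion uses the NIST polynomial tables):

```python
# Rough scale of a Type K thermocouple signal, assuming a constant
# Seebeck coefficient of ~41 uV/degC and a 0 degC reference junction.
# Real instruments use the NIST polynomial tables instead.
SEEBECK_UV_PER_C = 41.0

def typek_emf_uV(hot_C, cold_C=0.0):
    """Approximate thermocouple EMF in microvolts."""
    return SEEBECK_UV_PER_C * (hot_C - cold_C)

print(typek_emf_uV(25.0))                        # ~1 mV total at room temperature
print(typek_emf_uV(25.1) - typek_emf_uV(25.0))   # ~4 uV for a 0.1 degC change
```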

The box I made can also trigger some big mother SSRs when it feels like it, but I won't worry your little mind about it.

The box can communicate with a computer over EIA-232, responding to commands about every 500 ms. After I figured out that I had swapped the ground and data Tx leads on the DB-9 connectors (Why is the output inverted? OH! *headslap*), it was pretty trivial to glue some LabVIEW together to log temperature data. I set one thermocouple in the window, another on my desk, and let it collect data all weekend.
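The logging itself was LabVIEW, but the idea is just "poll, parse, append". A rough Python/pyserial equivalent would look something like this; the port name, baud rate, and the query string here are placeholders, since every controller speaks its own dialect:

```python
import time
import serial  # pyserial

# Placeholder settings -- the real port, baud rate, framing, and query
# command come from the controller's manual, not from here.
port = serial.Serial("COM3", baudrate=9600, timeout=1)

with open("temps.csv", "a") as log:
    while True:
        port.write(b"READ?\r\n")                       # hypothetical query command
        reply = port.readline().decode(errors="replace").strip()
        log.write(f"{time.time()},{reply}\n")
        log.flush()
        time.sleep(0.5)                                # one sample every ~500 ms
```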

480,609 samples later, I came back from an amazing weekend of rock climbing and copied the data to my flash drive, where it sat for a few weeks and rotted.

Working with big datasets is fun. High-time-resolution data tends to show things that broader measurements miss. Frequently, those things aren't very important, but sometimes the weirdest moles pop up.

Inspecting the raw data, I was surprised to see strong aliasing in the data reported by the temperature controllers. The aliasing is the horizontal lines that the measurements lie on. The analog-to-digital converters (ADCs) in the controllers cannot perceive a difference smaller than 0.1°C; that's the measurement resolution. An astute reader would note that this value is identical to the one found in the controller manual. The manual also lists the precision, the uncertainty in the measurements, as 0.1°C. The measurement accuracy (how far the reading can be from the "true" temperature) is 0.5°C.
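In other words, the reported number is just the true temperature snapped to the nearest 0.1°C step, conceptually something like:

```python
RESOLUTION_C = 0.1  # smallest step the controller can report

def quantize(true_temp_C):
    """Snap a temperature to the nearest reporting step."""
    return round(true_temp_C / RESOLUTION_C) * RESOLUTION_C

print(f"{quantize(22.3749):.1f}")  # -> 22.4
```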

Hey, wow, the two measurements obviously aren't just offset from each other by some constant. That's probably because the thermocouples were sitting in different places in the room, with different albedos and heat capacities. The one in the window gets colder at night and warmer during the day, a classic sign of a lower heat capacity than the desk. Their temperatures really were different... remember, we're splitting hairs by 0.1°C here!

Since I'm already being a dork and analyzing pointless data, why don't I check the precision rating too?

I used a 12.5-second-long (25 measurements) binomial smooth to average the data. This trades time resolution for increased precision in the estimate of the temperature at a given point in time. The binomial smooth is nicer to work with than a boxcar (moving average), though it's less analytically rigorous: it down-weights changes at the edges of the window and stays centered on the sample time.
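A minimal sketch of that smooth in Python/NumPy, assuming the weekend log has been read into an array (the weights are just a row of Pascal's triangle, normalized to sum to one):

```python
import numpy as np

def binomial_smooth(x, n_points=25):
    """Smooth x with normalized binomial (Pascal's triangle) weights."""
    weights = np.array([1.0])
    for _ in range(n_points - 1):
        weights = np.convolve(weights, [0.5, 0.5])   # build the binomial row
    # mode="same" keeps the output centered on the original sample times;
    # the first and last ~12 points are edge-affected and best ignored.
    return np.convolve(x, weights, mode="same")

# temps = np.loadtxt(...)            # however the weekend log was saved
# smooth = binomial_smooth(temps)    # 25 points = 12.5 s at one sample per 500 ms
```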

As a quick sanity check, we can just add error bands the size of the stated precision around the smooth. If lots of data points fall outside the bands we drew, then the precision of the controllers is not up to factory spec.
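Or, instead of eyeballing a plot, just count how many samples escape the band (same hypothetical `temps` and `smooth` arrays as in the sketch above):

```python
import numpy as np

def fraction_outside_band(temps, smooth, precision_c=0.1):
    """Fraction of raw samples farther than the stated precision from the smooth."""
    return np.mean(np.abs(np.asarray(temps) - np.asarray(smooth)) > precision_c)

# print(f"{100 * fraction_outside_band(temps, smooth):.2f}% of samples outside the band")
```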



The result of this quick test is shockingly good. A few points stick out here and there, but mostly they lie in the shaded area.

Since I'm flying the keyboard, I can also take a harder approach: subtract the smoothed values from the measurements. That gives the residuals. In an ideal world, these will be neatly clustered around zero, signifying that there is no bias in the data. The standard deviation of the residuals should also give the measurement precision.
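Again assuming the `temps` and `smooth` arrays from the sketches above, the whole residual analysis is a few lines:

```python
import numpy as np

def residual_stats(temps, smooth):
    """Residuals (raw minus smooth), their standard deviation, and their sum."""
    residuals = np.asarray(temps) - np.asarray(smooth)
    return residuals, residuals.std(), residuals.sum()

# residuals, sigma, total = residual_stats(temps, smooth)
# 2 * sigma should land near the stated 0.1 degC precision,
# and `total` should be near zero if the residuals carry no bias.
```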


That appears to be the case here. If I take the standard deviation of the residuals, I get 0.05; twice the standard deviation is 0.10, which would encompass about 95% of the measurements made. Slick. Additionally, the data does not appear to be skewed... adding up all 480,609 residual values yields a paltry 2.4°, indicating that they are neatly centered on zero.

However, the banding pattern in the residuals is intriguing. Where does that come from?

Never one to shy away from a wild goose chase, I did a Fourier transform on the residuals.
OMG THERE'S A SIGNAL!!!




Not really.

See, aren't you glad you read this?


There is a signal, of sorts, a very weak one. I'm not sure what several of the small spikes are, but the 0.6 Hz one might be an overtone from the power line. That's guessing. I think it's most likely that the spikes are the frequency at which the average temperature crosses measurement gradations during a steady increase or decrease (like in the mornings at 0.6, 0.7, and 0.9 Hz, and at night at 0.1 Hz).
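For the record, the spectrum itself is a one-liner, assuming the `residuals` array from above and the 500 ms sample spacing; at 2 Hz sampling, the Nyquist limit is 1 Hz, so those 0.6 to 0.9 Hz spikes sit near the top of what this data can resolve:

```python
import numpy as np

def residual_spectrum(residuals, dt_s=0.5):
    """Amplitude spectrum of the residuals; frequencies run from 0 Hz up to Nyquist (1 Hz)."""
    amplitude = np.abs(np.fft.rfft(residuals))
    freqs = np.fft.rfftfreq(len(residuals), d=dt_s)
    return freqs, amplitude

# freqs, amp = residual_spectrum(residuals)
```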

I threw you a red herring.

Another way to look at the residuals is to use a histogram.
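That's one call with NumPy, if the bins are chosen so the 0.1°C gradations land on bin centers (a sketch, same hypothetical `residuals` as before):

```python
import numpy as np

def residual_histogram(residuals, width_c=0.02):
    """Histogram of residuals, with bins arranged so 0.0 and +/-0.1 sit at bin centers."""
    edges = np.arange(-0.25, 0.25 + width_c / 2, width_c)
    counts, _ = np.histogram(residuals, bins=edges)
    centers = edges[:-1] + width_c / 2
    return centers, counts

# centers, counts = residual_histogram(residuals)
```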


The residuals aren't neatly grouped around zero! They're clustered around 0.0, 0.1, and -0.1. Sound familiar? The banding in the residuals comes from the measurement resolution: as the true temperature crosses between values the ADC can resolve, the reported value oscillates between the adjacent steps (dithering), producing the pattern we see in the residuals.
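A quick synthetic check of that story, with entirely made-up numbers: park a noisy "true" temperature just below a 0.1°C boundary and the reported value flips back and forth between the two adjacent steps, which is exactly the dithering in question.

```python
import numpy as np

rng = np.random.default_rng(1)

# Made-up example: true temperature hovering just under the 20.05 degC
# boundary, with a little measurement noise.
true_temp = 20.04 + rng.normal(0.0, 0.02, size=20)
reported = np.round(true_temp, 1)        # the controller's 0.1 degC steps
print(reported)                          # flips between 20.0 and 20.1

# A 12.5 s smooth sits near whichever level dominates, so the flipped
# samples leave residuals close to a full +/-0.1 degC -- hence the side
# clusters in the histogram.
```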
