I have been logging the output of the NetworkTimeProtocol function on a CR6. Currently it is connected to the internet via a 3G modem. I observe a lot of jitter over this connection, and it seems like the built-in function just does a single NTP query followed by a single-step adjustment on the clock.
Is this correct?
Yes, currently the instruction uses a simple SNTP type of correction and applies a delay correction based upon the transmit and receive times of a single time check, which can lead to some jitter on connections with variable latency.
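For reference, the standard SNTP calculation (RFC 4330) estimates the clock offset and round-trip delay from the four timestamps of a single exchange, where T1/T4 are the client transmit/receive times and T2/T3 are the server receive/transmit times:

offset = ((T2 - T1) + (T3 - T4)) / 2
delay  = (T4 - T1) - (T3 - T2)

The offset estimate assumes the outbound and return delays are equal, so the variable, asymmetric latency of a 3G link shows up directly as jitter in the correction.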
We are reviewing whether we might optionally support some filtering, or additional verification of the delays, in the future.
One step in the right direction might be to provide a simple function that advances the logger clock by a given number of milliseconds. Then I could keep a running average of the NTP errors and apply the correction when I want.
Is that possible?
There currently isn't a simple function to step the clock like that, but it could be done in a few lines of code.
You can set the NetworkTimeProtocol() NTPMaxMSec parameter to something large, like an hour, so it won't reset the logger's clock, and keep a running average of the measured clock error in milliseconds.
Then use the RealTime() instruction to read the current datalogger time, adjust the appropriate millisecond element in RealTime's destination array, and pass the adjusted time array to ClockSet().
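Here is a minimal sketch of that approach in CRBasic. The server name, scan rate, exponential averaging, and the assumption that NetworkTimeProtocol()'s return value is the measured error in milliseconds are all illustrative; check the CRBasic help on your OS for the exact RealTime array layout and units.

'Sketch only: measure the NTP offset without stepping the clock, keep a
'running average, and apply the averaged correction manually.
Public ntpErr            'offset reported by NetworkTimeProtocol, msec (assumed)
Public avgErr            'running average of the measured offsets, msec
Public rTime(9)          'destination array for RealTime
Public n As Long         'passes since the last clock step
Const ALPHA = 0.1        'smoothing factor for the running average
Const APPLY_EVERY = 12   'apply the averaged step once an hour (12 x 5 min)

BeginProg
  Scan (5,Min,0,0)
    'NTPMaxMSec = 3600000 (one hour) so this call never steps the clock;
    'it only reports the measured error.
    ntpErr = NetworkTimeProtocol ("pool.ntp.org",0,3600000)
    avgErr = avgErr + ALPHA * (ntpErr - avgErr)  'exponential running average
    n = n + 1
    If n >= APPLY_EVERY Then
      RealTime (rTime)               'read current datalogger time into rTime(1..9)
      rTime(7) = rTime(7) - avgErr   'adjust the sub-second element; flip the sign
                                     'or rescale if your OS reports microseconds
      ClockSet (rTime)               'step the clock by the averaged error
      avgErr = 0
      n = 0
    EndIf
  NextScan
EndProg

In a real program the NTP call would likely belong in a SlowSequence so a slow server response can't cause skipped scans.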
I thought about doing that at one point, but then I read:
"If RealTime is run within a scan, the time reflected by the instruction will be the time of the datalogger's clock when the scan was started. If RealTime is run outside of a scan (for instance, within a Do Loop), the time reflected by the instruction will be based on the system clock, which has a 1 msec resolution."
Also, is there a way to prevent the scheduler from preempting the code segment between the RealTime and ClockSet instructions, so they execute back to back?