The stratum 1 myth: use a closer server, network wise, instead!

There is a myth that prevails in the NTP community: that it is better to connect to low-stratum (e.g. stratum 1) servers in order to get more precise time. This is quite often false. The reasons we hear for connecting to busy stratum 1 servers, like the US Naval Observatory ones, are:

  • It is more reliable.
  • It gives more accurate time; it's the USNO!
  • I distribute time to many clients, so I feel justified in connecting to many stratum 1 servers.
  • Worse, some connect to many stratum 1 servers because they figure that by averaging them, ntpd will keep more precise time!

In fact, to keep your clock as accurate as possible, it is more important to connect to nearby servers, network wise, that do the same than it is to connect to stratum 1 servers.

NTP, and most notably ntpd, was designed to give accurate time even when connecting to higher-stratum servers, as long as all the servers up the chain are close to each other network wise. What really kills the accuracy of ntpd is high network delay when it gets its time from far-away servers. Of course, ntpd tries to compensate by measuring the delay it takes to communicate with a remote server and factoring it into the time value returned, but when the server is too far away, that computation becomes imprecise. Ntpd wasn't designed to handle such cases well: it can't handle asymmetrical delays (different delays for the request and the reply to reach their destinations), and the more hops there are between two machines, the more chances for the delays to be asymmetrical.
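To see why asymmetry matters, here is a minimal sketch of the standard NTP offset and delay computation from the four timestamps of one client/server exchange (RFC 5905 notation); the millisecond values below are made up purely for illustration:

# Standard NTP offset/delay computation from one client/server exchange.
# Values are hypothetical, in milliseconds.

def ntp_offset_delay(t1, t2, t3, t4):
    """t1: client sends, t2: server receives, t3: server replies, t4: client receives."""
    offset = ((t2 - t1) + (t3 - t4)) / 2  # correct only if both path legs are equal
    delay = (t4 - t1) - (t3 - t2)         # round-trip network delay
    return offset, delay

# Clocks perfectly agree, symmetric path (10 ms each way): offset comes out 0.
print(ntp_offset_delay(0.0, 10.0, 10.5, 20.5))  # (0.0, 20.0)

# Same clocks, same 20 ms round trip, but asymmetric (15 ms out, 5 ms back):
# the computed offset is now 5 ms even though the clocks agree. The worst-case
# error is half the round-trip delay, which is why far-away servers hurt.
print(ntp_offset_delay(0.0, 15.0, 15.5, 20.5))  # (5.0, 20.0)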

Graphic 1: NTP done right (S = stratum):


If one follows what is illustrated in the graphic above, stratum 4 servers will return time just as accurate as stratum 1 servers:

ntpq -pn

      remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
+128.4.1.1       .PPS.            1 u  185 137m  377   24.115   -0.048   3.079
+132.246.168.9   .PPS.            1 u  182  68m  375   15.931    0.010  10.944
+64.230.242.45   64.230.242.33    4 u  378 1024  377   25.059    0.305   0.594
-69.156.254.2    132.246.168.164  3 u  394 1024  377   27.438    0.532   0.350
-69.156.254.38   64.26.173.192    4 u  383 1024  377   20.380    0.155   0.437
*64.230.159.74   .GPS.            1 u  339 1024  377   20.087    0.390   0.477

As we can see in the ntpq output above, here are the offsets (the difference from our local computer clock) of six servers, along with their stratum:

  • Stratum 1 -0.048 ms
  • Stratum 1 0.010 ms
  • Stratum 4 0.155 ms
  • Stratum 4 0.305 ms
  • Stratum 1 0.390 ms
  • Stratum 3 0.532 ms

Higher-stratum servers will consistently return time as accurate as stratum 1 servers if we follow what is depicted in graphic 1.

Here are some examples of properly configured machines (according to graphic 1) taken from the NTP pool.

Note: Having multiple servers which in turn have the same time source is not poor configuration, contrary to popular belief. The important thing is to have many stratum 1 time sources up the chain. One can have 8 servers configured which in the end get their time from only 3 different stratum 1 servers. This is called redundancy, not misconfiguration! ;-)
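As a rough illustration, a configuration following graphic 1 might look like the ntp.conf fragment below. The hostnames are placeholders, not real servers; the point is simply that every listed source is close by, network wise:

# Hypothetical ntp.conf fragment for a client following graphic 1.
# All hostnames are placeholders; pick servers close to you network wise.
server ntp1.my-isp.example iburst             # nearby stratum 2-4 source
server ntp2.my-isp.example iburst             # nearby stratum 2-4 source
server ntp.nearby-university.example iburst   # another nearby source
server stratum1.nearby.example iburst         # one nearby stratum 1 reference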

ntpq -pn 91.121.13.62
      remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
-81.169.136.18   192.53.103.104   2 u  287 1024  377   26.553   -3.998   0.830
 81.169.141.30   81.169.172.219   3 u  285 1024  377   26.351   -0.532   0.865
*81.169.172.219  192.53.103.108   2 u  261 1024  377   26.277   -1.022   0.847
-213.186.33.99   134.214.100.6    3 u  389 1024  377    4.156    7.660   0.726
+195.13.23.5     195.13.23.6      2 u  484 1024  377   19.811    2.623   0.828
+62.173.184.58   193.204.114.233  2 u  264 1024  377   35.356    1.517   3.272

ntpq -pn 208.113.193.10
      remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
+192.83.249.28   216.218.254.202  2 u  410 1024  377    9.803   -1.226   0.040
*132.239.1.6     .GPS.            1 u  954 1024  367    5.035    0.148   0.017
 66.90.121.136   .INIT.          16 u    - 1024    0    0.000    0.000   0.000
+131.216.22.15   207.200.81.113   2 u  349 1024  377   18.987    4.933   1.449

ntpq -pn 193.138.215.60
      remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
-193.247.72.14   130.149.17.21    2 u  756 1024  377    3.458    0.008   0.221
+195.216.64.208  193.204.114.232  2 u  789 1024  377    2.977   -1.313   0.072
 217.147.223.78  129.132.2.21     3 u    7 1024  377    3.263   -0.676   0.185
-194.97.156.5    192.53.103.104   2 u 1024 1024  377   19.356    1.252   0.454
+193.192.51.156  193.62.22.74     2 u 1012 1024  377   20.402   -0.812   1.575
-193.228.143.12  192.36.144.23    2 u  774 1024  377   38.051    3.585   0.194
 193.138.215.196 129.132.2.21     3 u  751 1024  377    1.841   -0.239   0.083
 212.103.65.133  129.132.2.21     3 u  737 1024  377    1.106   -0.869   0.001
-162.23.41.34    162.23.3.171     3 u  751 1024  377    2.736    2.208   0.116
*129.132.2.21    129.132.2.22     2 u  151 1024  377    2.992   -0.711   0.069
 

Graphic 2: NTP done wrong (S = stratum):


Note: Graphic 2 contains the same number of lines illustrating connections between machines as graphic 1. The machines have only been moved around to illustrate connecting to machines outside your network, which involves more hops, and thus higher chances of asymmetrical delays and less accurate time.

In this scenario, a lower stratum server could well be less accurate than a higher stratum server that follows the correct network path.

Here are some examples of poorly configured machines taken from the NTP pool:

 
Example of querying multiple far-away stratum 1 servers with big delays to get a "better average time"! Ntpd knows better, and it has picked the closest server to synchronize with (the one marked with an asterisk)...

ntpq -pn 216.234.161.11
      remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
-129.6.15.28     .ACTS.           1 u    5  512  377   74.435    4.415   0.095
+129.6.15.29     .ACTS.           1 u  376  512  377   82.130    3.361   4.083
-132.163.4.101   .ACTS.           1 u  356  512  377   47.576    2.227   0.121
+132.163.4.102   .ACTS.           1 u    9  512  377   47.312    2.628   0.087
-132.163.4.103   .ACTS.           1 u  282  512  377   47.521   -3.884   1.047
+128.138.140.44  .ACTS.           1 u   54  512  367   47.706    3.041   0.017
-192.43.244.18   .ACTS.           1 u  313  512  277   47.590   -0.740   2.947
*131.107.1.10    .ACTS.           1 u  291  512  377   21.493    2.983   0.532
-69.25.96.13     .ACTS.           1 u  309  512  377   43.534   -8.693   0.203
-206.246.118.250 .ACTS.           1 u  357  512  377   69.417    1.793   0.189
-208.184.49.9    .ACTS.           1 u  305  512  377   83.015   13.937   0.070
-64.125.78.85    .ACTS.           1 u    2  512  377   38.721   -1.264   4.113
-207.200.81.113  .ACTS.           1 u  267  512  377   43.690   -0.326   0.066
-64.236.96.53    .ACTS.           1 u  373  512  377   91.296   10.303   0.954
-68.216.79.113   .ACTS.           1 u    2  512  377   65.773    1.538   0.125
 69.222.103.98   .INIT.          16 u  42d 1024    0    0.000    0.000 4000.00

2 useless connections to far-away stratum 1 servers, with the expected result (big offsets):

ntpq -pn 211.51.221.130
      remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
-211.51.221.196  132.163.4.102    2 u  939 1024  377    0.477    0.247   3.266
 222.122.156.27  .INIT.          16 u    - 1024    0    0.000    0.000   0.000
*210.98.16.100   .PPS.            1 u  204 1024  313   10.632    0.772   0.536
 210.98.16.101   .PPS.            1 u  133 1024    1    8.503    1.179   0.001
+58.73.137.250   .GPS.            1 u 1007 1024  377   14.494    1.340   1.971
 123.214.172.15  .RMOT.          16 u    - 1024    0    0.000    0.000   0.000
-211.115.194.21  192.168.18.6     2 u  116 1024  337    7.037    0.349   0.413
-211.115.194.22  192.168.18.10    2 u  172 1024  353    6.562    0.602   0.217
 207.46.130.100  18.26.4.105      2 u  35m 1024    4  177.714   12.724   0.001
-17.254.0.26     17.72.133.54     2 u  164 1024  313  155.963    0.426   0.532
-66.187.224.4    .CDMA.           1 u  180 1024  353  231.988   15.985   6.420
 211.51.221.130  .INIT.          16 u    - 1024    0    0.000    0.000   0.000
+141.223.182.106 .GPS.            1 u  157 1024  333   40.751   13.208   0.863
 220.94.243.15   .INIT.          16 u    - 1024    0    0.000    0.000   0.000

This is the ugliest waste of resources that I have ever seen! But hey, I am sure this guy is going to get a "better average time" this way ;-)
Again, ntpd picks the closest servers (the ones marked with * and +)...

ntpq -pn 211.51.221.196
      remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
*210.98.16.100   .PPS.            1 u 1248 1024  352    9.604    0.677   1.086
 210.98.16.101   .PPS.            1 u 102m 1024  240    7.938   -1.194   1.773
-133.100.9.2     .GPS.            1 u  930 1024  357   75.566   -6.994   3.049
-133.100.9.4     .GPS.            1 u  698 1024  337   85.294   -9.143   4.782
-128.9.176.30    .GPS.            1 u  185 1024  313  200.542   17.062   0.617
-131.107.1.10    .ACTS.           1 u  289 1024  317  197.005    8.113   0.606
 76.168.23.87    .GPS.            1 u  30d 1024    0  176.688   -4.201   0.000
+76.169.239.34   .GPS.            1 u  194 1024  357  162.475    1.416   1.891
-76.169.237.141  .GPS.            1 u  129 1024  373  168.477    4.312   5.029
+66.187.233.4    .CDMA.           1 u  363 1024  313  208.709    1.560   0.667
-66.187.224.4    .CDMA.           1 u  755 1024  357  232.297   13.422  10.351
 192.93.2.20     .GPS.            1 u   9h 1024    0  347.000   16.228   0.000
-193.204.114.232 .UTCI.           1 u  268 1024  317  325.116   -1.847   4.520
 69.25.27.173    .STEP.          16 u    - 1024    0    0.000    0.000   0.000
+17.254.0.26     17.254.1.239     3 u  330 1024  333  157.036    0.744   0.848
-17.254.0.31     17.254.1.240     3 u  283 1024  357  155.709    0.842   0.222
-131.188.3.221   .DCFp.           1 u  147 1024  333  316.183   -9.737   0.492
-131.188.3.222   .GPS.            1 u  238 1024  377  328.625   -1.942   0.815
+204.152.184.72  204.123.2.5      2 u  422 1024  373  154.793    0.261   1.263
-193.62.22.66    .MSF.            1 u   94 1024  337  337.043   18.213   0.756
-130.69.251.23   .GPS.            1 u  289 1024  353  147.774   52.806  19.615
-206.223.0.15    .GPS.            1 u  234 1024  317  277.920   57.979 240.378
+164.67.62.194   .GPS.            1 u  248 1024  377  159.726    0.699   0.629
-150.99.100.26   .GPS.            1 u  262 1024  317  278.472   17.636   0.354
-150.99.100.27   .GPS.            1 u  141 1024  337  292.899   23.496   0.643
-150.100.2.13    .GPS.            1 u  211 1024  351  300.527   24.468   5.643
-150.100.2.14    .GPS.            1 u  120 1024  337  288.019   18.347   7.011
-128.250.33.242  .GPS.            1 u  791 1024  357  332.064    8.468   0.135
-132.163.4.102   .ACTS.           1 u 1911 1024  356  173.633    7.937   2.580
 208.73.212.12   .STEP.          16 u    - 1024    0    0.000    0.000   0.000
 66.92.68.246    .STEP.          16 u    - 1024    0    0.000    0.000   0.000
-198.123.30.132  .GPS.            1 u  201 1024  317  176.657   10.134  10.708
-211.189.50.33   141.223.182.106  2 u  228 1024  377    7.293   -0.521   0.918
 124.138.6.4     220.73.142.71    2 u  16d 1024    0    7.336    0.802   0.000
-203.248.240.103 203.238.139.254  3 u  217 1024  317    7.419   25.402   3.675
-211.115.194.21  192.168.18.6     2 u  288 1024  317    5.924   -0.192   0.398
+211.115.194.22  192.168.18.10    2 u  933 1024  377    6.022    0.555   2.353
 66.27.60.10     .STEP.          16 u    - 1024    0    0.000    0.000   0.000
-207.46.130.100  18.26.4.105      2 u  46m 1024  304  197.536   -0.687   7.585




Aaron Toponce wrote a comment in the comment section below:
"Also, I'm not sure how you're showing so many remote peers in your ntpq(1) output. Maybe this has been a change over the years, but "tos maxclock" in NTP is 10. Anything additional will have a tally code of "#", listed as a backup, and not included in synchronizing the local clock."

Very simple answer: all the ones with a "-" in front of them are discarded by the NTP cluster algorithm anyway because they are out of tolerance, which makes it even more useless to connect to them.

Only the ones with a "+" and the one with a "*" are used by the NTP algorithm and count toward the 10 you mention. So in our example above, our NTP genius connects to 40 servers, but his NTP daemon is only using 7 in this snapshot.

    “ ” – No state indicated for:
        non-communicating remote machines,
        “LOCAL” for this local host,
        (unutilised) high stratum servers,
        remote machines that are themselves using this host as their
        synchronisation reference;
    “x” – Out of tolerance, do not use (discarded by intersection algorithm);
    “-” – Out of tolerance, do not use (discarded by the cluster algorithm);
    “#” – Good remote peer or server but not utilised (not among the 
          first six peers sorted by synchronization distance, ready as 
          a backup source);
    “+” – Good and a preferred remote peer or server 
          (included by the combine algorithm);
    “*” – The remote peer or server presently used as the primary reference;
    “o” – PPS peer (when the prefer peer is valid). The actual system
          synchronization is derived from a pulse-per-second (PPS) signal,
          either indirectly via the PPS reference clock driver or directly 
          via kernel interface.
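
If you want to check this on a box of your own, a quick sketch along these lines (a hypothetical Python helper; it only assumes ntpq is in the path and tallies the first character of each ntpq -pn line) counts how many sources ntpd is actually using:

#!/usr/bin/env python3
# Hypothetical helper: count how many ntpq sources actually contribute
# to the clock ('*' system peer and '+' candidates) versus the rest.
import subprocess
from collections import Counter

# Query the local ntpd; append a hostname to the command to query a remote one.
out = subprocess.run(["ntpq", "-pn"], capture_output=True, text=True).stdout

tallies = Counter(line[0] for line in out.splitlines()[2:] if line.strip())
used = tallies.get("*", 0) + tallies.get("+", 0)
print("sources used by ntpd :", used)
print("discarded or unused  :", sum(tallies.values()) - used)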


Comments (9)
Huh?
1 Sunday, 07 October 2007 17:04
Matt Wronkiewicz
I think you're barking up the wrong tree with this. All the bad examples were polling at low rates, putting minimal load on the stratum 1 servers. Static NTP configurations are self-correcting, as you pointed out in each case, so all the servers got good time. There are reasons to peer with far-away servers besides getting more accurate time. I don't think you can post a list of misconfigured servers without getting an explanation from the admins about what they're trying to do. Feel free to poke at my server (72.245.176.6).
RE: Huh?
2 Monday, 08 October 2007 04:41
Alain Côté
Hello Matt,

 

First "peering with far away servers" is declaring the hosts as peers in ntpd.conf not as servers and it doesn't make sense to peer with far away servers. Peers are usually on the same network and they usually use broadcast packets. Every group of 3 servers in the article graphics could very well be peers.

Even if you meant server (not peer), it doesn't make sense to get time from far-away servers, and you didn't state a valid reason to do it anyway. You have to realize that the apparent offset you get from far-away servers is erroneous, introduced by asymmetrical network delays. People connect to far-away servers, they see differences in the offsets, and they figure they must connect to more far-away servers to get a better average! ;-)

This is a mistake; what you see is not reality. The differences in the offsets are introduced by the network, not by the servers keeping different times! If you follow what is depicted in graphic 1 and experiment a bit, you will come to realize that all properly configured servers, network wise (and hardware wise), return the same time, or really close to it at least (a few microseconds).

One more time: the offsets you see by connecting to far-away servers are false and are introduced by the network; do not trust them. Connecting to far-away servers makes your ntpd switch from server to server in a random way, depending on the random delays of the Internet. When it does that, it slews the clock, and your drift value keeps changing all the time, making your ntpd less accurate. It is constantly trying to synchronize to random time that keeps on changing.

Please experiment a bit with closer servers and come back to tell us the results you get. If you find good servers (stratum is not that important) near you, you will see that all the offsets fall within roughly +-5 ms of each other, as shown in the article (+-1 ms for our server). You will know that you synchronize to close enough servers when all the servers consistently show the same offset.

If you can't achieve this, it will be because your Internet connectivity is not reliable enough, not because the servers really keep different time. If you get larger offsets than that, it is just because the network introduces the errors.

So connecting to far-away servers is definitely useless because it gives you bad data due to the network, and you do not want to feed your ntpd with bad data.

Offsets
3 Monday, 04 November 2013 09:38
Aaron Toponce
I think you're missing something here. Then again, this article is 6 years old, so maybe you've come to understand the offset value a bit better.

The offset is the root mean squared phase in the time reported between this local host and the remote peer or server, not between this local host and the top-level stratum 0. The stratum 1 will have an offset from the stratum 0 time source, whether it's the time it takes the signal to travel over the serial connection or the time it takes the CPU to schedule the synchronization. Due to small latencies and the drift of the local clock, the stratum 1 server will have an offset from the time source.

Now, a stratum 2 server that is communicating with a stratum 1 will also have a latency that needs to be calculated, including the network connection. However, its offset will be determined from the stratum 1 clock, not the stratum 0 clock. So, if the stratum 1 clock is 0.003 milliseconds fast relative to the stratum 0, and the stratum 2 clock is 0.025 milliseconds fast relative to the stratum 1, then the stratum 2 clock is 0.028 milliseconds fast relative to the stratum 0.

Now work your way down the strata to stratum 4. Just because it's reporting an offset of "0.115" milliseconds does not mean that "stratum 4 servers will return just as accurate time as the stratum 1 servers". Not true. Not by a long shot. There's a reason stratum 16 is considered "unsynchronized": it's because offsets add up.

You do have an efficient topology in the first graphic, no doubt. And the shorter the delay in milliseconds, the more accurate NTP can be in synchronizing your clock, but we're talking about nanoseconds here. Even with a delay of 400 milliseconds, the NTP cluster algorithm is extremely complex. Its accuracy in determining the right time with a 400-millisecond delay is within nanoseconds compared to one with a 4-millisecond delay.

There's something to be said for efficiency, and your first graphic illustrates it well, but I would not recommend that people synchronize to stratum 3 and stratum 4 computers, or lower. It's trivial to put a stratum 2 time source in each domain or subnet and have the local workstations or servers be stratum 3 at the lowest. Especially if everything is on the LAN, we're usually talking about latencies of less than a few milliseconds, if the network was rolled out correctly.

Also, I'm not sure how you're showing so many remote peers in your ntpq(1) output. Maybe this has been a change over the years, but "tos maxclock" in NTP is 10. Anything additional will have a tally code of "#", listed as a backup, and not included in synchronizing the local clock.

As a best practice, I would recommend people pick 3-4 low-latency stratum 1 or stratum 2 servers, 3-4 geographically dispersed stratum 1 or stratum 2 servers, and 3-4 additional servers with different reference clocks (if GPS goes offline and all your reference clocks are GPS, you're sunk), using the full 10 peers that NTP will actively support.
Re: Offsets
4 Monday, 04 November 2013 14:39
alainoc9
Aaron Toponce, you wrote:
"The offset is the root mean squared phase in the time reported between this local host and the remote peer or server. Not between this local host and the top level stratum 0."

I am aware of that.

Now here:
+128.4.1.1       .PPS.            1 u  185 137m  377   24.115   -0.048   3.079
+132.246.168.9   .PPS.            1 u  182  68m  375   15.931    0.010  10.944
+64.230.242.45   64.230.242.33    4 u  378 1024  377   25.059    0.305   0.594
-69.156.254.2    132.246.168.164  3 u  394 1024  377   27.438    0.532   0.350
-69.156.254.38   64.26.173.192    4 u  383 1024  377   20.380    0.155   0.437
*64.230.159.74   .GPS.            1 u  339 1024  377   20.087    0.390   0.477

You can see that my local clock offset is within the same range whether I connect to stratum 3, 4, or 1. So, if I look at the offset of my clock compared to a stratum 1 or a stratum 4, it tends to be the same on average, given enough servers as sources. I did extensive testing to come to this conclusion, logging the offsets in a database every 15 minutes, etc.

I understand perfectly what you are saying, though, and for time maniacs or mission-critical applications that rely on the most accurate time possible, down to the nanosecond, you are right. You risk less having a large offset due to extreme cases where all the stratum 4 servers' offsets would add up in the same direction instead of canceling each other out under the law of averages.

For most applications, I still believe my recommendations aren't so wrong.

For example, say you chose to connect to 6 stratum 4 servers that don't get their time from the same stratum 1 server up the chain; I still believe that you won't be off by more than a few milliseconds compared to the setup you suggest.

Due to the law of averages, the additional delays you mention should practically cancel each other out in the end.
Re: Offsets
5 Monday, 04 November 2013 15:04
alainoc9
Aaron,

I just had a look at your NTP server, and just as I described in my article, your connecting to far-away (network wise) stratum 1 servers seems to have exactly the effect I predicted. You get large offsets from far-away servers with large delays; a lot larger than what I get with my server connecting to stratum 4 servers with a few stratum 1 servers as a reference point. Note that all my servers in the reply above are close network wise.

So I guess my recommendation prevails: network proximity is more important than the stratum level ;-)



ntpq -pn jikan.ae7.st
      remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
*198.60.22.240   .GPS.            1 u  540 1024  377    0.489    0.340   0.328
+199.104.120.73  .GPS.            1 u  927 1024  377    1.076    0.226   0.276
-155.98.64.225   .GPS.            1 u 1040 1024  377    2.653    0.683   2.336
-137.190.2.4     .GPS.            1 u  715 1024  377    5.334    0.587   0.359
-131.188.3.221   .DCFp.           1 u  975 1024  377  147.765   -2.789   0.221
-217.34.142.19   .LFa.            1 u  440 1024  377  161.712   -1.849   2.072
-184.22.153.11   .WWVB.           1 u  956 1024  377   65.122   -7.739   0.452
+216.218.192.202 .CDMA.           1 u  825 1024  377   39.080    0.364   0.337
-64.147.116.229  .ACTS.           1 u 1006 1024  377   16.676    3.575   0.092
Re: Offsets
6 Monday, 04 November 2013 15:24
alainoc9
Aaron:

Just one more important detail to make things clearer: all the stratum 3 and 4 servers in my example above follow the recommendations in this article. They are all close, network wise, to their higher-stratum server, with very short delays:
4 -> 10-20 ms -> 3 -> 10-20 ms -> 2 -> 10-20 ms -> 1

That's why they seem more accurate than your stratum 1 servers 140-170 ms away. Again, network proximity is more important. It is exactly what I depicted in my first graphic.
Re: Offsets
7 Monday, 04 November 2013 21:46
Aaron Toponce
It would be nice if there were direct reply links, so the conversation had some sort of threading. Meh.

My point is that you cannot make a 1:1 correlation between offsets and latencies. Time offset is very complex. So, take that copy-and-paste that you gave. Notice anything... odd? If latencies had a direct 1:1 correlation to offset, then why does the ACTS refid server with a 16 ms latency have a 3 ms offset, while the CDMA refid server with a 39 ms latency has a 0.364 ms offset? Or why does the WWVB server, with a 65 ms latency, have a 7.739 ms offset, while the LFa refid server with a 161 ms latency has a 1.849 ms offset?

So, while latencies are a factor contributing to offsets, they aren't the only one. And choosing servers with higher stratum numbers can get you close to the "true time", but they can also be very, very far off.

In my experience, higher-stratum servers are much more unreliable for accurate time than lower-stratum servers. My experience has shown, in practice, that the closer you are in strata to the time source, the more reliable your clock will be.
Re: Offsets
8 Tuesday, 05 November 2013 13:02
Alain Côté
Aaron,
you wrote:
"My point is, that you cannot make a 1:1 correlation with offsets and latencies."

I agree with that, obviously; I never made a 1:1 correlation.

The whole point of the article is that you are better off with 6 stratum 4 servers like this:

4 -> 10-20 ms -> 3 -> 10-20 ms -> 2 -> 10-20 ms -> 1

than with 6 stratum 1 servers 200 ms away.

As I said, I ran extensive benchmarking for a while back then that proved just that: on AVERAGE, not always, but say 90% of the time. This is due to the asymmetrical delays that the NTP protocol can't deal with, and it is pretty well documented in the NTP docs. Without the asymmetrical-delay problem, NTP would be precise to the picosecond even if the servers were 5 whole seconds away! ;-)

The correlation between offsets and latencies is caused by asymmetrical delays, and it is almost 1:1, 90% of the time, but not always. If you had a dedicated wire to a server 5 whole seconds away, with no other traffic going through that wire, you wouldn't get any asymmetrical delays and the offset would tend to 0.

Of course, 6 stratum 1 servers 10-20 ms away are better.

Although NTP doesn't require much bandwidth or resources (more or less negligible), I find that it is a kind of netiquette not to hammer stratum 1 servers.

People should follow my recommendation and use higher-stratum servers close by (4 -> 10-20 ms -> 3 -> 10-20 ms -> 2 -> 10-20 ms -> 1), or fall back on using the NTP pool, although the NTP pool results in sending packets all over different physical networks for nothing ;-)

Take care Aaron.
Re: Offsets
9 Wednesday, 06 November 2013 07:34
alainoc9
Aaron, you wrote:
"Also, I'm not sure how you're showing so many remote peers in your ntpq(1) output. Maybe this has been a change over the years, but "tos maxclock" in NTP is 10. Anything additional will have a tally code of "#", listed as a backup, and not included in synchronizing the local clock."

Very simple answer: all the ones with a "-" in front of them are discarded by the NTP cluster algorithm anyway because they are out of tolerance, which makes it even more useless to connect to them.

Only the ones with a "+" and the one with a "*" are used by the NTP algorithm and count toward the 10 you mention.

So, in the output from your ntpd server above, you are only using 3 remote servers.

Last updated: Thursday, 07 November 2013 21:43
 


