
Originally Posted by PsychoFish
I'm assuming it's because they're seeing everything as <1 ms
The time delay of light in a length of fiber is easy to approximate. Since light travels down the fiber in approximately a straight line, we can use the formula D = T * V, where D is the distance the light travels, T is the time it takes, and V is the velocity of light in the fiber. The speed of light inside fiber is close to 2*10^8 m/s (roughly two-thirds of its speed in a vacuum, because of the glass's refractive index). Solving for time gives T = D / V, so the delay over 2 kilometers (2,000 meters) of fiber is 2,000 / (2*10^8) = 10 microseconds (10*10^-6 seconds).
Which is why I prefer using microseconds when reporting fiber latency. It's still fun when you say 9 μs and people go "damn, that's slow," when it's really 0.009 ms, or just <1 ms if you're using standard ping commands.
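To make the arithmetic concrete, here's a minimal Python sketch of the same T = D / V calculation. The function name and the 2*10^8 m/s constant are just illustrative assumptions for this post, not anything from a real tool:

```python
# Minimal sketch: one-way propagation delay of light in optical fiber.
# Assumes light in fiber travels at roughly 2e8 m/s (the vacuum speed
# of light divided by the glass's refractive index of about 1.5).
SPEED_IN_FIBER_M_PER_S = 2e8

def fiber_delay_us(distance_m: float) -> float:
    """Return the one-way propagation delay in microseconds (T = D / V)."""
    delay_s = distance_m / SPEED_IN_FIBER_M_PER_S
    return delay_s * 1e6  # seconds -> microseconds

for km in (2, 100, 1000):
    print(f"{km:>5} km of fiber ~ {fiber_delay_us(km * 1000):,.0f} us one way")
```

Running it gives 10 μs for the 2 km example above, 500 μs for 100 km, and 5,000 μs for 1,000 km.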