
Effects of Round Trip Time and Bandwidth on Performance

Knowledge Article Number 000198667
Description When diagnosing system performance issues, the System Performance team will likely ask you for ping and tracert results in order to determine the overall speed of your connection and the path your requests take to and from the data center. In most instances this information is more significant for performance issues than the bandwidth of your connection.

To explain why we ask for ping and tracert results but usually do not ask about bandwidth, it is helpful to start with a few definitions and analogies.

Round Trip Time (RTT)
This is the cumulative time it takes for a data packet to leave your computer, travel to the data center, and return. The round trip time has a theoretical minimum of:

(round trip distance)/(speed of light in the communication medium)
For modern fiber optic cables this is approximately 122,500 miles per second (almost 5 times around the planet per second). In reality we will never achieve this speed, because the data has to make hops along the way, and each hop adds delay as the signals are processed and relayed.
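
As a rough illustration (not part of the original article), the sketch below plugs a hypothetical round trip distance into the formula above; the distance is an assumed example value, not a measured client-to-data-center figure.

SPEED_IN_FIBER_MILES_PER_SEC = 122_500  # roughly two-thirds the speed of light in a vacuum

def min_rtt_ms(round_trip_distance_miles: float) -> float:
    """Theoretical minimum round trip time in milliseconds, ignoring hop and processing delays."""
    return round_trip_distance_miles / SPEED_IN_FIBER_MILES_PER_SEC * 1000

# Hypothetical example: a client about 2,500 miles from the data center (5,000 miles round trip).
print(f"{min_rtt_ms(5000):.1f} ms")  # about 40.8 ms before any relay delays are added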

Bandwidth
This is the maximum cumulative amount of data that can be sent through your connection in any given second.
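
A minimal sketch of that definition, using a hypothetical file size and link speed that are not from the article: the transfer time for a single file is just its size divided by the bandwidth, independent of round trip time.

def transfer_time_sec(file_size_megabytes: float, bandwidth_megabits_per_sec: float) -> float:
    """Seconds needed to push a file through the link, ignoring RTT and protocol overhead."""
    file_size_megabits = file_size_megabytes * 8
    return file_size_megabits / bandwidth_megabits_per_sec

print(transfer_time_sec(1.0, 10))   # a 1 MB file on a 10 Mbps link: about 0.8 seconds
print(transfer_time_sec(0.05, 10))  # a 50 KB page resource: about 0.04 seconds, so RTT usually dominates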

One good way to describe the interaction of the two terms is to think of a traffic system.

In this analogy the round trip time is the time it takes you to get from one traffic light to the next. This accounts for the time needed for the light to change (similar to the processing time at each relay station) and the speed limit (the data transfer speed, which is normally not a limiting factor). Bandwidth in this analogy is how many lanes the road has. If you imagine that each request you make is a car or cars on the road, a large file would be a car that takes up multiple lanes.

As with a real traffic system, occasionally traffic jams will occur at the lights, forcing you to wait. In a computer network this is what we call "Blocking." Using this analogy, what we are looking for when we ask for a tracert is essentially the route that your car takes and how fast you get through each light, while a ping tells us how long the entire trip takes for the car to leave and come back.
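
If you want a quick feel for your own round trip time without the full ping or tracert output, the sketch below (my illustration, not the diagnostics the team asks for) approximates RTT by timing a TCP handshake; real ping uses ICMP instead, and "example.com" is only a placeholder host.

import socket
import time

def approx_rtt_ms(host: str, port: int = 443, timeout: float = 3.0) -> float:
    """Time a TCP connection setup, which takes roughly one round trip."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass  # the handshake has completed once the connection is established
    return (time.perf_counter() - start) * 1000

print(f"{approx_rtt_ms('example.com'):.1f} ms")  # placeholder host; substitute the server you care about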

To further build out this analogy, imagine that you are building a house. Your foreman (your computer) is on a job site with a pre-made foundation and a set of blueprints (the web browser), but he needs lumber to build the frame of the house, so he sends his courier (a data request packet) to the hardware store (the data center) to find and buy (the time the server spends processing the request) the right lumber (the HTML page). The foreman has to wait for the courier to return before the construction crew can get to work.

During the construction phase the foreman sends the courier out again, but this time to pick up drywall, siding, and shingles (the CSS information). Once that is done, the foreman again sends the courier out to pick up any paint/finish/carpet/tile (images/animation) that is needed. Since these can be rather bulky, your courier may either have to make more than one trip or take a bigger car to pick them up. Once that is done, the owners of the house may request to have the yard landscaped and a sprinkler system added (Apex and VisualForce), so the courier is sent out again.

That example, while basic, is fairly accurate.
Most files that the server sends are rather small, so (returning to the traffic analogy) the request doesn't need to take up many "lanes." Some requests take longer to process, like dashboards or reports, but the resulting data packets (what is sent back to the job site) are still rather small. Furthermore, if you have a fairly complex internal network (think of a large residential community), most of the time spent on each courier trip can be attributed to the courier just trying to reach a main road (on a tracert, the intersection with the main road is normally the line that reads XXX.XXX.0.1). Since each page request can take many back-and-forth trips depending on the complexity of the page, the short time at each hop can add up significantly. For example, if it takes 200ms just to make a round trip to and from the main road and a typical page load requires 5 or more requests, your internal network alone has added a second or more to your page load time.
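
The arithmetic in that example can be sketched directly; the 200ms hop time and the request count are the illustrative numbers from the paragraph above, not measured data.

def added_latency_ms(rtt_ms: float, sequential_requests: int) -> float:
    """Total time spent just travelling back and forth, ignoring server processing time."""
    return rtt_ms * sequential_requests

print(added_latency_ms(200, 5))  # 1000 ms: the internal network alone adds a full second to the page load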

Finally, if you and all your coworkers are trying to furnish a number of different houses at the same time, all of your respective couriers will spend more time waiting at stop lights than they will traveling. It is only in this specific scenario that bandwidth becomes relevant to the discussion of performance issues.
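
To put rough numbers on that contention scenario (hypothetical link speed and user count, not figures from the article), the sketch below splits one office link evenly among simultaneous transfers.

def per_user_transfer_time_sec(file_size_mb: float, link_mbps: float, concurrent_users: int) -> float:
    """Seconds per transfer when the office link is shared evenly by all active users."""
    per_user_mbps = link_mbps / concurrent_users
    return (file_size_mb * 8) / per_user_mbps

print(per_user_transfer_time_sec(1.0, 100, 1))   # one user on a 100 Mbps link: 0.08 seconds
print(per_user_transfer_time_sec(1.0, 100, 50))  # fifty simultaneous users: 4 seconds each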
