Bandwidth Required for Users
Salesforce is designed to use as little bandwidth as possible, so that the site performs adequately over high-speed, dial-up, and wireless Internet connections.
- While the average page size is on the order of 90KB, Salesforce supports compression as defined in the HTTP 1.1 standard, compressing the HTML content before it is transmitted across the Internet to a user's computer. Because pages contain little image content, compression often reduces the transmitted data to as little as 10KB per page viewed. The site was designed with minimum bandwidth requirements in mind, hence the extensive use of color coding instead of images. The average user views roughly 120 pages per day. However, it is best to measure any page that has been customized, especially if Visualforce components have been added to the page, to get an accurate measurement of the page size.
- Our application is stateless; unlike traditional client-server applications such as Outlook, there is no background communication once a page loads. Therefore, once the page loads there are no additional bandwidth requirements until a user queries or writes information to Salesforce.
- In practice, we have found that other commonly used programs place a much higher demand on Internet bandwidth. Through working with our customers, we have also found that email (business and personal), email attachments, news, streaming video, and stock updates place a much greater strain on the available bandwidth. We therefore recommend that customers measure all activities to ensure they are evaluating the holistic demand on their network services. An example would be an Account Executive sending a 7MB marketing brochure or PowerPoint presentation to a customer.
- The formula "Peak bandwidth / number of users = average bandwidth per user" does not accurately portray average bandwidth usage per Salesforce user. In aggregate, Salesforce handles considerably more transactions per second across all of our customers than any one individual customer would see from their end, since not all users are actively loading pages simultaneously.
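As a hedged back-of-envelope check using the figures above (~10KB per compressed page, ~120 pages per user per day), the sustained per-user demand is far below what a naive peak-bandwidth division would suggest; the 8-hour workday is an assumption for illustration:

```python
# Back-of-envelope: average sustained bandwidth per user, using the
# figures above: ~10 KB per compressed page, ~120 pages viewed per day.
pages_per_day = 120
page_kb = 10                  # assumed compressed page size in kilobytes
workday_seconds = 8 * 3600    # assumption: views spread over an 8-hour day

total_bits = pages_per_day * page_kb * 1024 * 8
avg_bps = total_bits / workday_seconds
print(f"average sustained demand: {avg_bps:.0f} bit/s "
      f"(~{avg_bps / 1000:.2f} kbit/s per user)")
```

Even with generous assumptions, the sustained average works out to well under 1 kbit/s per user, which is why measuring actual peak usage matters more than dividing link capacity by headcount.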
In short, it is difficult to specify customer bandwidth because of the nature of the Internet and individual corporate usage. Network latency, peering issues, bandwidth at upstream providers, users using their Internet connections for purposes besides Salesforce.com, etc. all affect the perceived performance of the connection and the amount of bandwidth required to keep performance adequate. Salesforce.com recommends engaging a networking professional to help measure, allocate, and monitor appropriate bandwidth and networking resources.
HTTP 1.0 versus HTTP 1.1
Typical web pages (HyperText Markup Language (HTML) documents) contain many embedded objects; today, twenty or more embedded objects are quite common. This large number of embedded objects represents a change from the environment for which the Web transfer protocol, the Hypertext Transfer Protocol (HTTP), was originally designed.
As a result, HTTP/1.0 handles multiple requests to the same server inefficiently, creating a separate TCP connection for each object. Each embedded object is independent and is retrieved (or validated for change) separately. The common behavior for a web client, therefore, is to fetch the base HTML document and then immediately fetch the embedded objects, which are typically located on the same server.
The recently released HTTP/1.1 standard was designed to address this problem by encouraging multiple transfers of objects over one connection.
HTTP/1.0 opens and closes a new TCP connection for each operation. Since most Web objects are small, this practice means a high fraction of packets are simply TCP control packets used to open and close a connection. Furthermore, when a TCP connection is first opened, TCP employs an algorithm known as slow start. Slow start uses the first several data packets to probe the network to determine the optimal transmission rate. Again, because Web objects are small, most objects are transferred before their TCP connection completes the slow start algorithm. In other words, most HTTP/1.0 operations use TCP at its least efficient, resulting in congestion and unnecessary overhead.
HTTP/1.1 leaves the TCP connection open between consecutive operations. This technique, called "persistent connections," both avoids the cost of multiple opens and closes and reduces the impact of slow start. Persistent connections are more efficient than the HTTP 1.0 practice of running multiple short TCP connections in parallel. NOTE: These persistent connections are only open for the duration of the page load; they do not remain open in the background.
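The connection reuse described above can be observed directly. This is a minimal sketch, not part of the Salesforce service: it starts a throwaway local HTTP/1.1 server and issues two requests over one client connection, confirming that the same TCP socket serves both.

```python
import threading
import http.client
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # enables persistent connections

    def do_GET(self):
        body = b"<html>ok</html>"
        self.send_response(200)
        # HTTP/1.1 keep-alive requires a known body length
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

server = ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

conn = http.client.HTTPConnection("127.0.0.1", port)
conn.request("GET", "/first")
r1 = conn.getresponse()
r1.read()                 # drain the body so the connection is reusable
sock1 = conn.sock
conn.request("GET", "/second")  # reuses the same TCP connection
r2 = conn.getresponse()
r2.read()
sock2 = conn.sock
print(sock1 is sock2)     # True: one connection served both requests
server.shutdown()
```

Under HTTP/1.0-style behavior, the second request would have paid for a fresh TCP handshake and a new slow-start ramp instead.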
HTTP/1.1 also enables transport compression of data types, so that clients can retrieve HTML (or other) documents in compressed form; HTTP/1.0 does not have sufficient facilities for transport compression.
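A minimal sketch of the gain transport compression can deliver on markup-heavy pages; the HTML string here is synthetic and stands in for a repetitive, table-heavy page, so the exact ratio is illustrative rather than representative of any real Salesforce page:

```python
import gzip

# Synthetic, repetitive HTML standing in for a table-heavy page.
html = ("<tr class='data-row'><td>Account</td><td>Open</td></tr>\n" * 2000).encode()

compressed = gzip.compress(html)
ratio = len(compressed) / len(html)
print(f"{len(html)} bytes -> {len(compressed)} bytes ({ratio:.1%} of original)")
```

Highly repetitive markup like this compresses extremely well, which is consistent with the ~90KB-to-10KB reduction described earlier for pages that are mostly text and markup rather than images.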
The following study showed that aggressive use of additional compression could save almost 40% of the bytes sent via HTTP:
Jeffrey C. Mogul, Fred Douglis, Anja Feldmann, and Balachander Krishnamurthy. Potential benefits of delta encoding and data compression for HTTP. In Proc. SIGCOMM '97 Conference, pages 181-194, Cannes, France, September 1997. ACM SIGCOMM.
Therefore, we recommend that customers always use browsers that adhere to the HTTP 1.1 standard, as it creates a number of efficiencies when using our service.