Accelerating TCP Sessions – Aritari's Solution

5/18/2022

Just about every computer user on the planet runs TCP/IP sessions daily – Internet anyone? – and yet many still don't understand the protocol or how to get the best out of it.

Understanding the Transmission Control Protocol (TCP)
TCP, combined with the Internet Protocol (IP), is the basis of Internet transmission – it defines the basic framework and rules of how data is sent and received. TCP also forms the framework of all common Internet protocols, notably (S)FTP, HTTP, CIFS and SMTP.
It is defined as a connection-oriented protocol, meaning that it establishes a connection between the applications at each end and then sends and receives packets across the network between those endpoints.
It breaks application data into packets that are easier to manage and send across a network.
These packets are then numbered and sent in groups. One of the biggest advantages of TCP is that it establishes stateful connections, with guaranteed packet arrival and integrated network congestion control. However, it has significant downsides too, especially with respect to bulk data transfers over IP links where latency and packet loss are prevalent.
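As a rough illustration of this connection-oriented, packet-based behaviour, here is a minimal sketch in Python (the loopback address and port are arbitrary choices for the example, not anything Aritari-specific):

```python
import socket

# Illustrative address only - any free local port would do.
HOST, PORT = "127.0.0.1", 5000

def run_server():
    """Receiver: accept one connection and report how much data arrived."""
    with socket.create_server((HOST, PORT)) as srv:
        conn, _ = srv.accept()                  # TCP three-way handshake completes here
        with conn:
            data = b""
            while chunk := conn.recv(4096):     # TCP has already ACKed these packets underneath
                data += chunk
            print(f"received {len(data)} bytes")

def run_client(payload: bytes):
    """Sender: open a connection and stream application data as TCP segments."""
    with socket.create_connection((HOST, PORT)) as sock:
        sock.sendall(payload)                   # TCP splits, numbers and retransmits as needed

# Run run_server() in one process, then run_client(b"hello" * 1000) in another.
```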

How Latency And Packet Loss Impact TCP Sessions
In order to perform reliable data transfers across a network via TCP, the receiving party must send an acknowledgment (ACK) to the sending party confirming that each packet was received. These ACKs are sent in sequential order, and the sender can only have a limited amount of unacknowledged data outstanding before it must pause and wait for earlier packets to be acknowledged. The time spent sending a packet and receiving the ACK is measured as the Round-Trip Time (RTT). It is one of the primary reasons TCP can be slow – it waits for ACKs instead of transmitting data. Across short distances, ACKs are relatively efficient and don't overly impact data transmission speeds. But as the distance increases, so does the RTT, and the slower ACK reception causes severe throughput degradation for bulk data transfers.
TCP responds to this by adjusting the acceptable amount of unacknowledged data allowed on the link at any one time. If that amount is surpassed, the transfer will stop and wait for an ACK.
Ideally, the amount of unacknowledged data in transit should equal the end-to-end bandwidth multiplied by the RTT. This product is known as the bandwidth-delay product. TCP perpetually estimates this value and sets a "TCP window". When the bandwidth-delay product exceeds the TCP window, the result is "dead air", which creates even more wait time. Some satellite connections must deal with hundreds, or even thousands, of milliseconds of RTT – less than ideal, even for casual Internet browsing.
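To make the bandwidth-delay product concrete, here is a small back-of-the-envelope sketch (the 100 Mbit/s link and 200 ms RTT are illustrative assumptions, not measured figures):

```python
# Bandwidth-delay product: how much data must be "in flight" to keep the link full.
link_bandwidth_bps = 100_000_000      # 100 Mbit/s link (illustrative)
rtt_seconds = 0.200                   # 200 ms round trip, e.g. a satellite or long-haul path

bdp_bytes = link_bandwidth_bps / 8 * rtt_seconds
print(f"bandwidth-delay product: {bdp_bytes / 1_000_000:.1f} MB")   # 2.5 MB

# If the negotiated TCP window is smaller than the BDP, throughput is capped at window / RTT.
tcp_window_bytes = 64 * 1024          # classic 64 KB window without window scaling
max_throughput_bps = tcp_window_bytes * 8 / rtt_seconds
print(f"throughput with a 64 KB window: {max_throughput_bps / 1_000_000:.2f} Mbit/s")  # ~2.6 Mbit/s
```

With a 64 KB window the 100 Mbit/s link delivers under 3 Mbit/s at 200 ms RTT – the rest of the time is the "dead air" described above.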
Another major issue is network congestion. Congestion typically causes buffer overflow on routers: when a router does not have the capacity to accept all of the incoming packets, it drops the excess, causing packet loss. TCP cannot distinguish between packet loss caused by network congestion and packet loss caused by other factors, such as interference in wireless or satellite networks. Physical structures in the "route" of a wireless or satellite connection cause interference, and ultimately packet loss. TCP will cut the TCP window in half whenever packet loss is detected, which is too aggressive when inherent interference is present. The ideal solution needs to react to congestion in a less aggressive manner.
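One rough way to see how sensitive standard TCP is to loss (this uses the well-known Mathis approximation purely as an illustration; it is not an Aritari measurement) is to estimate steady-state throughput from segment size, RTT and loss rate:

```python
import math

def mathis_throughput_bps(mss_bytes: float, rtt_seconds: float, loss_rate: float) -> float:
    """Mathis et al. approximation of steady-state TCP throughput under random loss."""
    c = math.sqrt(3.0 / 2.0)  # constant from the model, roughly 1.22
    return (mss_bytes * 8 / rtt_seconds) * (c / math.sqrt(loss_rate))

# Illustrative figures: 1460-byte segments on a 200 ms path.
for loss in (0.0001, 0.001, 0.01, 0.02):
    mbps = mathis_throughput_bps(1460, 0.200, loss) / 1_000_000
    print(f"loss {loss:>6.2%}: ~{mbps:.2f} Mbit/s")
```

Even in this simplified model, moving from 0.01% to 2% loss collapses achievable throughput by more than an order of magnitude, because every detected loss halves the window.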

Issues With Common TCP-Based Protocols
It is important to understand that speed constraints are not the only issues associated with TCP-based protocols. Common limitations and disadvantages include:
HTTP:
Many HTTP clients and servers limit a single transfer to approximately 2 GB, and the file is typically held in the computer's memory during the transfer. The larger the file, the more resource-intensive the transfer becomes.

FTP:
FTP does not use encryption by default when transferring files. It can be secured with SSL/TLS (FTPS), or replaced with SFTP, which runs over SSH; both alternatives provide encrypted transfers.

Bandwidth Prioritisation:
FTP (or any other TCP-based file transfer protocol) does not give users the ability to adjust bandwidth in order to speed up or slow down file transfers that are in progress.

Integrity Checking:
Many TCP-based protocols do not check the integrity of a file after it is transferred.

SMTP:
Size limits are commonly placed on SMTP transfers, which is not practical for sharing large files. If you are using your own mail server, however, the limits can be adjusted.

Blind Resuming:
When a file transfer is paused and resumed, most TCP-based protocols will blindly append to the file with no checks, which can result in a corrupt file; a simple checksum comparison, as in the sketch after this list, guards against both this and the integrity issue above.
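Here is a minimal sketch of that kind of post-transfer integrity check in Python (the file name and the published checksum are placeholders for the example):

```python
import hashlib

def sha256_of(path: str) -> str:
    """Stream the file through SHA-256 so large files never sit wholly in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            digest.update(chunk)
    return digest.hexdigest()

# 'received_file.bin' and the expected hash are placeholders: the sender would
# publish the checksum out-of-band (e.g. alongside the download link).
expected = "<checksum published by the sender>"
if sha256_of("received_file.bin") != expected:
    raise ValueError("transfer corrupted or incomplete - re-fetch the file")
```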

How TCP Can Be Optimised
TCP makes use of two buffers, referred to as "windows", to perform transfers: the congestion window on the sender machine and the receive window on the receiver machine. The congestion window is scaled up and down by the sender in reaction to packet loss. On clean links with little or no loss, the window can quickly scale to its maximum value. However, on links experiencing significant loss, the window will quickly lower itself to reduce the re-transmission of redundant data. This is referred to as "congestion control". The size of the receive window determines how much data the receiving machine can accept at one time before sending an acknowledgment back to the sender. When a TCP connection is established, the window sizes are negotiated based on the settings on each machine, and the lower of the two values determines how much unacknowledged data the sender can have in flight.
In order to optimise TCP performance, you need to increase the value of the TCP windows. The congestion window must be configured on the sender side, and the receive window must be configured on the receiver side. The congestion window should be tuned to maximise the in-transit data and reduce the "dead air" on your link; the amount of in-transit data needed to fill the link is, as above, the bandwidth-delay product. The receive window should be increased to match the size of the congestion window on the sending machine.
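In practice the per-connection windows track the socket buffer sizes, so one way to experiment with larger windows is to raise SO_SNDBUF on the sender and SO_RCVBUF on the receiver. The sketch below uses an assumed target of one bandwidth-delay product; operating systems commonly clamp these buffers unless system-wide limits (for example net.core.rmem_max / net.core.wmem_max on Linux) are also raised:

```python
import socket

# Target window of roughly one bandwidth-delay product (illustrative: 100 Mbit/s x 200 ms).
BDP_BYTES = 2_500_000

# Sender side: enlarge the send buffer so the congestion window has room to grow.
sender = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sender.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, BDP_BYTES)

# Receiver side: advertise a receive window large enough to match the sender.
receiver = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
receiver.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, BDP_BYTES)

# Buffers must be set before connect()/listen(); the values actually granted can be
# checked afterwards (the kernel may clamp or adjust the requested size).
print(sender.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF))
print(receiver.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))
```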
Though tuning TCP can yield increased transfer rates, it can also create problems on networks with packet loss. A single dropped packet can invalidate a whole window of data, and when this occurs large blocks are re-transmitted, substantially lowering the throughput of TCP transfers with large window sizes. Another potential issue is that tuning TCP for high-speed transfers may reduce speeds for everyday tasks such as email and web browsing. Changing the window sizes can therefore be a complicated process.

"TCP Accelerate: At Least 20x Faster At 200ms Latency than Standard TCP/IP"
This technology is critical for large global networks in resolving application performance issues, and it supports customers' moves to the Cloud whilst delivering an improved user experience.

Packet Loss Protection
Another key technology within the Aritari software stack is the ability to overcome the general effects of packet loss and latency on the Internet. This has become a key software driver for many organisations that see the financial benefits of moving to the Internet for the delivery of their network or Cloud applications but are negatively affected by the presence of packet loss. Packet loss is generally caused by errors on the link or by network routers dropping packets when their buffers fill. These issues are further exacerbated by how the TCP protocol handles the retransmission of packets, resulting in serious bandwidth degradation for customers. In fact, only 2% network packet loss can reduce your available network bandwidth by as much as 75%.

How is Aritari different? Aritari does not use TCP/IP to transfer packets. Because the Aritari VPN tunnel is a unique and proprietary technology that does not rely on TCP/IP, we are able to reduce the harmful effects of packet loss arising in the network. This is largely because we require all packets to be sent and to arrive in order, meaning that when they don't, we immediately re-transmit the lost packets without waiting.
This seemingly small difference has a considerable effect on bandwidth, as can be seen in the accompanying graph: with Aritari's ViBE technology, the impact of packet loss on the network is considerably reduced, improving application delivery and responsiveness.