First the pandemic, then train strikes: WFH is here to stay.
IT is renowned – especially by those outside the industry – for being fast-moving and ever-changing.
In some ways this is true – product and service updates occur daily, in some cases several times a day. And from time to time the world of IT really does make an exponential leap forward, often due to component costs falling dramatically – just think in terms of storage capacity and memory, for example. But IT is equally cyclical in nature; almost like fashion at times, features and functionality that are seen as “essential” at some point in time are then cast aside for years before taking pride of place in the next era of IT.
Often there is a trigger point that sparks invention, or re-invention. In 2020 the COVID-19 pandemic took hold, forcing the hand of companies to become totally flexible in the way they provide services to their staff and business partners, not least in terms of where those individuals actually work. In turn, that means supporting working from home (WFH) more than ever before. In some ways, the reluctance to move to increased homeworking over the decades has been both surprising and frustrating, given the huge number of benefits WFH delivers – less traffic and travelling, more productive daytimes, flexibility in combining work and family life, reduced office costs – the list goes on and on… Against that list stands primarily one barrier – human resistance to such a fundamental change: a lack of trust from bosses in staff who are not directly under their noses, and a lack of faith in the individuals themselves to buckle down and work in an environment seen to have many distractions.
But from a technology perspective, there are no issues and no requirement to reinvent IT in order to support a new, and potentially massive, wave of home/remote working. A recently published article in Digitalisation World, “Predicting Life After The Virus”, substantiated the belief that WFH is the future, stating that “home is where the work will be” and that IT sees “remote work as the norm, not the exception for most businesses”. Meanwhile, Aritari has been perfecting technologies designed to both simplify and optimise working from home/remotely – optimising Internet performance and cloud-based deployment, and reducing costly support and management requirements – all elements that allow IT to run as a completely distributed model and fully support remote/homeworkers.
The question is, from an IT management perspective, what do you really need now in order to establish a raft of homeworkers, whether they number dozens or thousands? Moreover, since the changeover is a rapid requirement for many, how do you also keep it simple and easily deployable? However simple and fool-proof your homeworking solution, the reality is that humans tend to panic, so their dependency on technical support and help via remote access will increase, initially at least. And while there is some form of centralised office with workplaces, outgoing access to office-based devices, storage and services is equally important – it is a bidirectional process. Combining reliable and secure remote connectivity with little or no support requirement goes a long way towards easing the WFH move for companies and allowing instant benefits from that move, regardless of whether it was enforced or a chosen path. And, for anyone who still thinks this is “something that will occur in the future”, they had better think again – and quickly! The working world isn’t changing – it has already changed – and companies need to change with it if they haven’t already done so…
Just about every computer user on the planet runs TCP/IP sessions daily – Internet anyone? – and yet many still don’t understand the protocol and how to get the best out of using it.
Understanding Transmission Control Protocol (TCP)
TCP, combined with the Internet Protocol (IP), is the basis of Internet transmission – it defines the basic framework and rules of how data is sent and received. TCP also underpins all the common Internet application protocols, notably (S)FTP, HTTP, CIFS and SMTP.
It is defined as a connection-oriented protocol, meaning that it establishes a connection between applications at each end, so TCP sends and receives packets across a network between each endpoint.
It can break application data into packets that are easier to manage and send across a network.
These packets are then numbered and sent in groups. One of the biggest advantages of TCP is that it establishes stateful connections, with guaranteed packet arrival and integrated network congestion control. However, it has significant downsides too, especially with respect to bulk/big data transfers over IP links where latency and packet loss are prevalent.
How Latency And Packet Loss Impact TCP Sessions
In order to perform reliable data transfers across a network via TCP, the receiving party must send an acknowledgment (ACK) to the sending party confirming the packet was received. These ACKs must be sent in sequential order, and the sender cannot send further data beyond its window until it receives acknowledgment that the earlier packets arrived. The time spent sending a packet and receiving the ACK is measured as Round-Trip Time (RTT). It is one of the primary reasons TCP can be slow – it waits for ACKs instead of transmitting data. Across short distances, ACKs are relatively efficient and don’t overly impact data transmission speeds. But as the distance increases, so does the RTT, and the slower ACK reception causes a dramatic throughput degradation for bulk data transfers.
TCP responds to this by adjusting the acceptable amount of unacknowledged data allowed on the link. If that “acceptable amount” is surpassed, the transfer will stop and wait for an ACK.
In this way, the optimal amount of unacknowledged data in transit should equal the end-to-end bandwidth multiplied by the RTT. This product is known as the bandwidth-delay product. TCP perpetually estimates this value and sets a “TCP window”. When the bandwidth-delay product exceeds the TCP window, the result is “dead air,” which creates even more wait time. Some satellite connections must deal with hundreds, or even thousands, of milliseconds of RTT – less than ideal, even for casual Internet browsing.
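As a quick illustration of the arithmetic (the link speed and RTT below are illustrative assumptions, not values from the text), the bandwidth-delay product can be computed directly:

```python
# Illustrative bandwidth-delay product (BDP) calculation.
# The link speed and RTT are assumed figures for demonstration only.

def bandwidth_delay_product(bandwidth_bps, rtt_seconds):
    """Bytes that must be 'in flight' to keep the link fully utilised."""
    return bandwidth_bps * rtt_seconds / 8  # convert bits to bytes

# Example: a 100 Mbit/s link with 200 ms RTT needs a ~2.4 MiB window.
bdp_bytes = bandwidth_delay_product(100_000_000, 0.200)
print(f"BDP: {bdp_bytes / 1024:.0f} KiB")  # prints "BDP: 2441 KiB"
```

If the TCP window is smaller than this figure, the sender idles between ACKs and the link is never filled.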
Another major issue is network congestion. This typically causes buffer overflow on routers that are unable to handle the load placed on them. For example, a router experiences buffer overflow when it does not have the capacity to accept all the incoming packets, causing packet loss. TCP cannot distinguish between packet loss caused by network congestion and packet loss caused by other factors, such as interference in wireless or satellite networks. Physical structures in the “route” of a wireless or satellite connection cause interference, and ultimately packet loss. TCP will cut the TCP window in half when packet loss is detected, which is too aggressive when inherent interference is present. The ideal solution needs to be able to react to congestion in a less aggressive manner.
Issues With Common TCP-Based Protocols
It is important to understand that speed constraints are not the only issues associated with TCP-based protocols. Common limitations and disadvantages include:
Some HTTP clients and servers impose a size limit of approximately 2GB (a 32-bit limit) on transfers. The file may also be held in the computer’s memory during a transfer: the larger the file, the more resource-intensive the transfer becomes.
FTP does not use encryption by default when transferring files. It can be secured by using FTPS (FTP over SSL/TLS), or replaced with SFTP (the SSH File Transfer Protocol); both encrypt the session.
FTP (or any other TCP-based file transfer protocol) does not give users the ability to adjust bandwidth in order to speed up or slow down in-progress file transfers.
Many TCP-based protocols do not check the integrity of a file after it is transferred.
Size limits are commonly placed on SMTP transfers. This is not practical for sharing large files. If you are using your own individual mail server, however, the limits can be adjusted.
When a paused file transfer is resumed, most TCP-based protocols will blindly append to the file with no checks, which can result in a corrupt file.
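A simple guard against the integrity and resume-corruption issues above is to compare a cryptographic hash of the file on each side after the transfer; a minimal sketch (the chunked read keeps memory use flat for large files):

```python
import hashlib

def file_sha256(path, chunk_size=65536):
    """Hash a file in fixed-size chunks so large files never sit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# After a transfer (or a resumed transfer), compare digests end to end:
# if file_sha256(received_path) != sender_digest, re-transfer the file.
```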
How TCP Can Be Optimised
TCP makes use of two buffers referred to as "windows" to perform transfers: the congestion window on the sender machine and the receive window on the receiver machine. The congestion window is scaled up and down by the sender in reaction to packet loss. On clean links with little or no loss, the window can quickly scale to its maximum value. However, on links experiencing significant loss, the window will quickly lower itself to reduce the re-transmission of redundant data. This is referred to as “congestion control.” The size of the receive window determines how much data the receiving machine can accept at one time before sending an acknowledging receipt back to the sender. When a TCP connection is established, the window sizes are negotiated based on the settings on each machine. The lower value between the two machines will determine the size of the congestion window.
In order to optimise TCP performance, you need to increase the value of the TCP windows. The congestion window must be configured on the sender side, and the receive window must be configured on the receiver side. The congestion window should be tuned to maximize the in-transit data and reduce the “dead air” on your link. The amount of in-transit data needed to maximise the link is called the bandwidth-delay product. The receive window should be increased to match the size of the congestion window on the sending machine.
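On most platforms, the windows discussed above are influenced via the standard socket buffer options; a minimal Python sketch (note that the operating system may clamp the requested sizes to its own configured maximums, e.g. `net.core.wmem_max` on Linux):

```python
import socket

# Request larger per-socket buffers before connecting; the OS may clamp
# these values (on Linux, to net.core.wmem_max / net.core.rmem_max).
BUF_SIZE = 4 * 1024 * 1024  # 4 MiB, sized for a high bandwidth-delay link

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, BUF_SIZE)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, BUF_SIZE)

# Read back what the OS actually granted (Linux reports a doubled value).
print("send buffer:", sock.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF))
print("recv buffer:", sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))
sock.close()
```

Buffer sizes must be set before the connection is established, since the window scale factor is negotiated during the handshake.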
Though tuning TCP can yield increased transfer rates, it can also create problems on networks with packet loss. A single dropped packet invalidates the TCP windows, and when this occurs the entire block is re-transmitted, substantially lowering the throughput of TCP transfers with large window sizes. Another potential issue is that tuning TCP for high-speed transfers may reduce speeds for everyday tasks such as email and web browsing. Changing the window sizes can therefore be a complicated process.
"TCP Accelerate: At Least 20x Faster At 200ms Latency than Standard TCP/IP"
This technology is critical for large global networks in resolving application performance issues, supporting customers’ moves to the Cloud while delivering an improved user experience.
Packet Loss Protection
Another key technology within the Aritari software stack is the ability to overcome the general effects of latency on the Internet. This has become a key driver for many organisations that see the financial benefits of moving to the Internet for the delivery of their network or Cloud applications but are negatively affected by the presence of packet loss. Packet loss is generally caused by network congestion, or by routers dropping packets when their buffers fill. These issues are further exacerbated by how TCP handles the retransmission of packets, resulting in serious bandwidth degradation for customers: as little as 2% packet loss can reduce your available network bandwidth by as much as 75%.

How is Aritari different? Aritari does not use TCP/IP to transfer packets. Because the Aritari VPN tunnel is a unique and proprietary technology that does not use TCP/IP, it can reduce the harmful effects of packet loss arising in the network. Much of this comes down to the requirement that all packets are sent and arrive in order: when they don’t, lost packets are re-transmitted immediately, without waiting.
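The scale of that degradation can be approximated with the well-known Mathis formula, under which standard TCP throughput falls with the square root of the loss rate; a sketch with illustrative values (segment size and RTT are assumptions, not figures from the text):

```python
import math

def tcp_throughput_estimate(mss_bytes, rtt_seconds, loss_rate):
    """Mathis et al. approximation: rate <= MSS / (RTT * sqrt(p)), in bits/s."""
    return (mss_bytes * 8) / (rtt_seconds * math.sqrt(loss_rate))

# Illustrative figures: 1460-byte segments over a 50 ms RTT path.
for loss in (0.0001, 0.02):
    mbps = tcp_throughput_estimate(1460, 0.050, loss) / 1e6
    print(f"loss {loss:.2%}: ~{mbps:.1f} Mbit/s ceiling")
```

Even modest loss rates collapse the achievable ceiling, which is why a transport that handles retransmission differently can recover so much usable bandwidth.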
This seemingly small difference has a considerable effect on bandwidth: with Aritari’s ViBE technology, the impact of packet loss on the network is considerably reduced, improving application delivery and responsiveness.
Aritari’s TCP acceleration is designed to mitigate the performance issues of latency and packet loss that impact both general Internet use and application performance and delivery. Importantly, it has the ability to support any public or private cloud application deployment globally.
How, then, does the technology work? Essentially, by removing the inefficiencies in the TCP protocol responsible for determining the amount of bandwidth in a single user session, managing window sizes regardless of latency conditions and packet loss. In this way, performance is both optimised and consistent, with the company claiming up to 20x improvement in bandwidth usage and performance. From a security perspective, Aritari removes the need to create additional overhead in the form of an encrypted tunnel, such as when using IPsec. In addition to removing a layer of complexity, it also removes the performance hit associated with traditional encryption technologies which, according to Steve Broadhead, director of Broadband-Testing labs, has regularly been measured at up to 50% overhead.
Broadband-Testing recently concluded a series of tests on the Aritari TCP acceleration [insert link to report here], which involved simulating generic web and website access, including dialling in specific latency and packet loss values, in order to simulate different, real-world conditions. While the Aritari technology is designed to optimise all manner of links, including those with extreme latency issues (such as satellite, with 500ms+ typical), the report focused on creating some typical generic Internet environments – quiet, regular and peak – with latency values ranging between zero and 300ms, and packet loss similarly set between zero and 2%. Broadband-Testing ran a series of back-to-back tests, first without the acceleration – native Internet – and then with the Aritari technology engaged, to measure the benefits it offered.
As was observed during the testing, the acceleration benefits were noted equally during general browsing, as well as during more intensive usage, such as file transfers, typical of a contemporary collaborative, online work session. The first test focused on the kind of initial response times that users often find frustrating when using the Internet, whether browsing or using Internet-based applications – the test measuring response time for an Internet connection, bidirectional (send and receive) and then added increasingly challenged conditions and a broad range of simulated user environments.
While at Aritari we conservatively claim up to 20x acceleration, Broadband-Testing recorded acceleration levels of up to 64x, noting that even these results could be significantly improved upon by testing in more challenging Internet connectivity conditions.
Data and application optimisation is nowadays a given – or at least it should be – for any form of remote connectivity.
Why would you not accelerate traffic and reduce bandwidth consumption if you can? However, much of the development in optimisation in recent years has focused on specific application acceleration; the issue here is that the end-to-end nature of the optimisation means that endpoints need to be able to see and decode the data being transferred. For this reason, each protocol – for example, HTTP, FTP and others – has specific support developed for it. In a world where, increasingly, the focus is on secure, encrypted connections, this level of visibility often no longer applies. If it’s encrypted, you can’t see what it is – simple as that.
Popular data deduplication relies on data not being random, so that duplicate segments can be compressed into shorter references rather than being sent multiple times. While the Aritari technology can do this if necessary, most data is now encrypted and therefore appears random as far as the network is concerned, so this technique is not effective in a modern scenario. The Aritari solution can, however, be deployed on virtual machines: for example, you could spin up a “head end” in Azure and use unencrypted connections within a tunnel, using Aritari to encrypt the data as it is sent over the public network – this way, data deduplication can still be used successfully and securely.
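The chunk-and-hash idea behind deduplication can be sketched in a few lines (fixed-size chunks are used here for simplicity; production systems typically use content-defined chunking):

```python
import hashlib

def dedup_segments(data, chunk_size=4096):
    """Replace repeated chunks with short references to the first copy."""
    seen = {}  # chunk digest -> index of first occurrence in the output
    out = []
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        digest = hashlib.sha256(chunk).digest()
        if digest in seen:
            out.append(("ref", seen[digest]))  # duplicate: send a reference
        else:
            seen[digest] = len(out)
            out.append(("data", chunk))        # new chunk: send it whole
    return out

# Repetitive data dedups well; encrypted data looks random and does not.
stream = b"A" * 8192 + b"B" * 4096 + b"A" * 4096
print([kind for kind, _ in dedup_segments(stream)])  # ['data', 'ref', 'data', 'ref']
```

Encrypt the same stream first and every chunk digest becomes unique, which is exactly why deduplication fails on encrypted traffic – and why encrypting after deduplication, inside the tunnel, preserves the benefit.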
But generic TCP traffic, not least for Internet browsing, is still the primary target for optimisation, and this requires a different approach in the modern, securely connected world. Aritari has the solution: its TCP Acceleration (TCPA) accelerates all such traffic significantly, whether encrypted or not, and supports any public or private cloud application deployment globally, so regardless of whether the approach is hybrid or pure-cloud, the solution is in place. Moreover, the increase in remote users globally means many will be using less-than-perfect Internet connectivity. The Aritari technology is designed to be more effective the more challenging the connection: as common problems such as latency and packet loss increase, so too does the effectiveness of the Aritari solution.
So, viewed from an Aritari perspective, generic TCP acceleration applies to all TCP traffic – that being most network-based traffic – without needing to be aware of the application or the information contained within it. As noted, TCPA will accelerate all such traffic significantly, whether encrypted or not, but especially in certain common circumstances, such as:
Web browsing speed is further increased because the client believes it has connected to the server much more quickly than it otherwise would (and so can send the initial request far sooner). From a user perspective this happens in the order of a few milliseconds, rather than after the entire latency of the link to the server.
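The benefit described here amounts to answering the TCP handshake locally; the effect on time-to-first-request can be illustrated with simple arithmetic (both RTT figures below are assumptions for illustration):

```python
# Illustrative timing: a TCP handshake costs one round trip before the
# client can send its first request. Both RTT values are assumed figures.

LAN_RTT_MS = 2    # client <-> local accelerator endpoint (assumption)
WAN_RTT_MS = 300  # client <-> distant server, end to end (assumption)

native_first_request = WAN_RTT_MS       # SYN/ACK crosses the whole path
accelerated_first_request = LAN_RTT_MS  # handshake answered locally

print(f"native: first request sent after ~{native_first_request} ms")
print(f"accelerated: first request sent after ~{accelerated_first_request} ms")
```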
So, in combination with its patented VoIP optimisation, Aritari now offers a genuinely unique offering in the world of Internet optimisation, supporting both traditional optimisation and redundancy methods, but equally designed to optimise the “new normal” world of connectivity, securely and regardless of the network type, being completely flexible and scalable.