[UPDATE: We received some interesting feedback from Internode managing director Simon Hackett in response to this article. We've posted an update at the bottom of the article below].
A common criticism of the NBN is that it’s only a solution for the last mile: it provides a super-fast on-ramp to the internet, but once the data hits the wider network it won’t move much faster than it does today. Or to put it another way, if I can’t max out my ADSL 2+ connection now, how is the internet going to handle 100Mbps?
It’s true that there are a lot of places that the data connection between you and the server you’re trying to access can break down or become congested. These include:
The “last mile” link between the premises or user and the exchange/node/cell/point of interconnect.
The link between the exchange/node/cell/POI and the internet service provider.
The ISP’s peering and upstream links with the internet.
The various internet backbones and network routers that carry the data between source and destination. In an Australian context, the most important are the international backbones carrying data to and from the US, Asia and Europe.
The web server, which may be processor or bandwidth constrained.
The NBN handles the first of these admirably, but the other points have to be up to snuff to support it. The thing is, they certainly can be. Below we’ve listed just four of the ways the rest of the internet can keep up with the NBN.
1. Unmetering uploads
Encouraging peer to peer transactions will be a key mechanism for reducing the strain on international links. To this end, ISPs should start unmetering uploads.
Although it often goes unmentioned, a huge proportion of internet traffic consists of peer-to-peer transfers using tools like BitTorrent. A report from Envisional last year revealed that BitTorrent alone accounts for roughly 17.9% of global internet traffic. That’s down from previous estimates as cyberlocker sites like RapidShare and video sites like YouTube and Netflix take up a bigger chunk of the data, but it’s still substantial.
Yet thanks to upload metering and limited upload speeds, most Australian users end up downloading their torrents and other P2P content from international sources, which in turn strains those limited international links.
It doesn’t have to be that way. The BitTorrent protocol includes a feature called Local Peer Discovery, which favours local sources over international ones. With the NBN’s increased upload speeds (5Mbps on a 25Mbps download service, 20Mbps on a 50Mbps service and 40Mbps on a 100Mbps account), it’s certainly possible that Australia could largely service its own P2P user base.
But in order for the numbers to work out, people need to seed – that is, continue to upload once their download is complete. Ideally, everybody seeds at least 100% of the file size, which on a 50Mbps or 100Mbps account nominally means they keep uploading for 2.5 times longer than the file takes to download (upload speeds on those tiers are 40% of download speeds).
But with metered uploads, people tend to choke off their torrents before that happens – because seeding costs money. Remove that restriction, and the internet will be better for it.
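To put numbers on that, here’s a quick back-of-the-envelope calculation in Python. It’s a sketch only: the file size is hypothetical, and real-world transfers carry protocol overhead that this ignores.

```python
# Back-of-the-envelope: how long does seeding 100% of a file take,
# relative to the download, on each NBN tier quoted above?
# File size is hypothetical; protocol overhead is ignored.

FILE_SIZE_MB = 700  # e.g. a typical Linux ISO

def transfer_seconds(size_megabytes, rate_mbps):
    """Time to move a file at a given line rate (megabits per second)."""
    return size_megabytes * 8 / rate_mbps

for down, up in [(25, 5), (50, 20), (100, 40)]:
    d = transfer_seconds(FILE_SIZE_MB, down)
    s = transfer_seconds(FILE_SIZE_MB, up)
    print(f"{down}/{up}Mbps tier: download {d:.0f}s, "
          f"full seed {s:.0f}s ({s / d:.1f}x the download time)")
```

On the 50Mbps and 100Mbps tiers the ratio works out to 2.5x, as above; on the 25Mbps tier, with its 5Mbps upload, a full seed takes five times as long as the download.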
Unfortunately, the NBN plans released by the major ISPs so far generally follow whatever the upload model was for ADSL accounts. In most cases that means metered uploads, which in turn weakens local P2P exchange.
2. CDNs and transparent caches
Most readers will probably never have heard of Akamai, EdgeCast Networks and Limelight Networks, but these companies are already responsible for an enormous proportion of internet traffic. They, and companies like them, are the reason that Microsoft’s servers don’t melt down every Patch Tuesday and that Mozilla.com doesn’t fall over every time a new version of Firefox is released.
These content distribution networks (CDNs) mirror major sites in data centres around the world, sparing international and national backbones the burden of carrying thousands or millions of instances of internet content. When you download a Windows Update, you’re probably not downloading it from Microsoft; you’re probably getting it from Level 3, Akamai or Limelight.
CDNs are going to be crucial to maintaining local speeds in the NBN future, especially for media-heavy sites. Web service providers hoping to exploit the power of 100Mbps connections will undoubtedly need to employ these networks, as doing so ensures full speeds can be delivered to end users. A number of Australia’s larger ISPs – including Telstra, AAPT and iiNet/Internode – offer CDN services.
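You can see a CDN at work with a simple DNS lookup: the same hostname resolves to different edge servers depending on where in the world you ask from. A minimal sketch in Python (the hostname is just an example of a download host that has historically been CDN-fronted; substitute any large site’s download host):

```python
# Resolve a CDN-fronted hostname and print the addresses returned.
# From Australia these will typically be nearby edge nodes, not servers
# in Redmond - that's the CDN steering you to a local copy.
import socket

hostname = "download.windowsupdate.com"  # example; historically CDN-fronted

for entry in socket.getaddrinfo(hostname, 80, type=socket.SOCK_STREAM):
    family, socktype, proto, canonname, sockaddr = entry
    print(sockaddr[0])
```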
Of course, with independent CDNs like Akamai and Limelight Networks, the burden is placed largely on the Web service provider, which can disadvantage smaller content providers. Not every content provider can afford to employ CDNs. At the network level ISPs can and should also provide transparent caching of content.
Transparent caches are operated by an ISP and cache popular content (not just content that somebody has paid them to cache). For example, if there’s suddenly a massively popular YouTube video, the ISP can cache the video locally and serve it to all the users who request it, sparing the more congested international links. For the end user, nothing appears amiss (hence the “transparent” part), but in reality their request for the YouTube video is being intercepted and modified by the ISP to deliver the cached version rather than the version from Google’s servers.
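Conceptually, the core of a transparent cache is just an intercepting proxy with a content store bolted on. A toy sketch of that logic in Python (illustrative only – a production cache also honours Cache-Control headers, expiry and validation, and intercepts traffic at the network level rather than being called directly):

```python
# Toy transparent-cache logic: serve a local copy if we have one,
# otherwise fetch from the origin server and keep a copy for next time.
import urllib.request

cache = {}  # url -> response body

def fetch(url):
    if url in cache:
        return cache[url]  # cache hit: the origin (and the international link) is spared
    body = urllib.request.urlopen(url).read()  # cache miss: fetch from origin
    cache[url] = body
    return body
```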
As far as we know, Australian ISPs are presently a mixed bag when it comes to transparent caches. They come with a series of technical challenges and potential legal ones as well, since the model may result in an ISP inadvertently hosting illegal content in their caches. But if we’re going to get the best of our 100Mbps connections, such ISP-level caching could be key.
(As an aside, you can test whether your ISP uses a transparent cache using this tool. It’s not 100% reliable, but it’s an interesting way to test something that Australian ISPs don’t often reveal details about).
3. Multicasting
Thanks to multicasting, the viability of video and audio streaming services is assured on the NBN.
Multicasting allows a content stream (such as a video stream) to be sent across the network as a single instance, replicating and branching only where needed to reach subscribing end users. For example, in a normal unicast environment, if 100 people want a particular stream, the source server has to send out 100 copies of it – which can really tax that server’s processing power and broadband link. Multicasting allows it to send a single stream, and the network takes care of branching it to the specific users who want it.
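At the IP layer, subscribing to a stream means joining a multicast group, which is how the network works out where to branch the stream. A minimal receiver sketch in Python (the group address and port are arbitrary examples from the administratively scoped range, not anything NBN Co has published):

```python
# Minimal IP multicast receiver: join a group and read datagrams.
# Every host that joins the same group receives the same single stream;
# the group/port below are arbitrary examples.
import socket
import struct

GROUP, PORT = "239.1.2.3", 5004

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# Tell the kernel (and, via IGMP, the upstream routers) that we want
# to receive this group on the default interface.
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

while True:
    data, sender = sock.recvfrom(65535)
    print(f"{len(data)} bytes from {sender[0]}")
```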
The good news is that the NBN is multicast-capable, meaning that future internet TV and radio services will be more than viable.
The bad news is that it requires a special circuit and special contract with NBN Co, so independent and small-time broadcasters may be locked out unless they can form a cooperative. (There are more details on the NBN’s multicast services here for those that are interested. The current price for media casters is $5 per month for each end user they want to connect to the service, which will give them up to 20Mbps to that customer. That can include multiple SD or HD video and radio streams, with a minimum of 3Mbps per stream. For each additional 10Mbps, it will be an extra $5. Casters also need to purchase a multicast domain for each of the NBN’s 121 points of interconnect that they want to deliver services on.)
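Those published figures make a broadcaster’s monthly bill easy to estimate. A rough sketch (it assumes only the pricing quoted above; the per-POI multicast domain charge isn’t quoted here, so it’s left out):

```python
# Rough cost model from the quoted pricing: $5/month per end user buys
# the first 20Mbps; each additional 10Mbps block is another $5.
# The per-POI multicast domain charge isn't quoted above, so it's omitted.
import math

def monthly_cost_dollars(subscribers, mbps_per_subscriber):
    extra_blocks = max(0, math.ceil((mbps_per_subscriber - 20) / 10))
    return subscribers * (5 + 5 * extra_blocks)

print(monthly_cost_dollars(10_000, 20))  # 50000  -> $50,000/month
print(monthly_cost_dollars(10_000, 40))  # 150000 -> $150,000/month
```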
Now we don’t know yet what kind of services will be available through NBN multicasting, or how they will jibe with the increasing preference for video on demand, but at the very least the NBN is well positioned for future TV services, allowing them to be delivered in a way that won’t break the internet, so to speak.
4. Proactive provisioning and peering
Ultimately, nothing will matter so much to NBN performance as your ISP having sufficient backhaul, upstream and peering arrangements to carry all the extra traffic. It could be a tough time for ISPs – we don’t yet know what kind of usage patterns and volume demands we’ll see from NBN users. It may be that we see higher contention ratios than we’re used to during the early NBN period, and these will certainly be a major differentiator between ISPs when the NBN hits full steam.
Of particular concern in an Australian context is the capacity to carry traffic internationally. The good news is that between the Southern Cross Cable, Telstra’s Endeavour Cable, the Pipe Pacific Cable and the various connections to and through Asia, there is actually a good deal of spare design capacity – well in excess of six terabits per second is possible without laying any new cable. To give an example, the Southern Cross Cable, which links Australia and the US, claims that it currently carries around 295Gbps of data, but that it’s capable of at least four times that amount with existing technology. Telstra Endeavour has a design capacity of 1.28Tbps but currently uses only 80Gbps. That’s a lot of headroom to work with, and it should theoretically be enough to support NBN speeds for a few years.
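A quick sanity check on those figures (they’re approximate and will date, but the headroom is the point):

```python
# Spare capacity implied by the figures quoted above (approximate).
links = {
    "Southern Cross":    {"in_use_gbps": 295, "capacity_gbps": 295 * 4},  # "at least four times"
    "Telstra Endeavour": {"in_use_gbps": 80,  "capacity_gbps": 1280},
}
for name, link in links.items():
    spare = link["capacity_gbps"] - link["in_use_gbps"]
    used = link["in_use_gbps"] / link["capacity_gbps"]
    print(f"{name}: ~{spare}Gbps spare ({used:.0%} utilised)")
```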
Likewise, there is considerable bandwidth available in domestic backhaul for ISP connectivity, enough that network carrying capacity is not really a technical concern.
Of course, it all comes down to costs. And thanks to new international bandwidth coming online in the past few years, as well as increased competition in domestic backhaul, costs have come down considerably – which is a good part of the reason that monthly quotas have exploded. That certainly bodes well for the future of the NBN.
[UPDATE] The bandwidth delay product
Simon Hackett, Managing Director of Internode, emailed to let us know we missed a very important factor in NBN speeds: the growth in bandwidth is not being matched by a reduction in latency, which in the short and medium term can have an impact on real-world connection speeds. Hackett explains the Bandwidth Delay Product:
“If the TCP window sizes of your client and the remote server aren't tweaked to be very large, international data transfers exhaust the normal TCP window size and data stops streaming, and reverts to send/wait/send/wait.
The net effect is that an un-optimised transfer from the USA to Australia, even in the presence of infinite bandwidth, maxes out at a few megabits per second until you raise the window sizes properly at both ends.
CDNs work around this really well for popular content, but not for content that really does have to come across the world.
The key way to tell if this is a factor for you in a download? Try downloading multiple files at once from the same place.
If that speeds the total up (e.g. if you can only get 2 megabits per transfer but you can do ten and get a total of 20 megabits per second), then you've got a TCP window size issue, and your problem here can be solved with either accepting it, or fixing TCP window sizes, or more routinely using multiple concurrent transfers...
The point of a 100M link at home isn't going to be about one person at 100Mbps. It’s going to be about ten people at once, all running at 10 megabits per second, without impacting each other.”
Essentially, as Hackett points out, while bandwidth goes up considerably with the NBN, its impact on latency (that is, the delay between sending a given packet of data and receiving it) is less pronounced. It’s just physics – even fibre cannot transmit data faster than the speed of light, and you still have to perform signal processing and routing along the way.
TCP/IP uses acknowledgement (ACK) packets, sent from the receiver back to the sender, to confirm that data has arrived. When latency is high relative to bandwidth – as it is on international links – this induces stalls in the data stream, as the sender has to wait for acknowledgement of previous packets before sending more.
The upshot is that a given point-to-point connection can be limited not by raw bandwidth but by the latency of sending and receiving acknowledgement packets. The exact maximum speed depends on two things: the latency (which varies with distance and the intermediate routing and signalling equipment) and the TCP window size, which dictates how much data can be sent before an ACK is required.
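The arithmetic is simple: a single TCP stream can move at most one window of data per round trip, so maximum throughput is the window size divided by the round-trip time. A quick calculation (assuming a roughly 200ms Australia–US round trip and the historical 64KB default window; modern stacks scale the window automatically):

```python
# Max single-stream TCP throughput = window size / round-trip time.
# RTT and window figures below are illustrative assumptions.

def max_throughput_mbps(window_bytes, rtt_seconds):
    return window_bytes * 8 / rtt_seconds / 1_000_000

print(max_throughput_mbps(64 * 1024, 0.200))        # ~2.6Mbps: Hackett's "a few megabits"
print(max_throughput_mbps(64 * 1024, 0.020))        # ~26Mbps on a 20ms domestic path
print(max_throughput_mbps(2 * 1024 * 1024, 0.200))  # ~84Mbps with a scaled 2MB window
```

The 64KB-window, 200ms case lines up neatly with Hackett’s observation that un-optimised transfers from the US max out at a few megabits per second, no matter how fat the pipe.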
There are several solutions to this problem, including:
Better, properly configured routers and PCs with appropriate TCP window sizes (which is something that will certainly shake out over time).
Local CDNs to reduce latency.
Greater use of concurrent streams, UDP and swarming (like in BitTorrent) for media streaming.