
File downloads are truncated on Three Broadband

electricworry
New member

I'm looking for some verification of whether the issue I'm seeing is isolated to me or my area, or whether it's a general Three-wide problem, as I suspect it is.

I use Three 5G broadband and I'm about 50 metres from the gNodeB, so I've got an excellent, uninterrupted signal; it's not a Layer 1 problem I'm facing. The problem is that TCP connections are terminated prematurely (i.e. an RST packet is sent) before all the data is received. Here's a simple test to verify whether you have the problem.
The following command attempts to download an 8 MiB file (all NULs) from a website in AWS. It should work the same on Linux, macOS, and modern Windows machines. For me, it fails with the error "curl: (18) transfer closed with XXXXXX bytes remaining to read", which is the problem.

curl -H "Connection: close" https://electricworry.net/test-8 -o test-curl

If you're not comfortable connecting to my server, the following third-party download test should produce the same result (it does for me!):

curl -H "Connection: close" https://files.testfile.org/ZIPC/15MB-Corrupt-Testfile.Org.zip -o test-curl
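If you'd rather script the check, here's a rough stdlib-only Python equivalent of the curl test (my own sketch, not anything official): it counts the bytes that actually arrive against the advertised Content-Length, which is essentially what curl's error 18 reports. The hostname and path are just my test file from above; swap in any URL.

```python
import http.client

def classify(expected: int, received: int) -> str:
    # Mirror curl's error 18 message: report how many bytes went missing.
    if received < expected:
        return f"transfer closed with {expected - received} bytes remaining"
    return "complete"

def check_download(host: str, path: str) -> str:
    # GET with "Connection: close" and count the bytes that actually arrive.
    conn = http.client.HTTPSConnection(host, timeout=30)
    try:
        conn.request("GET", path, headers={"Connection": "close"})
        resp = conn.getresponse()
        expected = int(resp.getheader("Content-Length") or 0)
        received = 0
        try:
            while True:
                chunk = resp.read(65536)
                if not chunk:
                    break
                received += len(chunk)
        except (http.client.IncompleteRead, ConnectionResetError):
            pass  # a mid-stream RST surfaces here as one of these exceptions
    finally:
        conn.close()
    return classify(expected, received)

# e.g. print(check_download("electricworry.net", "/test-8"))
```

On an affected line I'd expect this to report missing bytes, and "complete" everywhere else.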

When I tested, I collected a packet capture at both ends, and I can see that my server sends the whole 8 MiB file in the TLS session and then terminates the connection with an RST packet at the end (which it does because we sent a "Connection: close" header). However, on my client side, only half of the file comes through before the session is impolitely terminated.

Would people on Three 5G broadband mind testing please to help confirm/deny whether this is a general problem or an individual one?

I've done a lot of testing over the past month and I've got a hypothesis.

  • Comparing the server and client packet captures, the packets do not match up; the sequence and ack numbers - though they start the same - end up being very different. It appears that something in the middle is buffering the stream and ACKing the packets on my behalf.
  • The problem only happens when I'm on my Three 5G Broadband service. If I take my laptop into work, the problem is gone. The problem doesn't occur when I use my Giffgaff mobile as a hotspot either.
  • The problem exists on all websites (I suffer *a lot* from Ubuntu APT packages being half-downloaded and rejected on my workstation).
  • Since the clocks on my server and client are synchronised as well as possible with NTP, I can compare the progress of the stream at both ends. When my server has finished transmitting (and received the final ACK), it correctly sends an RST packet. However, at that same moment on the client, only about half of the stream has been received, and I certainly haven't ACKed the rest of it. Then an RST comes in, tearing down the session before it's finished and truncating the download.
  • The problem only happens if "Connection: close" header is used. If "Connection: keep-alive" is used, then it's the responsibility of the client to terminate the connection once it's done. In this case, no problem! However, a lot of things don't use that. A web browser generally uses keep-alive for efficiency - hence 99% of users won't encounter or know about the problem - but a lot of systems (e.g. APT, Ansible) will use "close", which is why it's such a problem for me in my work.
  • Changing APN and PDP type in the router has zero impact; it doesn't matter whether I'm using IPv4, IPv6, IPv4v6, APN "3internet", "3secure", or "three.co.uk". The problem for me is general.

Ultimately, my hypothesis is that Three have some sort of connection buffering to optimise the user experience, or perhaps to prevent wasted retransmissions, but there's a glaring bug in it: it resets the connection and discards the buffer it holds for the session once the server has closed the connection. This would make sense for an ISP whose access is entirely a radio access network: if clients sit in grey spots where the connection can drop momentarily, it's helpful to buffer the lost packets for them rather than have the server spamming their link with retries of the unACKed packets (and further polluting the radio waves). So I think Three ACKing the packets on my behalf is by design; only the implementation is bad, and it mistakenly assumes it can throw away the buffer when the server terminates the connection.
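To make the hypothesis concrete, here's a toy model (entirely my own sketch, not Three's actual implementation) of a middlebox that ACKs segments on the client's behalf, drains them to the radio link at its own pace, and, per the suspected bug, discards its buffer the moment the server's RST arrives:

```python
class BufferingMiddlebox:
    """Toy model of the hypothesised middlebox: it ACKs server segments
    immediately, queues them, and forwards them over the radio link at
    its own pace. The suspected bug: on seeing the server's RST it drops
    the queue and forwards the RST ahead of the undelivered data."""

    def __init__(self):
        self.buffer = []     # segments ACKed upstream but not yet delivered
        self.delivered = []  # what the client actually receives, in order

    def from_server(self, segment):
        if segment == "RST":
            self.buffer.clear()           # suspected bug: pending data discarded
            self.delivered.append("RST")  # RST overtakes the buffered stream
        else:
            self.buffer.append(segment)   # ACKed on the client's behalf

    def drain_one(self):
        # The radio link delivers one buffered segment per tick.
        if self.buffer:
            self.delivered.append(self.buffer.pop(0))

# The server blasts the whole file, then RSTs (the "Connection: close" case);
# the radio side has only drained half of it by then.
mb = BufferingMiddlebox()
for seg in ["seg1", "seg2", "seg3", "seg4"]:
    mb.from_server(seg)
mb.drain_one()
mb.drain_one()
mb.from_server("RST")
print(mb.delivered)  # ['seg1', 'seg2', 'RST'] -- seg3 and seg4 are lost
```

The client sees exactly the symptom above: a valid first half of the stream, then an RST, with the tail of the file gone.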

Any help/testing/solidarity would be much appreciated, because Three technical support have been no help since I raised this with them over a month ago. I sent over detailed evidence, but all they can muster is an occasional call to incorrectly restate the problem and ask if I'm still having it. A really awful experience; I've never seen a team so completely unable to escalate to the responsible people who might actually be able to help.
