File downloads are truncated on Three Broadband
on 05-03-2025 05:37 PM
I'm looking for some verification of whether the issue I have is isolated to me or my area, or whether it's a general Three-wide problem, as I suspect it is.
I use Three 5G broadband and I'm about 50 metres from the gNodeB, so I've got an excellent, uninterrupted signal; it's not a Layer 1 problem I'm facing. The problem I have is that TCP connections are terminated prematurely (i.e. a RST packet is sent) before all data is received. Here's a simple test to verify whether you have the problem or not.
The following command will attempt to download an 8MiB file (all NULs) from a website in AWS. It should work the same on Linux, macOS, and modern Windows. For me, I get the error "curl: (18) transfer closed with XXXXXX bytes remaining to read", which is the problem.
curl -H "Connection: close" https://electricworry.net/test-8 -o test-curl
If you're not comfortable connecting to my server, the following third-party download test should produce the same result (it does for me!):
curl -H "Connection: close" https://files.testfile.org/ZIPC/15MB-Corrupt-Testfile.Org.zip -o test-curl
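If you want to quantify the failure rate rather than eyeball it, here's a minimal sketch of a test loop (assuming bash and curl; it reuses my test URL above) that counts truncated transfers by checking for curl's exit code 18, which corresponds to the "transfer closed" error:
fails=0
for i in $(seq 1 20); do
    # -s silences the progress meter; only the exit code matters here
    curl -s -H "Connection: close" https://electricworry.net/test-8 -o /dev/null
    [ $? -eq 18 ] && fails=$((fails + 1))
done
echo "$fails of 20 downloads were truncated"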
When I tested, I collected a packet capture at both ends, and I can see that my server sends the whole 8MiB file in the TLS session and then terminates the connection with a RST packet at the end (which it does because we sent a "Connection: close" header). However, on my client side, only about half of the file comes through before the session is impolitely terminated.
Would people on Three 5G broadband mind testing please to help confirm/deny whether this is a general problem or an individual one?
I've done a lot of testing over the past month and I've got a hypothesis.
- Comparing the server and client packet captures, the packets do not match up; the sequence and ack numbers - though they start the same - end up being very different. It appears that something in the middle is buffering the stream and ACKing the packets on my behalf. (See the capture sketch after this list.)
- The problem only happens when I'm on my Three 5G Broadband service. If I take my laptop into work, the problem is gone. The problem doesn't occur when I use my Giffgaff mobile as a hotspot either.
- The problem exists on all websites (I suffer *a lot* from Ubuntu APT packages being half-downloaded and rejected on my workstation).
- Since the clocks on my server and client are synchronised as well as possible with NTP, I can compare the progress of the stream at both ends. When my server has finished transmitting (and received the final ACK) it correctly sends a RST packet according to the standard. However, at that same moment on the client, not all of the stream has been received (we're about half-way) and I certainly haven't ACKed all of it. Then a RST comes in, tearing down the session before it's finished and truncating the download.
- The problem only happens if the "Connection: close" header is used. If "Connection: keep-alive" is used, then it's the client's responsibility to terminate the connection once it's done, and in that case there's no problem! (See the two-command contrast below.) However, a lot of things don't use keep-alive. A web browser generally does, for efficiency - hence 99% of users won't encounter or know about the problem - but a lot of systems (e.g. APT, Ansible) use "close", which is why it's such a problem for me in my work.
- Changing the APN and PDP type in the router has zero impact; it doesn't matter whether I'm using IPv4, IPv6, or IPv4v6, or the APN "3internet", "3secure", or "three.co.uk". The problem for me is general.
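For anyone wanting to reproduce the capture comparison from the first bullet, here's a rough sketch (the interface names are placeholders for your own, and the server-side capture assumes you have shell access to the server):
# On the client (wwan0 is a placeholder for your broadband interface):
sudo tcpdump -i wwan0 -w client.pcap host electricworry.net
# On the server, in parallel:
sudo tcpdump -i eth0 -w server.pcap port 443
# In another terminal, run the curl test, stop both captures, then
# look at where the RST appears in each:
tshark -r client.pcap -Y "tcp.flags.reset == 1"
tshark -r server.pcap -Y "tcp.flags.reset == 1"
In my captures the sequence/ack numbers start out the same and then diverge, which is what points to a middlebox ACKing on my behalf.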
Ultimately, my hypothesis is that Three have some sort of connection buffering to optimise the user experience, or perhaps to prevent wasted retransmissions, but there's a glaring bug in it: it resets the connection and discards the buffer it holds for the session once the server has closed its side. This would make sense for an ISP built solely on a Radio Access Network: if clients sit in grey spots where the connection can momentarily drop, it's helpful to buffer the lost packets for them rather than have the server spamming their link with retries of the unACKed packets (and further polluting the radio waves). So I think Three ACKing the packets on my behalf is by design; only the implementation is bad, and it mistakenly assumes it can throw away the buffer when the server terminates the connection.
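To see the keep-alive contrast from the bullets for yourself, request the same download both ways; only the header changes:
# Truncates for me around 50% with curl error 18:
curl -H "Connection: close" https://electricworry.net/test-8 -o test-close
# Completes, because the client closes the connection itself once done:
curl -H "Connection: keep-alive" https://electricworry.net/test-8 -o test-keepalive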
Any help/testing/solidarity would be much appreciated, because Three technical support have been zero help since I raised this with them over a month ago. I sent over detailed evidence, but all they can muster is an occasional call to incorrectly restate the problem and ask if I'm still having it. A really awful experience; I've never seen a team so completely unable to escalate to the responsible people who might actually be able to help.
Labels: 5G, Home Broadband
on 14-03-2025 04:52 PM
Thanks Pete. That's the most promising news yet. It's not a showstopper, so this is the sort of response I was hoping for. Presumably you'll get updates? Please do keep me informed as you hear more.
3 weeks ago
Any news on the issue? I had a call from complaints and I was told that it was being worked on (which was good as it was the first time I had heard that over the phone) but that there was no ETA for a fix (fair enough).
@PeteG, is that your understanding, that it's being worked on?
2 weeks ago
Sorry for the delay in getting back to you.
Information on the issue is still limited. I'm unsure whether the 1st line team are referencing the same thing or not, but hopefully they are.
I was wondering if you could do some additional testing for me, please. Could you run the same kind of tests again, but with two or more concurrent downloads running at the same time? Have the downloads overlap so that one finishes while the other is still running, and let me know the results, please.
Pete.
2 weeks ago
Sure! So I'm using my 8MiB test for all of these, running "./test.sh", which does this:
curl -H "Connection: close" https://electricworry.net/test-8 -o test-curl
When I run multiple tests in series (so only one attempt/failure at a time), it generally fails at around 50%, occasionally gets a bit further, and rarely completes:
electricworry@BOB1:~/projects/download-test$ ./test.sh
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
50 8192k 50 4132k 0 0 4990k 0 0:00:01 --:--:-- 0:00:01 4985k
curl: (18) transfer closed with 4156646 bytes remaining to read
electricworry@BOB1:~/projects/download-test$ ./test.sh
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
54 8192k 54 4460k 0 0 5798k 0 0:00:01 --:--:-- 0:00:01 5793k
curl: (18) transfer closed with 3820646 bytes remaining to read
electricworry@BOB1:~/projects/download-test$ ./test.sh
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
50 8192k 50 4148k 0 0 5693k 0 0:00:01 --:--:-- 0:00:01 5690k
curl: (18) transfer closed with 4140646 bytes remaining to read
electricworry@BOB1:~/projects/download-test$ ./test.sh
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 8192k 100 8192k 0 0 3726k 0 0:00:02 0:00:02 --:--:-- 3727k
electricworry@BOB1:~/projects/download-test$ ./test.sh
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
50 8192k 50 4132k 0 0 5121k 0 0:00:01 --:--:-- 0:00:01 5127k
curl: (18) transfer closed with 4156646 bytes remaining to read
electricworry@BOB1:~/projects/download-test$ ./test.sh
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
51 8192k 51 4195k 0 0 2634k 0 0:00:03 0:00:01 0:00:02 2633k
curl: (18) transfer closed with 4092646 bytes remaining to read
electricworry@BOB1:~/projects/download-test$ ./test.sh
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
64 8192k 64 5304k 0 0 6265k 0 0:00:01 --:--:-- 0:00:01 6262k
curl: (18) transfer closed with 2956646 bytes remaining to read
electricworry@BOB1:~/projects/download-test$ ./test.sh
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
68 8192k 68 5585k 0 0 3285k 0 0:00:02 0:00:01 0:00:01 3285k
curl: (18) transfer closed with 2668646 bytes remaining to read
electricworry@BOB1:~/projects/download-test$ ./test.sh
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
53 8192k 53 4343k 0 0 5844k 0 0:00:01 --:--:-- 0:00:01 5838k
curl: (18) transfer closed with 3940646 bytes remaining to read
electricworry@BOB1:~/projects/download-test$ ./test.sh
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
54 8192k 54 4484k 0 0 6000k 0 0:00:01 --:--:-- 0:00:01 5995k
curl: (18) transfer closed with 3796646 bytes remaining to read
If I do the same while also running a tight loop of downloads in parallel ("while true; do ./test.sh; done"), here's what I get:
electricworry@BOB1:~/projects/download-test$ ./test.sh
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
50 8192k 50 4156k 0 0 1554k 0 0:00:05 0:00:02 0:00:03 1554k
curl: (18) transfer closed with 4132646 bytes remaining to read
electricworry@BOB1:~/projects/download-test$ ./test.sh
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
50 8192k 50 4171k 0 0 3124k 0 0:00:02 0:00:01 0:00:01 3122k
curl: (18) transfer closed with 4116646 bytes remaining to read
electricworry@BOB1:~/projects/download-test$ ./test.sh
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
51 8192k 51 4203k 0 0 2450k 0 0:00:03 0:00:01 0:00:02 2450k
curl: (18) transfer closed with 4084646 bytes remaining to read
electricworry@BOB1:~/projects/download-test$ ./test.sh
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
65 8192k 65 5382k 0 0 2463k 0 0:00:03 0:00:02 0:00:01 2463k
curl: (18) transfer closed with 2876646 bytes remaining to read
electricworry@BOB1:~/projects/download-test$ ./test.sh
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
50 8192k 50 4140k 0 0 2198k 0 0:00:03 0:00:01 0:00:02 2197k
curl: (18) transfer closed with 4148646 bytes remaining to read
electricworry@BOB1:~/projects/download-test$ ./test.sh
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
51 8192k 51 4210k 0 0 1613k 0 0:00:05 0:00:02 0:00:03 1613k
curl: (18) transfer closed with 4076646 bytes remaining to read
electricworry@BOB1:~/projects/download-test$ ./test.sh
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
50 8192k 50 4140k 0 0 1453k 0 0:00:05 0:00:02 0:00:03 1453k
curl: (18) transfer closed with 4148646 bytes remaining to read
electricworry@BOB1:~/projects/download-test$ ./test.sh
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
50 8192k 50 4132k 0 0 5411k 0 0:00:01 --:--:-- 0:00:01 5409k
curl: (18) transfer closed with 4156646 bytes remaining to read
electricworry@BOB1:~/projects/download-test$ ./test.sh
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
74 8192k 74 6124k 0 0 4313k 0 0:00:01 0:00:01 --:--:-- 4313k
curl: (18) transfer closed with 2116646 bytes remaining to read
electricworry@BOB1:~/projects/download-test$ ./test.sh
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
69 8192k 69 5703k 0 0 2627k 0 0:00:03 0:00:02 0:00:01 2628k
curl: (18) transfer closed with 2548646 bytes remaining to read
electricworry@BOB1:~/projects/download-test$ ./test.sh
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
50 8192k 50 4164k 0 0 5486k 0 0:00:01 --:--:-- 0:00:01 5486k
curl: (18) transfer closed with 4124646 bytes remaining to read
So, no significant difference.
2 weeks ago
Hello again.
Would you be able to check the rate of packet loss during some downloads? If you can do this with your own download tests, great, but could you also test with a dedicated packet-loss measurement tool and confirm the results?
Thanks.
a week ago
Hi @PeteG,
I don't really understand the logic of such a test. All of these tests use TCP, which by design has either side retransmit any packet for which an ACK hasn't arrived from the peer in a timely fashion. Nothing would cause either side to abandon and reset a connection unless ACKs had ceased for a significant amount of time (e.g. several seconds). We know from my packet captures that, from the perspective of the server (which is sending the data), it *has* sent all of the data and received ACKs for everything before it decides to close the connection with a RST. Meanwhile, my client (which is having the connection truncated before all the data is received, thanks to whatever is doing MITM on the connection) has ACKed everything that was delivered (about 50%) before the "remote peer" (the MITM device) unceremoniously RSTs the connection early.
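For what it's worth, the mismatch is also easy to see by comparing the per-direction byte counts in the two captures, e.g. with tshark's conversation statistics (the pcap file names are placeholders for captures like the ones I described earlier):
tshark -r server.pcap -q -z conv,tcp   # the server's view: all ~8 MiB sent and ACKed
tshark -r client.pcap -q -z conv,tcp   # the client's view: roughly half received before the RST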
Nevertheless, here's some info in that regard. Using the website https://packetlosstest.com/, I ran an intensive test. (Note that the site uses DTLS over UDP, so it is not a representative test!)
- Packet sizes: 212 and 228 bytes (i.e. high header overhead)
- Frequency: 300 pings/second
- Duration: 10 seconds
- Acceptable delay: 200 ms
Result: upload packet loss 0% (0/2999); download packet loss 0.5% (15/2999); late packets 6% (180/2999). (A late packet here being one that arrived but outside the 200 ms window; a lost one never arrived at all.)
If I run the same test with the acceptable delay increased to 1000 ms (completely acceptable within TCP), I get slightly different results:
Result: upload packet loss 0% (0/2999); download packet loss 0.4% (12/2999); late packets 0% (0/2999). Which all seems pretty normal to me.
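For a command-line cross-check of raw packet loss (a sketch; the target host is arbitrary):
ping -c 100 -i 0.2 8.8.8.8 | tail -2        # the summary line reports % packet loss
mtr --report --report-cycles 100 8.8.8.8    # per-hop loss along the path
Neither of these exercises TCP, of course, but they're a quick sanity check that the radio link itself isn't losing enough packets to matter.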
Friday
Thanks again for the information.
Pete.
2 weeks ago
Thanks for running those tests and being so detailed, Electricworry.
I'll forward the information tonight.
Pete.
12-03-2025 06:16 AM - edited 12-03-2025 06:38 AM
I can replicate the issue with a NR5103e router running dual-stack IPv4/IPv6 by executing the same curl commands (curl version 8.7.1 on macOS Monterey 12.7.6), so it is likely not a router issue. However, if I use the same curl commands to download files hosted on an Apache server (I used the Apache Server Project as an example), then those download correctly, i.e. no truncation. I am wondering if the problem might lie at the web-server end, perhaps with files hosted on nginx and the way buffering is set up?
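If it helps anyone compare hosts systematically, a quick loop like this (a sketch reusing the two URLs from earlier in the thread; substitute any large files on the servers you want to test) prints curl's exit code per URL, where 18 indicates truncation:
for url in \
    https://electricworry.net/test-8 \
    https://files.testfile.org/ZIPC/15MB-Corrupt-Testfile.Org.zip; do
  curl -s -H "Connection: close" "$url" -o /dev/null
  echo "$url -> curl exit $?"
done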
on 12-03-2025 01:44 PM
Good point, and that made me have a peek at nginx's latest release notes.
Which URL did you use to test?
But I guess electricworry has made up his mind.