"Given carrier reputation and our own iPhone call drops, we were pretty surprised to discover, through careful testing in 12 markets, that AT&T's has pretty consistently the fastest 3G network nationwide, followed closely—in downloads at least—by Verizon Wireless. Let's get this straight right away: We didn't test dropped voice calls, we didn't test customer service, and we didn't test map coverage by wandering around in the boonies. We tested the ability of the networks to deliver 3G data in and around cities, including both concrete canyons and picket-fenced 'burbs. And while every 3G network gave us troubles on occasion, AT&T's wasn't measurably more or less reliable than Verizon's. It was measurably faster, however, download-wise, in 6 of the 12 markets where we tested, and held a significantly higher national average than the other carriers. Only Verizon came close, winning 4 of the 12 markets. For downloads, AT&T and Verizon came in first or second in nine markets, and in whatever location we tested, both AT&T and Verizon 3G were consistently present. If you're wondering about upload speeds, AT&T swept the contest, winning 12 for 12." More: I love this picture from the IPhone blog.
Not surprised, as HSPA 7.2 blows EV-DO Rev. A out of the water. It will be interesting to see how LTE speeds benefit AT&T, T-Mobile and Verizon in the future, and also how Sprint will fare with WiMAX.
AT&T is getting so much bad publicity lately, but it's interesting how test after test keeps consistently showing that they have the fastest 3G service.
It's nice to see some hard facts instead of the hot air the media is always blowing. I'm not really surprised by the results, mainly because UMTS/HSPA is a newer and more advanced technology than CDMA/EVDO. AT&T should perform better based on that alone. The only thing is Gizmodo really should have made the test more professional. Using a server-side measurement for a speed test is a big no-no. Install a port monitor and measure the throughput on the laptop. Also, refreshing an 8MB file is not a real test of speed. Normally a constantly repeating FTP download of a 50MB or so file is best. ...anyway, professionalism aside, I think it's not all that bad of a test. In a related piece of news, a professional drive-test company recently released the results of their drive tests in Germany/Austria/Switzerland. You can read it here (German only) or see the interesting part in the pic below (data download speed average circled in red).
RR, while I agree that server-side measurement is not a good test (probably as bad as a client-side JavaScript one), why do you say these are server-side? Speedtest.net uses a Flash application to measure speed, and that runs in your browser, so it should be client-side. What am I missing? Yes, an FTP transfer would definitely be a better way to measure speed, or if they insist on HTTP, then a command-line tool like wget.
Oh, I actually didn't know that Speedtest.net uses a client-side browser app... but I still wouldn't trust anything in the browser. If you catch the traffic directly on the NIC, there's less chance that something inside the browser is skewing the speed. It's also bad practice to use a public webserver that you don't know where it is or have any control over. It's best to use your own FTP server, either inside the network or very close to it, again to reduce the possibility that something else is skewing the speed. If I were Gizmodo (or anyone wanting to do a speed test without proper equipment like TEMS Investigation), I'd set up an FTP server that could serve over 10Mbps, then download a 50MB file via FTP a few times and watch the speed with something like NetPerSec.
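For anyone who wants to try the repeated-download approach without dedicated tools, here's a rough sketch (the function name and the stand-in stream are mine, not anything Gizmodo used) that times how fast data comes off any file-like stream and reports the average throughput:

```python
import time


def measure_throughput(stream, chunk_size=64 * 1024):
    """Read a stream to exhaustion; return (total_bytes, seconds, Mbps)."""
    total = 0
    start = time.perf_counter()
    while True:
        chunk = stream.read(chunk_size)
        if not chunk:
            break
        total += len(chunk)
    elapsed = time.perf_counter() - start
    mbps = (total * 8) / (elapsed * 1_000_000) if elapsed > 0 else 0.0
    return total, elapsed, mbps
```

In practice you'd hand it the response object from urllib.request.urlopen() pointed at your own ~50MB test file, run it a few times, and average the results. It's still measuring above the NIC (a sniffer or NetPerSec sits lower in the stack), but at least nothing browser-side is in the way.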
The problem with "inside the network" when you're using a mobile data card is that you don't necessarily know the topology of the network you're connected to. The online speed-test sites that try to find the nearest server based on the IP address of your connection routinely picked something in Texas when I was testing AT&T's 3G speed. DSLReports says that Flash-based testing is accurate for DSL & cable, while Java can test even fiber (provided the server has sufficient bandwidth, of course). Of course you're not going to get as accurate a result as a hardware device plugged straight into the Ethernet will give you, but it'll be accurate enough for all practical purposes. Their test will be close to what an actual user will experience in everyday use of the card, while the wire speed you get from direct testing is probably something a user will never be able to relate to.
Why in the heck did T-Mob finish so poorly in the upload tests when they run the same kind of system that Big Blue does and in theory should be dealing with less network congestion?
...but it's still something outside of your control, and you don't know where it is, how it works, what speeds it can really serve, or how many other people are using it at the same time... It's best to reduce the variables and use your own server that only you are using and that you know everything about... Generally speaking, you're right. For a simple, quick-and-dirty test, what Gizmodo did is fine, and I'm probably just being too picky. It's just that I've done drive tests on and off for 10 years for dozens of different operators, and they've always been made with a server that we owned, measuring the throughput as soon as it enters the laptop (i.e., on the NIC) and never in the browser (unless a specific application is being tested). The hardware is always an off-the-shelf phone and laptop, but the measurement software (TEMS Investigation) is expensive/proprietary...
Yea, I'm kinda surprised as well. My first guess would be that maybe T-Mob doesn't have enough backhaul (T1s) to their sites? ...or maybe they do have congestion and need to roll out a 2nd and 3rd UMTS carrier (AT&T has 2 or 3 carriers in most markets). ...or does T-Mob not have 3.6Mbps HSPA enabled, and only uses 1.8Mbps?
But then eliminating too many variables wouldn't really represent real-world data transfer circumstances, would it? In order to test what the end user should expect under normal use, I think you ought to use real-world sources, which are susceptible to all kinds of disruptions. That's why those tests are run several times and then averaged out. I've used Speedtest.net many times, and if they can consistently clock my FiOS connection at 15Mbps up and down, then I'm sure they have the capacity to test my iPhone 3G connection without any significant skew. In a perfect world where you eliminate all disruptions and control your own server and connections, you are really testing the throughput of the immediate network as purely as possible. You will likely get a higher number, but you aren't really testing what a normal end user will get.
Exactly. It's the Layer 3 radio link that is being measured and compared (not the application layer, which is higher up in the stack). It shows what the network is capable of up until you exit the network; after that, everything is out of the network operator's control. What the normal end user will get will always vary because of all the other variables out there.
There are several different testing approaches, and you need to pick the right one for the goal. You're absolutely right about the value of the tests you performed, but they're of more interest to the service provider. The hardware tests I was referring to before are invaluable if you're building a piece of hardware to go on the network (for example, a hardware network-encryption device that you want to work at practically line speed). However, in both of these cases you have control over more or less the entire data path from server to client. Gizmodo just doesn't have these resources: if they set up their own servers, these will fall outside the providers' networks, and there's still some unknown data path they have no control over. Gizmodo cannot ask AT&T, VZW & T-Mobile to set up test servers just for this purpose. In other words, Gizmodo cannot set up a proper sandbox for Layer 3 and below testing. So since users are likely to be interested in real-world Layer 7 speeds, they picked the right approach for the job. Layer 3 and below have their uses as well, but people reading the Gizmodo article will have no way to relate to that.
You don't really need inside access to the network to make a proper measurement. The article I posted about Switzerland/Austria was made with no assistance from any of the operators. But the company performing it looks like they did it the right way, i.e., with TEMS and their own server. Of course Gizmodo probably can't afford TEMS, but measuring an FTP download from a nearby server on the NIC would be close enough for a speed test. I still don't like a speed-test measurement made inside a browser. SpeedTest.net is probably the best, but I've seen some really inaccurate results; most recently Charlyee logged 22+ Mbps on AT&T's network with SpeedTest.net. If Gizmodo was using Charlyee's setup, then no wonder AT&T won: http://forums.wirelessadvisor.com/w...dy-shows-ts-3g-network-boasts.html#post545651 ...I've never seen results like that with a program measuring the speed on the NIC. ...actually, I just ran SpeedTest.net on my home PC and it showed 14.16Mbps, while NetPerSec measuring the NIC showed 16.2Mbps, and my modem shows 16.5Mbps...
lol, yes, I sold them my setup for a fortune - NOT. Seriously, I believe that you cannot accurately speed-test a BB without tethering. Did you use SpeedTest on your BB? What kind of results did you get? Here are a couple from today: On WiFi / On 3G
The key words are "nearby server". When you're using a wireless data card, you just don't know what lies between your wireless card and your server. You can find out using something like traceroute, and you may be surprised by the results. For example, when I VPN into work, care to guess how many hops it takes to come back to my own gateway (something that is one hop away when I'm just on the home network)? My office is only 7 miles from my home.
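For the curious, counting those hops is easy to script; here's a small sketch (the hostnames and addresses below are made-up examples, not my real path) that tallies the numbered hop lines in traceroute's text output:

```python
import re


def count_hops(traceroute_output):
    """Count the numbered hop lines in traceroute's text output."""
    return sum(
        1 for line in traceroute_output.splitlines()
        if re.match(r"\s*\d+\s", line)  # hop lines start with the hop number
    )


sample = """traceroute to work.example.com (203.0.113.9), 30 hops max
 1  192.168.1.1    1.2 ms
 2  10.10.0.1      8.5 ms
 3  203.0.113.9   22.3 ms
"""
print(count_hops(sample))  # 3 hops on this made-up path
```

Run it against the real output of `traceroute` to your VPN gateway versus your home gateway and the difference jumps right out.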
I like the results that you are showing. You can see the data stream shrinking as each layer's encapsulation is stripped off. COtech
Well, the most important thing for a speed test is having a server that can consistently deliver the speeds you request. SpeedTest.net seems to do that very well, but it's still a public third-party server that's an unknown variable... you don't know if its speeds are affected when a lot of other people are accessing the same server at the same time. Yea, traceroute is fun. I put it on my RadioRaiders System Check with a location lookup, so you can see the location of all the routers between my server in Dallas and your PC. Yea, that's the difference between speed testing and application testing. The modem shows the "raw speed," including overhead and everything. The application speed doesn't show retransmissions, overhead, etc., only the data requested. And that can vary by application, protocol, etc. True, it shows more of the "end user experience," but really, who's to say what application and protocol the end user chooses to use? For example: my ISP promises me 16Mbps, and that's exactly what is delivered to my modem. SpeedTest.net run in my browser shows 14Mbps, but that's because the other 2Mbps are lost to application-layer overhead, which my ISP isn't responsible for.
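As a rough sanity check on that 16Mbps vs. 14Mbps gap, you can estimate how much of the raw line rate header overhead alone eats. This is my own back-of-the-envelope, ignoring link-layer framing, ACK traffic, and retransmissions:

```python
def goodput_mbps(line_rate_mbps, mtu=1500, header_bytes=40):
    """Estimate application goodput from the raw line rate.

    Assumes full-size packets: each `mtu`-byte IP packet carries
    `mtu - header_bytes` bytes of payload (40 = 20 IP + 20 TCP).
    """
    return line_rate_mbps * (mtu - header_bytes) / mtu


print(round(goodput_mbps(16.0), 2))  # ~15.57 of the 16Mbps survives as payload
```

So IP/TCP headers alone only explain about 0.4Mbps of the 2Mbps difference; the rest comes from retransmissions, ACK traffic, TCP slow-start ramping, and the test tool itself.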
You're never going to get the full wire speed unless you manage to measure Layer 1, as each additional layer adds overhead. That, or you get a cable service, which seems to care less about giving you 30Mbps on a 16Mbps account. Very cool tool, thanks. I remember back in the day there was a whole desktop application that did just that. Look at how far you can get now with just a browser and some Google Maps.