Google’s Congestion Algorithm Is Not Fair and Needs Improvement, According to Research

Google has always been very open about the tests and updates it launches for its products. Recently, Google designed a Congestion Control Algorithm (CCA) with the aim of improving network traffic between servers and helping resolve congestion on the internet. Even though the idea is well intentioned, researchers at Carnegie Mellon University argue that its design is not fair by any measure.

Google’s algorithm is called BBR, short for Bottleneck Bandwidth and RTT (Round-Trip Time). Google is not the first company to design and implement a CCA; however, other companies have not been as open about their designs as Google has been.

CCAs are generally meant to treat all traffic equally; however, the results of a new study presented at the Internet Measurement Conference in Amsterdam suggest that BBR is not as fair as we might think. Instead of easing congestion, BBR connections themselves take up 40% of the bandwidth, leaving only 60% for everyone else on the link.
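To see why a fixed share is unfair, consider what it implies for the competing connections. The sketch below is illustrative only (it is not the study's code, and the flat 40% figure is taken from the article's summary of the finding): if one BBR flow holds 40% of a bottleneck link, the remaining 60% is divided among all the other flows, however many there are.

```python
# Illustrative sketch: a single BBR flow holding a fixed ~40% share of a
# bottleneck link, with the remaining 60% split among competing flows.
# The 40% figure is the share reported in the article, not measured here.

BBR_SHARE = 0.40  # assumed fixed share for one BBR flow


def shares(num_competing_flows: int) -> tuple[float, float]:
    """Return (BBR flow's share, per-competing-flow share) of the link."""
    remaining = 1.0 - BBR_SHARE
    return BBR_SHARE, remaining / num_competing_flows


for n in (1, 4, 16):
    bbr, each = shares(n)
    print(f"{n:>2} competing flows: BBR gets {bbr:.0%}, each other flow gets {each:.1%}")
```

The more flows that share the link, the smaller each non-BBR flow's slice becomes, while the BBR flow's share stays constant; with 16 competing flows, each gets under 4% of the link.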

BBR is already being used across Google and Alphabet services, and the researchers have reported that it can cause trouble during periods of heavy congestion.

The researchers have highlighted that the flaws in Google’s algorithm are not intentional. Google has already started working on a second version of the algorithm and will run tests to see whether the problem has been fixed.


Google has made sure the algorithm is transparent enough for researchers to check it for problems, and it is expected to keep working on the algorithm to improve network performance.


Photo: AP

Read next: Google clears out the Confusion on Chrome’s Encrypted DNS-Push
