HTTP/2 changes the way web developers optimize their websites. In HTTP/1.1, it’s become common practice to eke out an extra 5% of page load speed by hacking away at your TCP connections and HTTP requests with techniques like spriting, inlining, domain sharding, and concatenation.
Life’s a little bit easier in HTTP/2. It gives the typical website a 30% performance gain without a complicated build and deploy process. In this article, we’ll discuss the new best practices for website optimization in HTTP/2.
Most of the website optimization techniques in HTTP/1.1 revolved around minimizing the number of HTTP requests to an origin server. A browser can open only a limited number of simultaneous TCP connections to an origin, and downloading assets over each of those connections is a serial process: the response for one asset has to be returned before the request for the next can be sent. This is called head-of-line blocking.
As a result, web developers began squeezing as many assets as they could into a single connection and finding other ways to trick browsers into avoiding head-of-line blocking. In HTTP/2, some of these practices can actually hurt page load times.
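To make the contrast concrete, here is a minimal sketch of serving unbundled assets over HTTP/2 in Go (one reasonable stack; the article doesn't prescribe one). The ./static directory, index.html, and the cert.pem/key.pem file names are assumptions for illustration only; Go's net/http negotiates HTTP/2 automatically when TLS is enabled.

```go
// Minimal sketch: serve many small, individual assets over HTTP/2.
// With TLS configured, Go's net/http negotiates HTTP/2 via ALPN, so the
// files under ./static (hypothetical path) share one multiplexed
// connection instead of needing concatenation or domain sharding.
package main

import (
	"log"
	"net/http"
)

func main() {
	// Serve each CSS/JS/image file individually; HTTP/2 multiplexing means
	// these requests no longer queue behind one another on the connection.
	http.Handle("/static/",
		http.StripPrefix("/static/", http.FileServer(http.Dir("./static"))))

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		// r.Proto reports "HTTP/2.0" when the browser negotiated HTTP/2.
		log.Printf("served %s over %s", r.URL.Path, r.Proto)
		http.ServeFile(w, r, "index.html") // placeholder page name
	})

	// Browsers only speak HTTP/2 over TLS, hence ListenAndServeTLS;
	// cert.pem and key.pem are placeholder certificate file names.
	log.Fatal(http.ListenAndServeTLS(":443", "cert.pem", "key.pem", nil))
}
```

The point is simply that each small file travels as its own stream on a single multiplexed connection, so bundling or sharding those assets buys nothing in HTTP/2 and can hurt caching.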
Nuno left an interesting comment on my Stretched Firewalls across L3 DCI blog post:
You're an old school, disciplined networking leader who architects networks based on rock-solid, time-tested designs. But it seems that the prevailing fashion in network design and availability goes against your traditional design principles: inter-site firewall clustering, inter-site vMotion, DCI, etc.
Not so fast, my young padawan.
Let’s define prevailing fashion first. You might define it as Kool-Aid peddled by snake oil salesmen, or as cool network designs by people who know what they’re doing. If we stick with the first definition, you’re absolutely right.
Now let’s look at the second camp: how the people who know what they’re doing build their networks (Amazon VPC, Microsoft Azure or Bing, Google, Facebook, and a number of other large-scale networks). You’ll find L3 down to the ToR switch (or even the virtual switch), and absolutely no inter-site vMotion or clustering, because they don’t want to bet their service, ads, or likes on the whims of a technology that was designed to emulate thick yellow cable.
Want to know how to design an application to work over a stable network? Watch my Designing Active-Active and Disaster Recovery Data Centers webinar.
VeloCloud finds a better use for extra VoIP lines.
Sometimes the phrase ‘working the ticket queue’ is code for ‘doing meaningless work’. If you find yourself playing whack-a-mole with your ticket queue, then this is the post for you. You should strive to do meaningful work and this post discusses … Continue reading
The post Four Trouble Ticket Survival Tips appeared first on The Network Sherpa.
On this week's show -- in addition to covering the latest claims about the true identity of Satoshi Nakamoto -- we're taking a look at a recent deal between a very large bank in Australia and Sydney's University of New South Wales.
Free the MANO! The OPNFV board agrees to let projects expand to higher layers.