Speed at Scale: Web Performance Tips and Tricks from the Trenches (Google I/O ’19)

Hi everyone, my name is Katie Hempenius, and I'm Addy Osmani. We work on the Chrome team on trying to keep the web fast. Today we're going to talk about a few web performance tips and tricks from real production sites.

But first, let's talk about buttons. You've probably had to cross the street at some point and press a pedestrian beg button. There are three types of people in the world: people who press the button once, people who don't press the button at all, and people who press it a hundred times because of course that makes it go faster. The frequency of pushing these buttons increases proportionally to the user's level of frustration. Want to know a secret? In New York, most of these buttons aren't even hooked up. So your new goal is to have a better Time to Interactive than this.

This experience of feeling frustrated when buttons just don't work applies to the web as well. According to a UX study done by Akamai in 2018, users expect experiences to be interactive at about 1.3 times the point when they're visually ready, and if they're not, people end up rage clicking. It's important for sites to be visually ready and interactive, and it's an area where we still have a lot of work to do. Here we can see page weight percentiles on the web, both overall and by resource type. If one of these categories is particularly high for a site, it typically indicates there's room for optimization. In case you're wondering what this looks like visually, it looks a little bit like this: you're sending just way too many resources down to the browser. Delightful user experiences can be found across the world, so today we're going to do a deep dive into performance learnings from some of the world's largest brands.

Let's start by talking about how sites approach performance. This probably looks familiar: for many sites, maintaining performance is just as difficult, if not more difficult, than getting fast in the first place. In fact, an internal study done by Google found that 40% of large brands regress on performance after six months. One of the best ways to prevent this from happening is through performance budgets. Performance budgets set standards for the performance of your site. Just like you might commit to delivering a certain level of uptime to your users, you commit to delivering a certain level of performance. There are a couple of different ways performance budgets can be defined. They can be based on time, for example a budget of less than a two-second Time to Interactive on 4G. They can be based on page resources, for example less than 150 kilobytes of JavaScript on a page. Or they can be based on computed metrics such as Lighthouse scores, for example a budget of a 90-or-greater Lighthouse performance score.

While there are many ways to set a performance budget, the motivation and benefits of doing so remain the same. When we talk to companies who use performance budgets, we hear the same thing over and over: they use performance budgets because it makes it easy to identify and fix performance issues before they ship. Just as tests catch code issues, performance budgets can catch performance issues. Walmart Grocery does this by running a custom job that checks the size of the builds corresponding to all PRs; if a PR increases a key bundle by more than one percent, the PR automatically fails and the issue is escalated to a performance engineer.
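Walmart's exact tooling isn't public, but a check along these lines can be quite small. Here's a minimal, hypothetical sketch of that kind of CI gate, assuming a `baseline-sizes.json` recorded from the main branch and bundles emitted to `dist/`; the file names and the 1% threshold are illustrative, not Walmart's actual implementation.

```js
// check-bundle-size.js — a hypothetical CI gate in the spirit of the check described
// above. "baseline-sizes.json" and the dist/ layout are assumptions, not Walmart's code.
const fs = require('fs');

const ALLOWED_GROWTH = 0.01; // fail the build if a key bundle grows by more than 1%
const baseline = JSON.parse(fs.readFileSync('baseline-sizes.json', 'utf8'));

let failed = false;
for (const [bundle, baselineBytes] of Object.entries(baseline)) {
  const currentBytes = fs.statSync(`dist/${bundle}`).size;
  const growth = (currentBytes - baselineBytes) / baselineBytes;
  if (growth > ALLOWED_GROWTH) {
    console.error(`${bundle}: ${baselineBytes} -> ${currentBytes} bytes (+${(growth * 100).toFixed(1)}%)`);
    failed = true;
  }
}
process.exit(failed ? 1 : 0);
```

A script like this can run in any CI system and exit non-zero to block the merge, which is the behaviour the Walmart team describes.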
Twitter takes a similar approach: they run a custom build tracker against all PRs. The build tracker comments on the PR with a detailed breakdown of how that PR will affect the various parts of the app, and engineers then use this information to determine whether a PR should be approved. In addition, they're working on incorporating this information into automatic checks that could potentially fail a PR.

Both Walmart and Twitter use custom infrastructure they built themselves to implement performance budgets. We realize not everybody has the resources and time to devote to that, so today we're really excited to announce LightWallet. LightWallet adds support for performance budgets to Lighthouse, and it's available today in the command-line version of Lighthouse.

The first and only step required to set up LightWallet is to add a budget.json file. In this file you'll define the budgets for your site. Once that's set up, run the newest version of Lighthouse from the command line, and make sure to use the --budget-path flag to indicate the path to your budget file. If you've done this correctly, you'll now see a budgets section within the Lighthouse report. This section gives you a breakdown of the resources on your page and, if applicable, the amount by which your budgets were exceeded.

LightWallet was officially released yesterday, but some companies have already been using it in production. Jabong is an online retailer based in India that recently went through a refactor that dropped the size of their app by 80%. They didn't want to lose those performance wins, so they decided to put performance budgets in place. Up on the screen you can see the exact budget.json file Jabong is using. Jabong's budgeting is based on resource sizes, but LightWallet also supports resource-count-based budgets. Jabong used the current size of their app as the basis for determining what their budgets would be. This worked well for them because their app is already in a good place.

But what if your app isn't in a good place? How should you set your budgets? One way to approach this problem is to look at HTTP Archive data to see what breakdown of resources corresponds with your performance goals. Speaking from personal experience, though, that's a lot of SQL to write, so to save you the effort we're making that information directly available today in what we're calling the performance budget calculator. Simply put, the performance budget calculator lets you forecast Time to Interactive based on the breakdown of resources on your page. In addition, it can generate a budget.json file for you. For example, a site with 100 kilobytes of JavaScript and 300 kilobytes of other resources typically has a 4-second Time to Interactive, and for every additional hundred kilobytes of JavaScript, that Time to Interactive increases by one second. No two sites are alike, so in addition to providing an estimate, the calculator also provides a Time to Interactive range. This range represents the 25th-to-75th-percentile TTI for similar sites.
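As a concrete illustration, here's what a minimal budget.json might look like. The specific numbers are placeholders you'd tune for your own site, and the resource types shown are only a subset of what Lighthouse supports; sizes are in kilobytes.

```json
[
  {
    "resourceSizes": [
      { "resourceType": "script", "budget": 150 },
      { "resourceType": "image", "budget": 300 },
      { "resourceType": "total", "budget": 1000 }
    ],
    "resourceCounts": [
      { "resourceType": "third-party", "budget": 10 }
    ]
  }
]
```

You'd then run something like `lighthouse https://example.com --budget-path=budget.json` and look for the budgets section in the resulting report.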
One of the things that can end up impacting your budgets is images, so let's talk about images, starting with lazy loading. We currently send down a lot of images with our pages, and that isn't great for limited data plans or particularly slow network connections. At the 90th percentile, HTTP Archive says we're shipping almost five megabytes' worth of images down on mobile and desktop, and that's not ideal.

Lazy loading is the strategy of loading resources as they're needed, and it applies really well to things like off-screen images. There's a really big opportunity here: once again looking at HTTP Archive, at the 90th percentile folks are currently shipping up to three megabytes of images that could be lazy loaded, and at the median, 416 kilobytes. Luckily, there are plenty of JavaScript libraries available for adding lazy loading to your pages today, things like lazysizes or react-lazyload. The way these usually work is that you specify a data-src instead of a src, as well as a class, and the library upgrades your data-src to a src as soon as the image comes into view. You can build on this with patterns like optimizing perceived performance and minimizing reflow, just to let your users know that something's happening while these images are being fetched.

Let's walk through some case studies of people who've used lazy loading effectively. Chrome.com is our browser consumer site, and recently we've been very focused on optimizing its performance. We'll cover some of those techniques in more depth soon, but they resulted in a 20% improvement in page load times on mobile and a 26% improvement on desktop. Lazy loading was one of the techniques the team used to get there. They use an SVG placeholder with image dimensions specified to avoid reflow, an IntersectionObserver to tell when images are in or near the viewport, and a small custom JavaScript lazy-loading implementation. The win here was 46 percent fewer image bytes on initial page load, which is a nice win.

We can also look at more advanced uses of image lazy loading. Shopee is a large e-commerce player in Southeast Asia. They recently adopted image lazy loading and were able to serve one megabyte fewer of images on initial load. The way Shopee works is that they display a placeholder by default, and once an image is inside the viewport (once again using IntersectionObserver) they trigger a network call to download the image in the background. Once the image is either decoded (if the browser supports the image decode API) or downloaded (if it doesn't), the image tag is rendered, and they can do things like a nice fade-in animation when the image appears, which overall looks quite pleasant.
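Here's a minimal sketch of this kind of IntersectionObserver-based lazy loading (not Shopee's actual code): images carry a data-src, get fetched as they approach the viewport, and are decoded before a fade-in class is applied. The rootMargin value and the CSS class name are illustrative.

```js
// Lazy-load images marked with data-src once they near the viewport.
const observer = new IntersectionObserver((entries, obs) => {
  for (const entry of entries) {
    if (!entry.isIntersecting) continue;
    const img = entry.target;
    img.src = img.dataset.src;                 // trigger the network request
    const ready = img.decode ? img.decode() : Promise.resolve();
    ready.catch(() => {}).then(() => img.classList.add('fade-in'));
    obs.unobserve(img);                        // each image only needs to load once
  }
}, { rootMargin: '200px 0px' });               // start fetching slightly before visibility

document.querySelectorAll('img[data-src]').forEach(img => observer.observe(img));
```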
We can also take a look at Netflix. As Netflix's catalog of films grows, it can become challenging to present members with enough information to decide what to watch, so they had a goal of creating a rich, enjoyable video preview experience that gives members a deeper idea of what's on offer. As part of this, Netflix wanted to optimize their homepage to reduce CPU load and network traffic while keeping the UX intuitive. The technical goal was to enable fast vertical scrolling through 30-plus rows of titles. The old version of their homepage rendered all of the titles at the highest priority, including data fetching from the server, creating all the DOM, and fetching all of their images. They wanted the new version to load much faster, minimize memory overhead, and allow smoother playback. Here's where they ended up: when the page loads, they first render the billboard image at the very top and the first three rows on the server; once they're on the client, they make a call for the rest of the page, render the rest of the rows, and then load the images in. They're effectively rendering just the first three rows of DOM and lazy loading the rest as needed. The impact was decreased load time for members who don't scroll very far, and overall faster startup times for video previews and full-screen playback. Before, CPU was needed to generate all of their DOM nodes and load all of their images; now they don't saturate as much member bandwidth, and they pull in four times fewer images on initial load. Their video previews have faster load times, less bandwidth consumption, and lower memory use overall.

From our tests, image lazy loading has helped many brands shave an average of 74.6 percent off their image bytes on initial load; these include the likes of Spotify and Target. So it looked like there was something here we could bring into the platform, and today we're happy to announce that native image lazy loading is coming to Chrome this summer. The idea is that with just one line of code, using the brand-new loading attribute, you'll be able to add lazy loading to your pages. This is a big deal and we're very excited about it. It will support three values: lazy, eager if an image is not going to be lazy loaded, and auto if you want to defer the decision to the browser. We're also happy to announce that this capability is coming to iframes: the exact same loading attribute will be usable on iframes, and I think this introduces a huge opportunity to optimize how we load third-party content.

Here's an example of the new loading attribute working in practice. On initial load we only fetch the images that are in or near the viewport. We also fetch the first two kilobytes of every image, as that gives us dimension information and helps us avoid reflow; it gives us the placeholders we need. Then we load the remaining images on demand. This leads to quite nice savings: we're only loading 548 kilobytes of images rather than 2.2 megabytes. Chrome's implementation of lazy loading does a few other things under the hood: we factor in the user's effective connection type when deciding which distance-from-viewport thresholds to use, and those can differ between 4G and 2G.

The loading attribute can either be treated as a progressive enhancement, only used in browsers that support it, or you can load a JavaScript lazy-loading library as a fallback. Here we're checking for the presence of the loading attribute on HTMLImageElement: if it's present, we just use the native attribute and upgrade our image data-srcs, and if it's not, we fetch something like lazysizes and apply it to get the same behavior. And here it is working in Firefox, where we've applied the exact same pattern, so we get cross-browser image lazy loading with a hybrid technique that works quite well.
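Here's roughly what that hybrid pattern looks like. The image path and the lazysizes CDN URL are illustrative; lazysizes is just one possible fallback library.

```html
<!-- Native lazy loading where supported, with a JS library as the fallback. -->
<img data-src="hero.jpg" loading="lazy" class="lazyload" alt="Hero image">
<script>
  if ('loading' in HTMLImageElement.prototype) {
    // The browser understands the loading attribute: upgrade data-src to src
    // and let native lazy loading decide when to fetch off-screen images.
    document.querySelectorAll('img[data-src]').forEach(img => {
      img.src = img.dataset.src;
    });
  } else {
    // No native support: fetch a lazy-loading library such as lazysizes, which
    // picks up elements with the "lazyload" class and a data-src attribute.
    const script = document.createElement('script');
    script.src = 'https://cdn.jsdelivr.net/npm/lazysizes@5/lazysizes.min.js';
    document.body.appendChild(script);
  }
</script>
```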
Users expect images to look good and be performant across a wide variety of devices, which is why responsive images are an important technique. Responsive images are the practice of serving multiple versions of an image so that the browser can choose the version that works best for the user's device. Responsive images can be based on serving different widths of an image, or on serving different densities. Density refers to the device pixel ratio, or pixel density, of the device the image is intended for: for example, traditional CRT monitors have a pixel density of 1, whereas Retina displays have a pixel density of 2. However, these are only two of the many pixel densities in use on devices today. What Twitter realized was that it was unnecessary to serve images beyond Retina (2x) density, because the human eye can't distinguish images beyond that density. This was an important realization because it decreased image size by 33%. The one exception is that they continue to serve higher-density images in situations where the image is displayed fullscreen and the user can pinch-zoom on it.
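In markup, density-capped responsive images look like this (file names are illustrative); the second example shows the width-based approach for comparison.

```html
<!-- Density-based responsive images, capped at 2x in the spirit of Twitter's approach. -->
<img src="avatar-200.jpg"
     srcset="avatar-200.jpg 1x, avatar-400.jpg 2x"
     width="200" height="200" alt="User avatar">

<!-- Width-based variant: the browser picks a candidate using the sizes hint. -->
<img src="product-400.jpg"
     srcset="product-400.jpg 400w, product-800.jpg 800w, product-1200.jpg 1200w"
     sizes="(max-width: 600px) 100vw, 400px"
     alt="Product photo">
```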
Responsive images are just one of the many techniques that go into a fully optimized image. When we talk with large brands, those optimizations not only include the usual suspects like compression and resizing, but also more advanced techniques like using machine learning for automated art direction or using A/B testing to evaluate the effectiveness of an image. This is where image CDNs come in. You can think of image CDNs as image optimization as a service; they provide a level of sophistication and functionality that can be difficult to replicate on your own with local, script-based image optimization. At a high level, image CDNs work by providing you with an API for accessing and, more importantly, manipulating your images. An image CDN can be something you manage yourself or leave to a third party. Many companies decide to go with a third party because they find it a better use of resources to have their engineers focus on their core business rather than building and maintaining another piece of software. Chewbacca, a travel site based in Europe, switched to Cloudinary, and this was exactly their experience: when they switched to an image CDN, they found that overall image size decreased by 80%. Those results are very good, but they're not necessarily unusual. Talking with brands who switched to image CDNs, we found they experienced a drop in image size of anywhere from forty to eighty percent. I personally think part of the reason is that image CDNs can provide a level of optimization and specialization that is difficult to replicate on your own, if only due to lack of time and resources. Images are often the single largest component of a website, so this translates into significant savings in overall page size.

Next, let's talk about JavaScript, starting with deferring third-party scripts and embeds: things like ads, analytics, and widgets. Third-party code is responsible for 57% of JavaScript execution time on the web. That's a huge number. This is based on HTTP Archive data, and it represents the majority of script execution time across the top four million websites. It includes everything from ads to analytics to embeds, and a lot of these CPU-intensive scripts can delay user interaction, so we need to exercise a lot of care when including third parties in our pages. When I ask folks how their JavaScript diet is going, the answer usually isn't great: tag managers, ads libraries. Maybe there's an opportunity to defer some of this work to a smarter point in time. Let's talk about a site that actually did this for real: The Telegraph.

The Telegraph knew that improving the performance of third-party scripts would take time, and that it benefits from instilling a performance culture in your organization. They say that everybody wants that tag on the page that's going to make the organization money, and it's very important to get those individuals in a room to educate, challenge, and work together on the problem. So they set up a web performance working group across their ads, marketing, and technology teams to review tags, so that non-technical stakeholders could understand the opportunity. What they discovered was that the single biggest improvement at The Telegraph was deferring all JavaScript, including their own, using the defer attribute. Based on their tests, this hasn't skewed analytics or advertising. That's a really big deal, especially for a publisher, because there's usually a lot of hesitation from marketing, advertising, and analytics folks; there's a fear of losing revenue or not tracking as many users as you'd like. But through collaboration and building that performance culture, they got to a place where the organization kept building on top of this, leading to changes such as a 60-second improvement in their Time to Interactive. They still have work to do, but it's a really solid start.

We can also talk about TUI, a travel operator in Europe. They were looking at how to be more customer-centric and realized that cheap prices alone weren't going to cut it if visitors were leaving the site because of slow speed. For speed projects to get off the ground at their organization, they had to get buy-in from management all the way up to their CEO, and through a test-and-learn mindset they discovered that when load times decreased by 71%, bounce rates decreased by 31%. Two optimizations in particular helped them improve performance. They were using Google Tag Manager in the document head, in their case to inject tracking scripts, so they moved the execution of Google Tag Manager to after the load event. They didn't see any meaningful drop in tracking as a result, and from their perspective the result was great: a 50% reduction in DOM complete. They also had a third-party A/B testing library that weighed a hundred kilobytes of gzipped and minified script. They realized that even if they pushed it to after the onload event it could still cause issues; they noticed flickering as it switched from one A/B test to another. So they threw that dependency out completely and rewrote their A/B testing as something custom, part of their CMS, in under a hundred lines of JavaScript. The impact was dropping that dependency and a 15% reduction in homepage JavaScript.
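Here's a simplified sketch of that kind of deferral, using Google Tag Manager as the example third party. This is not the full official GTM snippet, the container ID is a placeholder, and, as TUI did, you should measure the impact on your own analytics before adopting it.

```js
// Defer the tag manager until after the window load event so it doesn't
// compete with rendering the page. GTM-XXXXXXX is a placeholder container ID.
window.addEventListener('load', () => {
  window.dataLayer = window.dataLayer || [];
  window.dataLayer.push({ 'gtm.start': Date.now(), event: 'gtm.js' });

  const script = document.createElement('script');
  script.async = true;
  script.src = 'https://www.googletagmanager.com/gtm.js?id=GTM-XXXXXXX';
  document.head.appendChild(script);
});
```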
Let's also talk about embeds. We noticed Lighthouse flagging chrome.com as having a high JavaScript execution time despite it looking like a mostly static site; this would delay how soon users could interact with the experience. What we saw was that chrome.com had a "Watch video" button that shows a YouTube promo when you click it. Unfortunately, they had dropped YouTube's default embed into their HTML, and this was pulling in the entire YouTube video player, all of its scripts and resources, on initial page load, bumping their Time to Interactive up to 13.6 seconds. The solution was to stop loading those YouTube embeds and their scripts eagerly on page load and instead load them on interaction. Now, when a user clicks to watch the video, that's when we load all of those resources on demand, because the user has signaled an intent that they're interested in watching. This led to a 69-point improvement in their Lighthouse performance score, as well as a ten-second-faster Time to Interactive. A really big change.

No performance talk is complete without a discussion of the cost of libraries and how you should just remove all of them. But since that topic has been covered so many times, I wanted to take a slightly different angle and talk about alternatives to removing expensive libraries: in other words, if removal isn't an option for you, what else can you look into? The options are deferring or deprecating expensive libraries (taking steps toward eventually removing them), replacing a library with something less expensive, deferring the use of an expensive library until after the initial page load, and updating a library to a newer version.

When replacing libraries, there are generally two things you want to look for: that the library is smaller, but also, maybe more importantly, that it's tree-shakeable. By only using tree-shakeable dependencies, you ensure that you're only paying the cost of the parts of the library you actually use.

You can also defer the loading and use of expensive dependencies until after the initial page load. Tokopedia is an online retailer based in Indonesia, and they're using this technique on their landing page. They really wanted their initial landing-page experience to be as fast as possible, so they rewrote it in Svelte; the new version only takes 37 kilobytes of JavaScript to render above-the-fold content. By comparison, their existing React app is 320 kilobytes. I think this is a really interesting technique because they did not rewrite their entire app: they're still using the React app, they just lazy load it in the background using service workers. This can be a really nice alternative to rewriting an entire application. As I mentioned, Tokopedia used Svelte for their landing page; in addition to Svelte, Preact and lit-html are two other very lightweight frameworks to look into.
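For the "defer expensive dependencies" option, a bundler-friendly sketch might look like the following. The module name `heavy-charting-lib` and the `renderChart` function are placeholders for whatever dependency you're deferring, and dynamic `import()` assumes your bundler splits it into its own chunk.

```js
// Defer an expensive dependency until the browser is idle instead of
// shipping it in the critical bundle.
const whenIdle = 'requestIdleCallback' in window
  ? (cb) => requestIdleCallback(cb)
  : (cb) => setTimeout(cb, 1);          // simple fallback for browsers without the API

whenIdle(() => {
  import('heavy-charting-lib')          // placeholder module, split out by the bundler
    .then(({ renderChart }) => renderChart(document.querySelector('#chart')))
    .catch(err => console.error('Deferred load failed', err));
});
```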
Last, consider updating your dependencies. Because they take advantage of newer technologies, newer versions of libraries are often much more performant than their predecessors. For example, Zalando is a European fashion retailer, and they noticed that their particular version of React was impacting page load performance. They A/B tested this and found that by updating from React 15.6.1 to 16.2 they improved load time by 100 milliseconds, which led to a 0.7 percent uplift in revenue per session.

Another useful optimization to consider is code splitting. When we're thinking about loading routes and components, we ideally want to do three things: let the user know that something is happening, load the minimal code and data really fast, and render as quickly as possible. Code splitting enables this by breaking our larger bundles into smaller ones that we can load on demand, which unlocks all sorts of interesting loading patterns, including progressive bootstrapping.

When it comes to JavaScript, it has a real cost, and that cost comes in two parts: download and execution. Download times are critical on really slow networks like 2G and 3G, and JavaScript execution time is critical on devices with slow CPUs, because JavaScript is CPU-bound. This is one of those places where small JavaScript bundles are useful for improving download speeds, lowering memory usage, and reducing overall CPU cost. Our team has a motto when it comes to JavaScript: if JavaScript doesn't bring users joy, thank it and throw it away. I believe that was in an extended special of Marie Kondo's show.

One site that breaks up its JavaScript pretty well is Google Shopping. They're interactive in under five seconds over 3G, and they have a goal of loading very, very quickly, including their product details page. Shopping has at least three JavaScript chunks: one for above-the-fold rendering, one for responding to user interactions, and one for other features supported by Search. Their work to get to this place involved writing a new template compiler that produces smaller code, and also shipping a lighter experience for folks on the slowest of connections; they actually ship a version that's under 15 kilobytes of code for users in those markets.

Another good example is Walmart Grocery. Walmart Grocery is a single-page application that loads as much as possible up front, and they've been focused on cleaning up their code: removing old and duplicate dependencies and anything unnecessary, and splitting up their core JavaScript bundles using code splitting. They've also been doing things Katie suggested earlier, like moving to smaller builds of libraries such as Moment.js. The impact of this iterative work has been great: a 69 percent smaller JavaScript bundle and a 28% faster TTI. They continue to work on shaving JavaScript off their experience to improve it as much as possible.

We can also talk about Twitter. Twitter is a popular social networking site; eighty percent of their customers use mobile every day, and they've been focused on building a web experience that lets users access content quickly regardless of their device or their connection. When Twitter Lite first launched a few years ago, the team invested in many optimizations to how they load JavaScript: they used route-based code splitting and forty on-demand chunks to break up those large JavaScript bundles, so that users could get interactive in just a few seconds over 4G. Between this and smart usage of resource hints, they're able to prioritize loading their bundles pretty early.
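Route-based code splitting is straightforward to sketch with a framework. Here's what the pattern can look like with React (React 16.6+ and react-router v5 assumed); this is an illustrative pattern, not Twitter's actual code, and the route names are placeholders.

```jsx
// Each route's bundle is only fetched when the user navigates to it.
import React, { Suspense, lazy } from 'react';
import { BrowserRouter, Route, Switch } from 'react-router-dom';

const Home = lazy(() => import('./routes/Home'));
const Search = lazy(() => import('./routes/Search'));
const Profile = lazy(() => import('./routes/Profile'));

export default function App() {
  return (
    <BrowserRouter>
      {/* The fallback renders while a route chunk is being downloaded. */}
      <Suspense fallback={<div>Loading…</div>}>
        <Switch>
          <Route exact path="/" component={Home} />
          <Route path="/search" component={Search} />
          <Route path="/:user" component={Profile} />
        </Switch>
      </Suspense>
    </BrowserRouter>
  );
}
```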
So what did the team focus on next? Twitter is a global site that supports 34 languages, and supporting this required a toolchain of libraries and plugins for handling things like locale strings. After choosing a set of open-source tools, they discovered that on every build they were including internationalization strings that invalidated file hashes across the entire app. Each deploy ended up invalidating the cache for their users, which meant their service worker had to redownload everything. This is a really hard problem to solve, and they ended up rewriting and revamping their internationalization pipeline. This enabled code to be dropped from all of their bundles and translation strings to be lazy loaded: a 30-kilobyte reduction in overall bundle size. It also unlocked other optimizations, such as loading Twitter's emoji picker on demand, which saves another 50 kilobytes that their core bundles no longer have to include. The changes to the internationalization pipeline also led to an 80 percent improvement in JavaScript execution, so some nice wins all around.

We can also look at JavaScript for your first-time users, the people coming to your experience for the very first time, by looking at Spotify. Spotify started serving their web player to users without an account, showing an option to sign up as soon as the user clicks on a song. Since first-time users don't need the playback library or the core logic, Spotify keeps first-time page loads very lightweight: just 60 kilobytes of JavaScript, to get interactive really quickly. Once users authenticate and log in, they then lazy load the web player and the vendor chunk, meaning that as a first-time user you get a really quick experience, and then a perfectly decent experience for the rest of your navigations. Spotify also recently rewrote their web player in React and Redux, and one decision they made was to improve the performance of navigations in the player. Previously they loaded an iframe for every view, which was bad for performance. They found that Redux was good for storing data from REST APIs in a normalized shape and using it to start rendering as soon as the user clicks a link. This enabled quick navigations between pages even on really slow connections, because it reduced overall API calls.

And finally, we can take a look at Jabong. Jabong, as Katie mentioned earlier, is a popular fashion destination in India. They decided to rewrite one of their experiences as a PWA, and to keep that experience fast they used the PRPL pattern: push, render, pre-cache, and lazy load. This allowed them to get interactive with just 18 kilobytes of JavaScript. They use HTTP/2 server push, they trimmed their vendor bundle to eight kilobytes, and they pre-cache scripts for future routes using service workers, which overall led to an 82% improvement in TTI, with some good business wins off the back of it.

Performant sites display text as soon as possible; in other words, they don't hide text while waiting for a web font to load. By default, browsers hide text if the corresponding web font hasn't loaded yet, and the length of time they do this for depends on the browser. It's simple to see why that's not ideal. The good news is that the fix is also simple: wherever you declare a font face, just add the line font-display: swap. This tells the browser to use a default system font initially and then swap in the custom web font once it arrives. Now, you do currently have to self-host web fonts to add font-display to your pages, right? Yes, but we have a special announcement today. Developers have been asking us to do something about this with Google Fonts for about a year and a half, and today we're happy to announce that Google Fonts is going to support font-display, so you'll be able to set values like swap, optional, and the rest of the full set. We're very excited about this change. It actually landed at the last minute, last night, so we've got some docs to update.
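In CSS, the fix mentioned above is one line inside your @font-face rule; the family name and file path here are illustrative.

```css
/* Self-hosted web font with font-display: swap — text renders immediately in a
   fallback font and is swapped once the web font arrives. */
@font-face {
  font-family: 'Brand Sans';                        /* illustrative name */
  src: url('/fonts/brand-sans.woff2') format('woff2');
  font-weight: 400;
  font-display: swap;
}
```

For Google Fonts, the same behavior is exposed through a display query parameter on the stylesheet URL, for example `https://fonts.googleapis.com/css?family=Roboto&display=swap`.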
Let's also talk about resource hints. Browsers do their best to prioritize fetching the resources they think are important, but you as an author know more about your experience than anybody else, and thankfully you can use resource hints to put that knowledge to work. Here are some examples. Barefoot is an award-winning wine business. They recently used a library called Quicklink, which is under a kilobyte in size and prefetches links that are in the viewport using IntersectionObserver; off the back of this they saw a 2.7-second faster TTI for future pages. Jabong is a site that depends heavily on JavaScript for its experience, so they used link rel=preload to preload their critical bundles and saw a 1.5-second faster Time to Interactive. And chrome.com was originally connecting to nine different origins for its resources; they used link rel=preconnect and saw a 0.7-second decrease in latency.
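For reference, those three hints look like this in markup; the URLs and file paths are illustrative.

```html
<!-- Warm up a connection to an origin you know you'll need soon: -->
<link rel="preconnect" href="https://images.example-cdn.com" crossorigin>

<!-- Ask the browser to fetch a critical bundle at high priority for this page: -->
<link rel="preload" href="/static/js/critical-bundle.js" as="script">

<!-- Fetch a likely next-page resource at low priority during idle time: -->
<link rel="prefetch" href="/static/js/search-page.js" as="script">
```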
What are other folks doing with prefetching? eBay is one of the world's most popular e-commerce sites, and to help users see content sooner they've started prefetching search results: eBay now prefetches the top five items on a search results page for faster subsequent loads. This led to an improvement of 759 milliseconds on a custom metric called above-the-fold time, which is a lot like First Meaningful Paint, and eBay shared that they're already seeing a positive impact on conversions through prefetching. The way this works is that they do their prefetching after requestIdleCallback, once the page has settled. It's rolling out to a few regions right now: it has shipped to eBay Australia and is coming soon to the US and UK. As part of eBay's site speed initiative they're also doing predictive prefetching of static assets: if you're on the homepage, it fetches the assets for the search page; if you're on the search page, it does the same for the item page; and so on. Right now their predictive prefetching is fairly static, but they're excited to experiment with machine learning and analytics to do it more intelligently.

Another site using a very similar technique is Virgilio Sport, a sports news website in Italy. They've been improving the performance of their core journeys. They track impressions and clicks from users navigating around the experience, and they use link rel=prefetch and service workers to prefetch the most-clicked article URLs: every 7 minutes their service worker fetches the top articles picked by their algorithms, except when the user is on a slow-2G or 2G connection. The impact was a 78% faster article fetch time, and article impressions have been on the rise too: after just three weeks of using this optimization, they saw a 45 percent increase in article impressions.

Critical CSS is the CSS necessary to render above-the-fold content. It should be inlined, and the initial document it's inlined into should be delivered in under 14 kilobytes. This allows the browser to render content to the user as soon as the first packet arrives. In particular, critical CSS tends to have a large impact on First Contentful Paint. For example, TUI, the European travel site, was able to improve its First Contentful Paint by 1.4 seconds, down to one second, by inlining its CSS.

Nikkei is another site using critical CSS. They're a large Japanese newspaper publisher, and one of the issues they ran into when implementing this was that they had a lot of critical CSS, 300 kilobytes to be specific. Part of the reason was that there were a lot of differences in styles between pages, but it was also due to factors like whether the user was logged in, whether a paywall was shown, whether the user had a paid or free subscription, and so on. Once they realized this, they decided to create a critical CSS server that takes all of these variables as inputs and returns the correct critical CSS for a given situation. The application server then inlines this CSS in the document returned to the user. They're now taking this optimization a step further and trying out a technique known as Edge Side Includes (ESI). ESI is a markup language that allows you to dynamically assemble documents at the CDN level. Why this is exciting is that it allows Nikkei to get the benefits of critical CSS while also being able to cache the CSS, granted at the CDN level rather than in the browser. If the CSS isn't already cached on the CDN, it simply falls back to serving the default CSS, and the requested CSS is cached for future use. Nikkei is still testing Edge Side Includes, but through the dynamic critical CSS work alone they were able to decrease the amount of inline CSS in their application by 80% and improve their First Contentful Paint by a full second.

Brotli is a newer compression algorithm that can provide better text compression than gzip. OYO is an Indian hospitality company, and they use Brotli to compress CSS and JavaScript. This decreased the transfer size of their JavaScript by 15%, which translated into a 37 percent improvement in latency. Most companies are only using Brotli on static assets at the moment. The reason is that, particularly at high compression ratios, Brotli can take longer, sometimes much longer, than gzip to compress. But that isn't to say Brotli can't be used on dynamic content, and used effectively: Twitter is currently using Brotli to compress their API responses, and on P75 payloads, some of their larger ones, they found that Brotli decreased the size by 90%. That's a very large saving, but it makes sense in the context of compression algorithms being more effective on larger payloads.

Our last topic is adaptive serving. Loading a page can be a very different experience depending on whether you're on a slow network or a slow device, or on high-end hardware. The Network Information API is one of the web platform features that gives you signals, such as the effective type of the user's connection and whether they've enabled data saver, so you can adapt. Really, loading is a spectrum, so let's look at how some sites handle this challenge.
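Here's a small sketch of how a site might consult the Network Information API before opting into heavier features. The module path and the lite-mode class are placeholders, and the API isn't available in every browser, so it's feature-detected (including older prefixed forms) first.

```js
// Adapt what you load to the user's connection, along the lines described above.
const connection = navigator.connection || navigator.mozConnection || navigator.webkitConnection;
const saveData = Boolean(connection && connection.saveData);
const effectiveType = (connection && connection.effectiveType) || '4g';

if (!saveData && effectiveType === '4g') {
  // Fast connection and no Data Saver: opt in to heavier enhancements,
  // e.g. image zoom or video previews.
  import('./features/image-zoom.js').then(m => m.init());   // placeholder module
} else {
  // Slow connection or Data Saver enabled: ship the lightweight experience only.
  document.documentElement.classList.add('lite-mode');
}
```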
For users on low-end mobile devices, Facebook offers a very basic version of their site that loads very fast: it has no JavaScript, very limited images, and minimal CSS, with tables mostly used for layout. What's great about this experience is that users can view and interact with it in under two seconds over 3G.

What about Twitter? Across platforms, Twitter is designed to minimize the amount of data you use, and you can further reduce data usage by enabling data saver mode. This lets you control what media you want to download; in this mode, images are only loaded when you tap on them. On iOS and Android this led to a 50% reduction in data usage from images, and on the web anywhere up to 80%. These savings add up, and you still get an experience that's pretty fast with Twitter on limited data plans.

As part of looking into how Twitter handles effective connection type, we discovered they're doing something really fascinating with image uploads. On the server, Twitter compresses images to 85-percent-quality JPEG with a maximum edge of 4096 pixels. But what about when you've got your phone out, you're taking a picture, and you're on a slow connection that may not be able to upload it? On the client, they now check whether an image appears to be above a certain size threshold, and if so they draw it to a canvas, output it as an 85-percent-quality JPEG, and check whether that improved the size. This can often shrink phone-captured images from four megabytes down to 500 kilobytes. The impact was a 9.5 percent reduction in cancelled photo uploads, and they're doing all sorts of other interesting things depending on the effective type of the user's connection.

And finally, we've got eBay. eBay is experimenting with adaptive serving based on effective connection type. If a user is on a fast connection, they lazy load features like product image zooming; on a slow connection, it isn't loaded at all. They're also looking at limiting the number of search results presented to users on really slow connections. These strategies let them focus on small payloads and give users the best experience for their situation. Those are just a few of the things people are doing with adaptive serving.

It's almost time for us to go. We hope you found our talk helpful. Remember: you can get fast with many of the optimizations we talked about today, and you can stay fast using performance budgets and LightWallet. That's it from us. Thank you!
