One way JavaScript bloat creeps into projects is through the unnecessary inclusion of utility libraries. Lodash is one such library that gained traction during a time when JavaScript itself couldn’t conveniently accomplish what Lodash’s utility functions could. These days, bare JavaScript can replace some of what Lodash provides.
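A few illustrative swaps (hardly exhaustive, and Lodash still earns its keep in some edge cases):

// Instead of _.uniq(array):
const unique = [...new Set([1, 2, 2, 3])]; // [1, 2, 3]

// Instead of _.flattenDeep(array):
const flattened = [1, [2, [3, [4]]]].flat(Infinity); // [1, 2, 3, 4]

// Instead of _.assign({}, defaults, options):
const merged = { ...{ a: 1 }, ...{ b: 2 } }; // { a: 1, b: 2 }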
Notist’s architecture also encourages a strong separation of server-side and client-side code. While this can feel like an impediment to productivity, it raises guardrails that guide developers toward shipping less client-side code. This helps Notist deliver a great user experience that boots quickly, with minimal main-thread work incurred by JavaScript.
Once you’ve settled on an architecture that prioritizes a good user experience, drafting a technology statement should be one of your next steps. A technology statement is a document that lays out the technology choices for a project and the rationale behind them. It’s like a code of conduct, but rather than setting expectations for the behavior of community members, it sets forth the technical requirements for prospective contributors. Whether a project is open source or proprietary, you need a technology statement that establishes the ground rules for contributions. Otherwise, when those excitable new contributors come along, their enthusiasm for their own preferred tools and methods can conflict with what’s best for the project.
Modern component-based frameworks (e.g., React) can render component markup not only on the client but also on a JavaScript backend as a string. This makes it possible to use the same component code on both the client and the server, which is an improvement for both the user and developer experiences. Don’t get complacent, though—server rendering is typically a synchronous and computationally expensive process. You’ll need to explore things like component caching in large apps to avoid inflating server response times.
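As a rough sketch of what this looks like with React on a Node.js server (assuming Express and a hypothetical App component):

// server.js — a minimal sketch, not a production setup.
const express = require("express");
const React = require("react");
const { renderToString } = require("react-dom/server");
const App = require("./App"); // hypothetical shared component

const app = express();

app.get("/", (req, res) => {
  // Render the same component tree the client uses to an HTML string.
  const markup = renderToString(React.createElement(App));
  res.send(`<!DOCTYPE html><div id="root">${markup}</div>`);
});

app.listen(8080);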
Not all client-side routers pay attention to—or are able to remedy—crucial accessibility issues such as element focus, scroll position restoration, navigation canceling, and so on. I can’t stress enough that when you use a client-side router, you’re challenging decades of discovery and foundational work that browser vendors have done to ensure a consistent and resilient navigation experience.
This is an example of a strong caching policy for a versioned static asset:

Cache-Control: max-age=31536000, immutable

This instructs the browser to cache the associated asset for up to one year. The immutable directive is a further optimization that tells supporting browsers, “Hey! This file will never change. Don’t bother checking with the server to see if it has!”
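If you serve static assets from an Express server, for instance, you can apply this policy in one place (a sketch; the serve-static options shown here assume a reasonably recent version):

const express = require("express");
const app = express();

// Versioned assets under /static get a one-year, immutable cache policy.
app.use("/static", express.static("dist", {
  maxAge: "1y",
  immutable: true, // appends the immutable directive to Cache-Control
}));

app.listen(8080);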
Progressive enhancement—as it applies to JavaScript, anyway—is the idea that we first implement a website’s essential functionality without JavaScript. Once we have a baseline experience that works without JavaScript, we then apply JavaScript selectively to provide a better experience for those who can benefit from it (http://bkaprt.com/rjs39/02-16). Progressive enhancement is a hard sell because it requires consensus that an experience should function in more than one ideal way. Yet its chief benefit is that redundant layers of functionality make a website more accessible and inclusive no matter where it’s accessed from.

Set a baseline

You can apply progressive enhancement to most critical interaction points. Form-driven interactions are prime examples. For instance, let’s take a social media-type website where people can subscribe to new content from other people. This is an excellent example of where we can use progressive enhancement to facilitate a common interaction pattern that we’re all familiar with. This functionality is provided by—unsurprisingly—a <button> element. We could write some JavaScript to make a fetch call to a backend API when that button is clicked, then update its state on the client if the request succeeded. This isn’t an antipattern per se, but it shouldn’t be the sole way for this functionality to work. It should be an enhancement on top of a baseline that works without JavaScript: a plain form submission that the server handles.
Once the server handles the subscription request, the browser is redirected back to the user’s profile page. Because the HTML is regenerated for that request, the state provided by the server will then show that the user has subscribed to that profile. If this interaction seems inefficient compared to a JavaScript fetch request, you’re not wrong! Thing is, though, the goal at this stage isn’t to be efficient but to set a baseline that expands access in the absence of JavaScript. There are many reasons JavaScript may fail to load even when page markup succeeds. If—when—that happens, we can still offer a minimally viable experience.
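Here’s a minimal sketch of the whole pattern (the markup and endpoint are hypothetical): the form works on its own, and JavaScript upgrades it where available.

// Baseline markup that works without JavaScript:
// <form action="/subscribe" method="POST">
//   <input type="hidden" name="profileId" value="742">
//   <button type="submit">Subscribe</button>
// </form>

// Enhancement: intercept the submission and use fetch instead.
document.querySelectorAll("form[action='/subscribe']").forEach((form) => {
  form.addEventListener("submit", async (event) => {
    event.preventDefault();

    const response = await fetch(form.action, {
      method: form.method,
      body: new FormData(form),
    });

    if (response.ok) {
      // Update state on the client without a full page reload.
      form.querySelector("button").textContent = "Subscribed";
    } else {
      // Fall back to the baseline full-page submission.
      form.submit();
    }
  });
});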
First Paint (FP) and First Contentful Paint (FCP). FP identifies the first moment any pixels are painted to the screen, whereas FCP identifies when content has appeared in the viewport—that is, text, images, a non-white <canvas> element, and so on.
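Both are exposed by the Paint Timing API, so you can observe them in the field (a sketch):

new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    // entry.name is either "first-paint" or "first-contentful-paint".
    console.log(`${entry.name}: ${entry.startTime}ms`);
  }
}).observe({ type: "paint", buffered: true });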
Largest Contentful Paint (LCP) measures the moment when the largest piece of content in the viewport appears—be it an image, video, or text node (Fig 3.3). LCP is an improvement over other paint metrics since it emphasizes content prominence, thereby quantifying a more perceptually significant milestone.
Performance-monitoring tools like WebPageTest and Lighthouse report LCP. At the time of writing, Google has advised that an LCP of 2.5 seconds or less is optimal (http://bkaprt.com/rjs39/03-04), but future guidance may vary. The best advice is to shoot for the best possible score, since LCP can affect how your page ranks in search results.
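In the field, you can watch LCP candidates as they’re reported (a sketch; later entries supersede earlier ones):

new PerformanceObserver((list) => {
  const entries = list.getEntries();
  const latest = entries[entries.length - 1];
  console.log(`LCP candidate: ${latest.startTime}ms`, latest.element);
}).observe({ type: "largest-contentful-paint", buffered: true });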
We can measure layout stability with a metric called Cumulative Layout Shift (CLS). Unlike time-based metrics, CLS uses a decimal scoring system that quantifies how far elements have shifted in the viewport from their previous position (http://bkaprt.com/rjs39/03-05). A good CLS score is 0.1 or less, though this guidance could change as user-experience metrics evolve. In practice, the closer you can get this metric to 0, the better off you’ll be.
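You can tally layout shifts in the field with a PerformanceObserver (a sketch):

let cumulativeScore = 0;

new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    // Shifts triggered by recent user input don't count against CLS.
    if (!entry.hadRecentInput) {
      cumulativeScore += entry.value;
      console.log(`CLS so far: ${cumulativeScore}`);
    }
  }
}).observe({ type: "layout-shift", buffered: true });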
Time to Interactive (TTI) is a lab metric that measures when a page is interactive. It’s calculated by first marking the page’s FCP. From there, the next five-second period in which there is neither a long task nor more than two in-flight network requests is also marked. The page’s TTI is at the start of that quiet window. Sites with a high TTI will behave more like screenshots of a website than an actual website, making users think your website is busted. A good TTI is less than five seconds on a middle-tier device (http://bkaprt.com/rjs39/03-07), but at the risk of repeating myself, I am once again asking you to aim as low as possible.
While TTI is great, it describes whether a page is interactive, not whether that page will respond to the first input quickly. First Input Delay (FID) is a field metric that fills in this information gap by measuring the delay between the first interaction with a page and when the browser responds to that interaction. FID isn’t strictly a JavaScript metric. Interactions that don’t rely on JavaScript (links, form controls, and so forth) still factor into the metric’s calculation. If you rely significantly on JavaScript to drive interactivity, however, FID may well reflect that.

FID seems simple at first glance, but identifying the cause behind a high FID score gets tough. The first interaction with a page will vary from person to person. Someone on a fast device may present with a high FID score because of excessive JavaScript activity, while someone on the same page with a slow device may show a low FID score because they interacted with the page while the main thread was quiet.

One way to troubleshoot high FID values is to contextualize them so that you know what’s happening at the time of the input delay. You can record long tasks from the Long Task API (http://bkaprt.com/rjs39/03-08) occurring around the time of the FID itself. This is what I’ve done in a small metrics collection script I built (http://bkaprt.com/rjs39/03-09). Being a field metric, FID yields a wide range of values, so it’s sensible to focus on the 95th percentile of those values (http://bkaprt.com/rjs39/03-10). This strategy prioritizes those experiencing extreme input latency.
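Capturing FID in the field looks something like this (a sketch):

new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    // The delay is the gap between the input and when its handlers could run.
    const fid = entry.processingStart - entry.startTime;
    console.log(`FID: ${fid}ms (input type: ${entry.name})`);
  }
}).observe({ type: "first-input", buffered: true });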
Always measure TBT in synthetic testing tools that use low- to mid-tier devices or CPU throttling, and avoid testing on fast devices. WebPageTest runs its tests on a broad range of physical devices in different locations and reports TBT in its results summary.
The color red has special significance in Chrome’s profiler in that it calls out performance issues. An easy shortcut for finding problems is to scan the profiler’s activity overview and look for red strips. These strips represent periods of blocking time—and as we’ve established, blocking time equals an overworked main thread.
A better alternative for local testing is Chrome’s remote-device debugging tool, which spins up a developer-tools instance for an Android device connected to a laptop or desktop. If the connected device has USB debugging enabled (http://bkaprt.com/rjs39/03-16), you can access any pages the device has open in Chrome on the host machine’s developer tools by pointing Chrome to chrome://inspect#devices (Fig 3.21). You can launch an instance of Chrome’s developer tools from the debugging tool’s home screen by clicking on the Inspect link for a page. From there, you can use Chrome’s performance profiler to record page activity as you normally would.
Averages tend to be skewed since large collections of field metrics have many outliers. Focus on percentiles. The 75th, 90th, and 95th percentiles are especially valuable, as these intervals tend to emphasize slower experiences over faster ones. Pay special attention to the 90th percentile and up. This is where you’ll see how slower devices and poor network conditions affect performance. Address pain points in this range, and you’ll make your website faster for everyone regardless of how fast their network connection or device is.
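If you’re computing percentiles from raw field data yourself, a simple nearest-rank calculation does the job (a sketch with hypothetical sample values):

// Nearest-rank percentile: p is 0–100; values is an array of samples.
function percentile(values, p) {
  const sorted = [...values].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(0, rank - 1)];
}

const fidSamples = [8, 12, 16, 24, 40, 90, 250, 1100]; // milliseconds
console.log(percentile(fidSamples, 90)); // 1100 (the slow tail, not the average)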
On the Android version of Google Chrome, users can specify a preference for reduced data usage via “Data Saver Mode” or “Lite Mode.” When enabled, this mode does two things:

- Chrome sets navigator.connection.saveData to true.
- Chrome sends a Save-Data HTTP header with a value of on for every request.

Historically, that platform was the only one to offer such a mode, but more recently the prefers-reduced-data media query has emerged in CSS to extend data-saving functionality to all browsers. It can have a value of either no-preference, which communicates no preference for data usage, or reduce, which communicates a preference for reduced data usage.
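Here’s a sketch of checking both signals from JavaScript before loading heavyweight extras (loadEnhancements is hypothetical):

const saveData = navigator.connection && navigator.connection.saveData;
const prefersReducedData =
  window.matchMedia("(prefers-reduced-data: reduce)").matches;

if (saveData || prefersReducedData) {
  // Skip noncritical payloads: autoplaying video, high-res imagery, and so on.
} else {
  loadEnhancements();
}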
// /js/build-script-tags.js
module.exports = scriptUrls => scriptUrls.map(url => `<script src="${url}"></script>`);

// /app.js
const buildScriptTags = require("./js/build-script-tags.js");

These differences may seem superficial. They’re not. For one, ESM is supported in browsers and in later versions of Node.js, while CommonJS is only supported in Node.js. Second—and this is the kicker—
CommonJS modules can’t be statically analyzed. This is because both CommonJS’s module.exports and require constructs accept dynamic expressions. Therefore, bundlers can’t easily or reliably mark which CommonJS modules are unused (http://bkaprt.com/rjs39/05-08). ESM resolves this via the static import statement, which can only accept a plain string. This makes tree shaking loads more predictable for bundlers.
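A contrived side-by-side makes the difference clear:

// CommonJS: require() accepts any expression, so a bundler can't
// know at build time which module, or which of its exports, is used.
const themeModule = process.env.THEME === "dark" ? "./dark.js" : "./light.js";
const theme = require(themeModule);

// ESM: import accepts only a plain string, so the dependency graph
// is fully known at build time and unused exports can be dropped.
import { buildScriptTags } from "./js/build-script-tags.js";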
Code written using experimental features will always require transpiling, which represents an unnecessary performance cost. Never use or transform experimental features in production code. Like, ever. It carries zero user-facing benefits. This is an easy thing to avoid for Babel in particular: step lightly and do your research before using plugins or presets to transform production code if their names contain the word “stage” or “proposal” (http://bkaprt.com/rjs39/05-14).
Queries themselves can be as simple as last 2 versions, which selects the last two versions of every browser—but don’t do this. last 2 versions means you’re transforming JavaScript to be compatible all the way to Internet Explorer 10! Unless that’s your intent, you’re better off with a different query (http://bkaprt.com/rjs39/05-18).
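A more deliberate query targets actual usage rather than version counts. For example, in a .browserslistrc file (a sketch; tune the thresholds to your audience):

# Browsers with more than 0.5% global usage share…
> 0.5%
# …that still receive updates…
not dead
# …excluding Internet Explorer outright.
not ie 11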
If you use Google Analytics, you can feed up to thirty days of visitor data into Can I Use, which will identify the browsers your visitors use. On that website, open the settings dialog and find the “Add from Analytics” section (Fig 5.5). After importing data, you’ll receive a high-level overview of what browsers your website visitors use. Once you’ve finished, Can I Use will also contextualize feature support at the visitor level.
You could use all the modern language features and compile your source into two sets of bundles—one for legacy browsers and one for modern ones—and then serve them based on browser capabilities. This is called differential serving.
For differential serving, the bundler outputs two sets of bundles from the same entry point using two different configurations. One configuration applies minimal transforms for modern browsers, resulting in smaller bundles. The other applies all necessary transforms for legacy browsers, resulting in larger bundles.
Your transpiler will need to export two configurations: one with a Browserslist query targeting legacy browsers, and another targeting modern browsers. In the latter case, Babel’s preset-env provides an esmodules option in its targets configuration (http://bkaprt.com/rjs39/05-22).
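Here’s a sketch of what that can look like in a babel.config.js, using Babel’s env option to hold both configurations (the legacy query is a placeholder):

// babel.config.js
module.exports = {
  env: {
    legacy: {
      presets: [
        ["@babel/preset-env", {
          targets: "> 0.5%, not dead", // full transforms for older browsers
        }],
      ],
    },
    modern: {
      presets: [
        ["@babel/preset-env", {
          targets: { esmodules: true }, // only browsers that support ESM
        }],
      ],
    },
  },
};

The bundler then runs twice, once with BABEL_ENV=legacy and once with BABEL_ENV=modern, emitting a set of bundles for each.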
For specifics, you can reference a guide I wrote on how to do this for webpack (http://bkaprt.com/rjs39/05-23), as well as a live coding demonstration (http://bkaprt.com/rjs39/05-24). I also have an example repository available (http://bkaprt.com/rjs39/05-25).
You can implement differential serving by relying on two standardized behaviors:

- Browsers won’t download scripts with type attribute values they don’t recognize. In this case, modern browsers will download scripts served with a type attribute value of module, but legacy browsers won’t.
- Inversely, legacy browsers don’t recognize the nomodule attribute, so they’ll download scripts that use it. However, modern browsers do recognize nomodule, so they’ll skip scripts with that attribute.

Even so, some browsers don’t get this pattern right. Some browsers have issues where both bundles are downloaded. Some even execute both bundles (http://bkaprt.com/rjs39/05-26).
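Quirks aside, here’s what the markup for this pattern looks like (bundle names are hypothetical):

<script type="module" src="/js/app.modern.js"></script>
<script nomodule src="/js/app.legacy.js" defer></script>

Modern browsers fetch only the first script; legacy browsers fetch only the second.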
You have options, though. One is a server-side approach: user agent sniffing via the browserslist-useragent package (http://bkaprt.com/rjs39/05-28), which itself relies on Can I Use’s compatibility data (http://bkaprt.com/rjs39/05-29).
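A sketch of how that might look in a Node.js server, using browserslist-useragent’s matchesUA (the queries and route here are placeholders):

const { matchesUA } = require("browserslist-useragent");

function wantsModernBundle(userAgent) {
  return matchesUA(userAgent, {
    browsers: ["last 2 Chrome versions", "last 2 Firefox versions", "last 2 Safari versions"],
    allowHigherVersions: true, // don't penalize browsers newer than the data
  });
}

app.get("/", (req, res) => { // assumes an Express-style app
  const scriptUrl = wantsModernBundle(req.headers["user-agent"])
    ? "/js/app.modern.js"
    : "/js/app.legacy.js";
  res.send(`<script src="${scriptUrl}" defer></script>`);
});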
If you can think of any situation where you’d want to monitor an element’s visibility, and then perform some work when its visibility changes, Intersection Observer is the right tool for the job, and it does that job fast. Intersection Observer isn’t the only observer in town. We also have Mutation Observer to detect if changes have occurred in the DOM, Resize Observer to detect if the size of an element has changed, and Performance Observer to detect if new entries have been added to performance APIs such as Navigation or Resource Timing.
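For instance, here’s a sketch of visibility-gated work (the selector and loadImage function are hypothetical):

const observer = new IntersectionObserver((entries) => {
  for (const entry of entries) {
    if (entry.isIntersecting) {
      loadImage(entry.target); // hypothetical: swap in the real image source
      observer.unobserve(entry.target); // do the work only once per element
    }
  }
});

document.querySelectorAll("img[data-src]").forEach((img) => observer.observe(img));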
Comlink (http://bkaprt.com/rjs39/06-25) is an excellent and minimalist off-the-shelf abstraction by Surma (http://bkaprt.com/rjs39/06-26) for web worker communication.
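A minimal sketch of Comlink in use (the file names and method are hypothetical):

// main.js
import * as Comlink from "comlink";

const worker = new Worker("worker.js", { type: "module" });
const api = Comlink.wrap(worker);

// The proxy turns worker messaging into an async function call.
const result = await api.sumOfSquares([1, 2, 3]);
console.log(result); // 14

// worker.js
import * as Comlink from "comlink";

Comlink.expose({
  sumOfSquares(values) {
    return values.reduce((sum, value) => sum + value * value, 0);
  },
});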
If you load any scripts within the web worker context, the web worker parses and compiles these off the main thread as well. If a dependency in your application can be restricted solely to the web worker scope, you can also confine its startup costs to the worker thread. That’s a big deal.
If the work that needs to be done on your website doesn’t require direct access to the DOM and is computationally expensive, web workers can improve your website’s runtime performance by reducing main-thread work.
Simon Hearne’s Request Map Generator (http://bkaprt.com/rjs39/07-12) finds all the unique origins involved for a given web page. You can also use the Domains tab in WebPageTest to do this. With this information, you can establish connections as early as possible using the preconnect resource hint, which we covered in Chapter 4. If you can avoid some of the latency involved in these request chains, that’s a big performance win.
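For instance, if a critical resource comes from a third-party origin (a hypothetical one here), a preconnect hint opens the connection ahead of the request:

<link rel="preconnect" href="https://cdn.example.com" crossorigin>

The browser performs the DNS lookup, TCP handshake, and TLS negotiation early, so the eventual request pays only for the transfer itself.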
When you preload assets, you’re effectively boosting their priority; if everything is prioritized, then nothing is. Where preload and JavaScript are concerned, you should only use it for first-party scripts that are critical to rendering—and that’s only if you can’t find a way to take such JavaScript out of the critical path and serve contentful markup directly from the server. In any case, squandering preloads on third-party scripts siphons away available bandwidth from scripts that power critical user-experience fixtures.
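When a preload is warranted, it looks like this (the script path is hypothetical):

<link rel="preload" href="/js/critical.js" as="script">

Reserve this for the rare first-party script that genuinely blocks rendering.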
Users don’t care if your analytics or heatmapping tools load faster. They’re after your content or product. You should never prioritize gathering data over performance, so save preload for the stuff that provides a material user-experience benefit.