Responsible JavaScript

Jeremy Wagner

One way JavaScript bloat creeps into projects is through the unnecessary inclusion of utility libraries. Lodash is one such library that gained traction during a time when JavaScript itself couldn’t conveniently accomplish what Lodash’s utility functions could. These days, bare JavaScript can replace some of what Lodash provides.

Link · 350-352
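For illustration, here are a few common Lodash utilities alongside bare-JavaScript counterparts. This is a sketch, not from the book; the Lodash equivalents are shown in comments:

```javascript
// _.flattenDeep(nums) → Array.prototype.flat with Infinity depth
const nums = [1, [2, [3, 4]], 5];
const flat = nums.flat(Infinity); // [1, 2, 3, 4, 5]

// _.uniq([1, 1, 2, 3, 3]) → a Set spread back into an array
const unique = [...new Set([1, 1, 2, 3, 3])]; // [1, 2, 3]

// _.get(obj, "a.b.c", "fallback") → optional chaining plus nullish coalescing
const obj = { a: { b: {} } };
const value = obj.a?.b?.c ?? "fallback"; // "fallback"
```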

Notist’s architecture also encourages a strong separation of server-side and client-side code. While this can feel like an impediment to productivity, it raises guardrails that guide developers toward shipping less client-side code. This helps Notist deliver a great user experience that boots quickly with minimal main-thread work incurred by JavaScript.

Link · 428-430

Once you’ve settled on an architecture that prioritizes a good user experience, drafting a technology statement should be one of your next steps. A technology statement is a document that lays out the technology choices for a project and the rationale behind them. It’s like a code of conduct, but rather than setting expectations for the behavior of community members, it sets forth the technical requirements for prospective contributors. Whether a project is open source or proprietary, you need a technology statement that establishes the ground rules for contributions. Otherwise, when those excitable new contributors come along, their enthusiasm for their own preferred tools and methods can conflict with what’s best for the project.

Link · 440-446

When we rely solely on client-side rendering, each script responsible for populating the app shell becomes a potential point of failure.

Link · 559-560

Modern component-based frameworks (e.g., React) can render component markup not only on the client but also on a JavaScript backend as a string. This makes it possible to use the same component code on both the client and the server, which is an improvement for both the user and developer experiences. Don’t get complacent, though—server rendering is typically a synchronous and computationally expensive process. You’ll need to explore things like component caching in large apps to avoid inflating server response times.

Link · 578-582

Not all client-side routers necessarily pay attention to—or may be able to remedy—crucial accessibility issues such as element focus, scroll position restoration, navigation canceling, and so on. I can’t stress enough that when you use a client-side router, you’re challenging decades of discovery and foundational work that browser vendors have done to ensure a consistent and resilient navigation experience.

Link · 595-599

This is an example of a strong caching policy for a versioned static asset:

Cache-Control: max-age=31536000, immutable

This instructs the browser to cache the associated asset for up to one year. We’ve also added an additional immutable directive as a further optimization, which tells supporting browsers, “Hey! This file will never change. Don’t bother checking with the server to see if it has!”

Link · 688-693
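As a sketch of how such a policy might be assembled in application code, here is a tiny helper that builds the Cache-Control value. The helper name and its options are hypothetical, not from the book:

```javascript
// Build a Cache-Control header value for a versioned static asset.
// (Hypothetical helper for illustration.)
function cacheControlFor({ maxAge, immutable }) {
  const directives = [`max-age=${maxAge}`];
  if (immutable) directives.push("immutable");
  return directives.join(", ");
}

// One year in seconds, marked immutable, matches the policy above:
const header = cacheControlFor({ maxAge: 31536000, immutable: true });
// "max-age=31536000, immutable"
```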

Progressive enhancement—as it applies to JavaScript, anyway—is this idea that we first implement a website’s essential functionality without JavaScript. Once we have a baseline experience that works without JavaScript, we then apply JavaScript selectively to provide a better experience for those who can benefit from it (http://bkaprt.com/rjs39/02-16). Progressive enhancement is a hard sell because it requires consensus that an experience should function in more than one ideal way. Yet its chief benefit is that redundant layers of functionality make a website more accessible and inclusive no matter where it’s accessed from.

Link · 698-704

Set a baseline

You can apply progressive enhancement to most critical interaction points. Form-driven interactions are prime examples. For instance, let’s take a social media-type website where people can subscribe to new content from other people. This is an excellent example of where we can use progressive enhancement to facilitate a common interaction pattern that we’re all familiar with. This functionality is provided by—unsurprisingly—a

Link · 698-713

Once the server handles the subscription request, the browser is redirected back to the user’s profile page. Because the HTML is regenerated for that request, the state provided by the server will then show that the user has subscribed to that profile. If this interaction seems inefficient compared to a JavaScript fetch request, you’re not wrong! Thing is, though, the goal at this stage isn’t to be efficient but to set a baseline that expands access in the absence of JavaScript. There are many reasons why JavaScript may fail to load even when page markup succeeds. If—when—that happens, we can still offer a minimally viable experience.

Link · 725-731
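One way to layer JavaScript over a baseline form like the one described above is to intercept the submit event and fall back to the native POST when the enhancement fails. This is a browser-only sketch; the form’s id is hypothetical:

```javascript
// Enhance the baseline subscribe form only when JavaScript is available.
// If this code never runs, the form still works via a normal POST.
const form = document.querySelector("#subscribe-form"); // hypothetical id

form.addEventListener("submit", async (event) => {
  event.preventDefault();

  try {
    const response = await fetch(form.action, {
      method: "POST",
      body: new FormData(form)
    });
    if (!response.ok) throw new Error(`HTTP ${response.status}`);
    // Success: update the UI in place instead of reloading the page.
  } catch {
    // Failure: fall back to the browser's native form submission.
    form.submit();
  }
});
```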

First Paint (FP) and First Contentful Paint (FCP). FP identifies the first moment any pixels are painted to the screen, whereas FCP identifies when content has appeared in the viewport—that is, text, images, a non-white element, and so on.

Link · 1013-1015
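In supporting browsers, both paint timings can be read with a PerformanceObserver. A browser-only sketch:

```javascript
// Log First Paint and First Contentful Paint as the browser reports them.
// "buffered: true" replays entries recorded before the observer was created.
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    // entry.name is "first-paint" or "first-contentful-paint"
    console.log(entry.name, entry.startTime);
  }
}).observe({ type: "paint", buffered: true });
```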

Largest Contentful Paint (LCP) measures the moment when the largest piece of content in the viewport appears—be it an image, video, or text node (Fig 3.3). LCP is an improvement over other paint metrics since it emphasizes content prominence, therefore quantifying more perceptually significant milestones.

Link · 1030-1032
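LCP can be observed in the browser the same way; each new entry is a larger candidate, and the value settles once the user interacts or the page is hidden. A browser-only sketch:

```javascript
// The latest "largest-contentful-paint" entry is the current LCP candidate.
new PerformanceObserver((list) => {
  const entries = list.getEntries();
  const latest = entries[entries.length - 1];
  console.log("LCP candidate:", latest.startTime, latest.element);
}).observe({ type: "largest-contentful-paint", buffered: true });
```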

Performance-monitoring tools like WebPageTest and Lighthouse report LCP. At the time of writing, Google has advised that an LCP of 2.5 seconds or less is optimal (http://bkaprt.com/rjs39/03-04), but future guidance may vary. The best advice is to shoot for the best possible score, since LCP can affect how your page ranks in search results.

Link · 1040-1044

We can measure layout stability with a metric called Cumulative Layout Shift (CLS). Unlike time-based metrics, CLS uses a decimal scoring system that quantifies how far elements have shifted in the viewport from their previous position (http://bkaprt.com/rjs39/03-05). A good CLS score is 0.1 or less, but this could always change as some user-experience metrics may evolve over time. In practice, the closer you can get this metric to 0, the better off you’ll be.

Link · 1064-1068
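A minimal browser-only sketch of collecting CLS, summing layout-shift entries while excluding shifts triggered by recent user input (which don’t count against the score):

```javascript
// Accumulate the Cumulative Layout Shift score as the page runs.
let cls = 0;

new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    if (!entry.hadRecentInput) cls += entry.value;
  }
  console.log("CLS so far:", cls);
}).observe({ type: "layout-shift", buffered: true });
```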

Time to Interactive (TTI) is a lab metric that measures when a page is interactive. It’s calculated by first marking the page’s FCP. From there, the next five-second period in which there is neither a long task nor more than two in-flight network requests is also marked. The page’s TTI is at the start of that quiet window. Sites with a high TTI will behave more like screenshots of a website than an actual website, making users think your website is busted. A good TTI is less than five seconds on a middle-tier device (http://bkaprt.com/rjs39/03-07), but at the risk of repeating myself, I am once again asking you to aim as low as possible.

Link · 1090-1096

While TTI is great, it describes whether a page is interactive, not whether that page will respond to the first input quickly. First Input Delay (FID) is a field metric that fills in this information gap by measuring the delay between the first interaction with a page and when the browser responds to that interaction.

FID isn’t strictly a JavaScript metric. Interactions that don’t rely on JavaScript (links, form controls, and so forth) still factor into the metric’s calculation. If you rely significantly on JavaScript to drive interactivity, however, FID may well reflect that.

FID seems simplistic at first glance, but identifying the cause behind a high FID score gets tough. The first interaction with a page will vary from person to person. Someone on a fast device may present with a high FID score because of excessive JavaScript activity, while someone on the same page with a slow device may show a low FID score because they interacted with the page while the main thread was quiet.

One way to troubleshoot high FID values is to contextualize them so that you know what’s happening at the time of the input delay. You can record long tasks from the Long Task API (http://bkaprt.com/rjs39/03-08) occurring around the time of the FID itself. This is what I’ve done in a small metrics collection script I built (http://bkaprt.com/rjs39/03-09).

Being a field metric, FID yields a wide range of values, so it’s sensible to focus on the 95th percentile of those values (http://bkaprt.com/rjs39/03-10). This strategy prioritizes those experiencing extreme input latency.

Link · 1097-1113
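FID itself can be derived in the browser from the "first-input" entry: the delay is the gap between when the user interacted and when the browser could begin running event handlers. A browser-only sketch:

```javascript
// First Input Delay = processingStart (handlers could begin) minus
// startTime (when the user actually interacted).
new PerformanceObserver((list) => {
  const [entry] = list.getEntries();
  const fid = entry.processingStart - entry.startTime;
  console.log("FID:", fid, "ms for a", entry.name, "event");
}).observe({ type: "first-input", buffered: true });
```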

Always measure TBT in synthetic testing tools that use low- to mid-tier devices or CPU throttling, and avoid testing on fast devices. WebPageTest runs its tests on one of a broad range of physical devices in different locations, and reports TBT in its results summary.

Link · 1130-1132

The color red has special significance in Chrome’s profiler in that it calls out performance issues. An easy shortcut for finding problems is to scan the profiler’s activity overview and look for red strips. These strips represent periods of blocking time—and as we’ve established, blocking time equals an overworked main thread.

Link · 1213-1215

A better alternative for local testing is Chrome’s remote-device debugging tool, which spins up a developer-tools instance for an Android device connected to a laptop or desktop. If the connected device has USB debugging enabled (http://bkaprt.com/rjs39/03-16), you can access any pages the device has open in Chrome on the host machine’s developer tools by pointing Chrome to chrome://inspect#devices (Fig 3.21). You can launch an instance of Chrome’s developer tools from the debugging tool’s home screen by clicking on the Inspect link for a page. From there, you can use Chrome’s performance profiler to record page activity as you normally would.

Link · 1315-1322

Averages tend to be skewed since large collections of field metrics have many outliers. Focus on percentiles. The 75th, 90th, and 95th percentiles are especially valuable, as these intervals tend to emphasize slower experiences over faster ones. Pay special attention to the 90th percentile and up. This is where you’ll see how slower devices and poor network conditions affect performance. Address pain points in this range, and you’ll make your website faster for everyone regardless of how fast their network connection or device is.

Link · 1411-1416
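As a sketch of what computing those percentiles over collected field data might look like, here is a simple nearest-rank implementation (real RUM tooling may interpolate between ranks instead):

```javascript
// Nearest-rank percentile: the value at rank ceil(p/100 * n) in sorted order.
// (Illustrative sketch; analytics tools may use interpolated methods.)
function percentile(values, p) {
  const sorted = [...values].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(rank - 1, 0)];
}

// With ten samples, the 90th percentile is the ninth-ranked value:
const p90 = percentile([12, 8, 95, 40, 33, 7, 61, 18, 120, 22], 90); // 95
```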

On the Android version of Google Chrome, users can specify a preference for reduced data usage via “Data Saver Mode” or “Lite Mode.” When enabled, this mode does two things:

Chrome sets navigator.connection.saveData to true.

Chrome sends a Save-Data HTTP header with a value of on for every request.

Historically, that platform was the only one to offer such a mode, but more recently the prefers-reduced-data media query in CSS has emerged to expand data-saving functionality to all browsers. It can have a value of either no-preference, which communicates no preference for data usage, or reduce, which reduces data usage.

Link · 1838-1848
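A browser-only sketch that checks both signals before committing to heavy resources; feature checks guard against browsers that support neither:

```javascript
// True if the user has signaled a preference for reduced data usage,
// via either the Network Information API or the CSS media query.
const prefersReducedData =
  (navigator.connection && navigator.connection.saveData) ||
  (window.matchMedia &&
    window.matchMedia("(prefers-reduced-data: reduce)").matches);

if (prefersReducedData) {
  // e.g., load low-resolution images and skip optional scripts.
}
```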

// ./js/build-script-tags.js
module.exports = scriptUrls => scriptUrls.map(url => `<script src="${url}"></script>`);

// /app.js
const buildScriptTags = require("./js/build-script-tags.js");

These differences may seem superficial. They’re not. For one, ESM is supported in browsers and in later versions of Node.js, while CommonJS is only supported in Node.js. Second—and this is the kicker—CommonJS modules can’t be statically analyzed.

Link · 2362-2368

CommonJS modules can’t be statically analyzed. This is because both CommonJS’s module.exports and require constructs accept dynamic expressions. Therefore, bundlers can’t easily or reliably mark which CommonJS modules are unused (http://bkaprt.com/rjs39/05-08). ESM resolves this via the static import statement, which can only accept a plain string. This makes tree shaking loads more predictable for bundlers.

Link · 2367-2373
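A brief sketch of the contrast (paths and names here are hypothetical; the dynamic require is shown commented out since it can’t resolve outside a real project):

```javascript
// CommonJS: the argument to require() may be any runtime expression, so a
// bundler can't know which file it resolves to until the code actually runs.
const name = Math.random() > 0.5 ? "a" : "b";
// const widget = require(`./widgets/${name}.js`); // opaque to static analysis

// ESM: the specifier must be a plain string literal, so the full dependency
// graph is known at build time and unused exports can be tree-shaken:
// import { uniq } from "./utils.js";
```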

Code written using experimental features will always require transpiling, which represents an unnecessary performance cost. Never use or transform experimental features in production code. Like, ever. It carries zero user-facing benefits. This is an easy thing to avoid for Babel in particular: step lightly and do your research before using plugins or presets to transform production code if their names contain the word “stage” or “proposal” (http://bkaprt.com/rjs39/05-14).

Link · 2597-2602

Queries themselves can be as simple as last 2 versions, which selects the last two versions of every browser—but don’t do this. last 2 versions means you’re transforming JavaScript to be compatible all the way back to Internet Explorer 10! Unless that’s your intent, you’re better off with a different query (http://bkaprt.com/rjs39/05-18).

Link · 2622-2627
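One alternative is to target browsers by usage share rather than version count. A hypothetical .browserslistrc along those lines:

```
# .browserslistrc — target browsers above a usage threshold
# instead of a fixed version count (values here are illustrative).
>0.25%
not dead
```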

If you use Google Analytics, you can feed up to thirty days of visitor data into Can I Use, which will identify the browsers your visitors use. On that website, open the settings dialog and find the “Add from Analytics” section (Fig 5.5). After importing data, you’ll receive a high-level overview of what browsers your website visitors use. Once you’ve finished, Can I Use will also contextualize feature support at the visitor level.

Link · 2635-2639

To accommodate legacy browsers, I use a Browserslist query of ie 11.

Link · 2643-2644

You could use all the modern language features and compile your source into two sets of bundles—one for legacy browsers and one for modern ones—and then serve them based on browser capabilities. This is called differential serving.

Link · 2671-2673

For differential serving, the bundler outputs two sets of bundles from the same entry point using two different configurations. One configuration applies minimal transforms for modern browsers, resulting in smaller bundles. The other applies all necessary transforms for legacy browsers, resulting in larger bundles.

Link · 2693-2695

Your transpiler will need to export two configurations: one with a Browserslist query targeting legacy browsers, and another targeting modern browsers. In the latter case, Babel’s preset-env provides an esmodules option in its targets configuration (http://bkaprt.com/rjs39/05-22).

Link · 2697-2701
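A sketch of what those dual configurations might look like in a babel.config.js. The BROWSER_ENV environment-variable switch is a hypothetical convention, not something Babel prescribes:

```javascript
// babel.config.js — two target sets from one config (illustrative sketch).
const targets =
  process.env.BROWSER_ENV === "legacy"
    ? { browsers: ["ie 11"] } // apply every transform IE 11 needs
    : { esmodules: true };    // only browsers with native ESM support

module.exports = {
  presets: [["@babel/preset-env", { targets }]]
};
```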

differential serving by relying on two standardized behaviors:

Browsers won’t download scripts with type attribute values they don’t recognize. In this case, modern browsers will download scripts served with a type attribute value of module, but legacy browsers won’t.

Inversely, legacy browsers don’t recognize the nomodule attribute, so they’ll download scripts that use it. However, modern browsers do recognize nomodule, so they’ll skip scripts with that attribute.

Even so, some browsers don’t get this pattern right. Some browsers have issues where both bundles are downloaded. Some even execute both bundles (http://bkaprt.com/rjs39/05-26). You have options, though:

Link · 2743-2753
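One such option is to sidestep the double-download bugs by injecting the correct bundle with a script. A browser-only sketch, with hypothetical bundle paths:

```javascript
// "noModule" exists on <script> elements only in browsers that support ESM,
// so its presence tells us which bundle to inject. Paths are hypothetical.
const script = document.createElement("script");

if ("noModule" in script) {
  script.src = "/js/app.mjs";        // smaller, modern bundle
} else {
  script.src = "/js/app.legacy.js";  // fully transpiled legacy bundle
}

document.head.appendChild(script);
```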

a script-injection pattern that checks for browser support of the