Rolling out new features is always a blast, and it's extra rewarding when the new feature is a response to a customer request. We've had many conversations with SpeedCurve users who've told us that multiple Favorites dashboards would be a huge benefit for their teams.
Today, we're very excited to announce that multiple Favorites dashboards are now available. Here's why you need them and how to create them.
One of the best – and worst – things about real user monitoring is that it gives you unparalleled access to massive amounts of user data. The problem is when all this data leads to data indigestion. How do you know where to begin? And how do you know what to leave out in order to present a clear case for performance?
At SpeedCurve, we care about more than just showing you all your data. We want to show you the most important data. And we want to make it easy for you to share that data with people throughout your organization. That’s why we’re excited about the newest addition to our family of visualizations: engagement charts.
Being able to track which changes have an impact – either positive or negative – on your site’s performance is an important part of performance monitoring that can provide valuable feedback to your team. We wanted to make it easier to see at a glance all the changes to your site, without the cognitive overhead of interpreting charts. That’s why we created the new Changes dashboard, which gives you a newsfeed-style overview of recent activity in your SpeedCurve account.
Your Changes dashboard shows your performance budget alerts, deploys, site notes, and SpeedCurve product updates:
I’m at Shop.org this week, having really interesting conversations with online retailers. What I love about talking with this crowd is that – like me – they're super focused on user-perceived performance. Not surprisingly, we have a lot to talk about.
Making customers happy is the not-so-secret secret to retail success. Delivering a fast, consistent online experience has been proven to measurably increase every metric retailers care about – from conversions and revenue to retention and brand perception. (In fact, there's so much research in this area, I dedicated an entire chapter in my book to it. You can also find a number of great studies on WPO Stats.)
Delivering great, fast online experiences starts with asking two questions:
The good news is that most of the issues that are making pages slow for your shoppers are right on your pages, which means you have control over them. Here's an overview of the most common performance issues on retail sites, and how you can track them down and fix them.
I’m super excited to be able to say that I’ve joined Mark, Steve, and Tammy at SpeedCurve!
I’ve watched how Mark has shown over the last couple of years that performance monitoring doesn’t have to be dry and data-heavy; it can be insightful, interactive, and actionable. I’ve also been a follower of Steve’s work for many years. In fact, I should probably thank Steve for providing me with the knowledge that got me interested in web performance in the first place! Tammy’s work has been really interesting to follow, too – her focus on real people and how web performance impacts the way they use our websites is something that resonates strongly with me.
Joseph making BBC News way faster for all users.
We've improved our already fantastic interactive waterfall chart with a new collapsed mode that highlights all the key browser events. This lets you quickly scan all the events that happen as the page loads, and if you scrub your mouse across the waterfall you can easily correlate each event to what the user could see at that moment.
Along with all the browser metrics, you also get to see our new hero rendering times in context. Click on any event to see a large version of that moment in the filmstrip.
The key to a good user experience is quickly delivering the content your visitors care about the most. This is easy to say, but tricky to do. Every site has unique content and user engagement goals, which is why measuring how fast critical content renders has historically been a challenging task.
That's why we're very excited to introduce Hero Rendering Times, a set of new metrics for measuring the user experience. Hero Times measure when a page's most important content finishes rendering in the browser. These metrics are available right now to SpeedCurve users.
More on how Hero Rendering Times work further down in this post. But first, I want to give a bit of backstory that explains how we got here.
A couple of months ago, someone asked if I'd written a page bloat update recently. The answer was no. I've written a lot of posts about page bloat, starting way back in 2012, when the average page hit 1MB. To my mind, the topic had been well covered. We know that the general trend is that pages are getting bigger at a fairly consistent rate of growth. It didn't feel like there was much new territory to cover.
Also: it felt like Ilya Grigorik dropped the mic on the page bloat conversation with this awesome post, where he illustrated why the "average page" is a myth. Among the many things Ilya observed after analyzing HTTP Archive data for desktop sites: when you have outliers that weigh in at 30MB+ while more than 90% of your pages are under 5MB, an "average page size" of 2227KB (back in 2016) doesn't mean much.
The mic dropped. We all stared at it on the floor for a while, then wandered away. And now I want to propose we wander back. Why? Because the average page is now 3MB in size, and this seems like a good time to pause, check our assumptions, and ask ourselves:
Is there any reason to care about page size as a performance metric? And if we don't consider page size a meaningful metric, then what should we care about?
Being able to monitor and measure the performance of your pages is crucial. You know that already. You also know that the next step is to quickly find out what’s hurting your pages so you can stop the pain.
You want to know:
We’re super excited to announce that you can now use SpeedCurve to answer these questions.
If Mark and Steve invited you to work with them, what would you say? Exactly.
Okay, I have to elaborate a bit more about why I’m ridiculously excited about working with Mark and Steve. My first foray into the performance space was at the Velocity Conference in 2009. If you had told me then that someday I’d be working with that tall guy rocking the main stage, I would’ve thanked you for the kind words… while secretly thinking you were nuts. But here I am!
Tammy at International Women's Day Tech Talks in Toronto
SpeedCurve is a SPA (Single Page App), so we construct the charts dynamically using JSONP. It works great, but we're always looking for ways to make the dashboards faster. One downside to making requests dynamically is that the browser preloader isn't used. This isn't a factor for later SPA requests, but on the first page view the preloader might still bring some benefits. Or maybe not. We weren't sure, so we ran an A/B test. Long story short, doing the first JSONP request via markup caused charts to render 300 milliseconds faster.
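To make the comparison concrete, here's a rough sketch of the two approaches we compared. The function and callback names are illustrative, not our actual code:

```typescript
// Dynamic JSONP: the script element only exists after the SPA bootstrap runs,
// so the browser's preloader never sees this request while scanning the HTML.
function loadChartData(url: string, callbackName: string): void {
  const script = document.createElement('script');
  script.src = `${url}?callback=${callbackName}`;
  document.head.appendChild(script);
}

// The variant that won the A/B test emits the *first* request as a plain
// script tag in the server-rendered HTML instead, e.g.
//   <script src="/data/first-chart?callback=renderFirstChart"></script>
// The preloader discovers it while parsing the page, which is where the
// ~300 millisecond improvement came from. Later requests are still made
// dynamically by the SPA.
```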
We've improved the "Favorites" dashboard, which now lets you build your own charts that:
Here's a walkthrough showing you some of the new features:
SpeedCurve reports the number of critical blocking resources in the page. These are the resources that block rendering. Since you want users to see your content as quickly as possible, it's important to know what might be causing your page to render slowly. We recently enhanced the way we measure blocking resources and wanted to share those improvements with our customers as well as the performance community at large.
The main culprits that block rendering are scripts and stylesheets that are loaded synchronously. A great way to avoid this blocking problem is to load your scripts and stylesheets asynchronously. You can do that for scripts by using the async and defer attributes, plus other programmatic techniques. Loading stylesheets asynchronously is less popular but is still possible using techniques like loadCSS.
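For stylesheets, the core of the technique is to get the CSS onto the page without letting it block rendering. Here's a simplified sketch of the same general idea loadCSS implements (not the library's actual code), plus the markup attributes for scripts:

```typescript
// Load a stylesheet without blocking render (a simplified sketch of the
// general approach, not loadCSS's actual implementation).
function loadStylesheetAsync(href: string): void {
  const link = document.createElement('link');
  link.rel = 'stylesheet';
  link.href = href;
  link.media = 'print';                         // fetched without blocking render
  link.onload = () => { link.media = 'all'; };  // applied once it has loaded
  document.head.appendChild(link);
}

// Scripts are simpler: add an attribute in your markup.
//   <script src="app.js" defer></script>     executes after parsing, in document order
//   <script src="widget.js" async></script>  executes as soon as it arrives
```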
We're excited to announce SpeedCurve's RUM product, LUX.
The name LUX is a play on "Live User eXperience" and reflects how we've taken a different approach compared to other Real User Monitoring products. SpeedCurve's mission is to help designers and developers create joyous, fast user experiences. To do that, we focus on metrics that do a better job of revealing what the user's experience is really like.
In addition to standard RUM metrics like page load time and total size, LUX includes innovative new metrics that have more to do with the user experience, like start render time, number of critical blocking resources, images above the fold, and viewport size. LUX's RUM metrics help you figure out which design and development improvements will make your users happier and your business more successful.
We put a lot of thought into curating a thematic set of dashboards that help you understand the performance of your front-end, but sometimes you just want to play with the data yourself and slice 'n' dice it in all sorts of different ways. We've added a new "Favorites" dashboard that lets you do just that. You can explore the data and build your own charts, then rearrange them and share them with the team to help demonstrate the performance issues you're focused on right now.
Here's a walkthrough showing you how to slice the available data in different ways:
We also measure the CPU usage to different key events in the rendering of the page. SpeedCurve's focus is on the user experience and getting content in front of people as fast as possible, so we show you what the CPU is doing up until the page starts to render. This reflects CPU usage during the browser's critical rendering path and can highlight various issues. If there's lots of CPU idle time, then you're not delivering your resources efficiently. You want to get the CPU busy rendering the page nice and early, rather than leaving it sitting idle waiting for slow resources.
In the test below we see in the first pie chart that the CPU is spending a lot of time on layout up to the start render event, which is quite a different picture from the Fully Loaded CPU usage.
If you're a performance engineer, then you're familiar with waterfall charts. They're found in browser dev tools as well as in other performance services. I use multiple waterfall tools every day, but the waterfall chart I love the most is the one we've built at SpeedCurve:
Progressive Web Apps (PWAs) combine the best and newest features of the Web to deliver an experience that rivals native applications on mobile. Even better, they work on desktop, too. In fact, they work everywhere that the Web works! "Ah", you say, "that's not true! They require features that don't exist in all browsers." Because PWAs are "progressive", they can adapt to older browsers to deliver the best experience possible given the features that are available.
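Service worker registration is a good example of that progressive behaviour: browsers that support it get offline caching and faster repeat views, while older browsers simply keep getting the regular site. A minimal sketch (the file path is illustrative):

```typescript
// Progressive enhancement in practice: only register the service worker
// if the browser supports it; nothing breaks in browsers that don't.
// '/sw.js' is an illustrative path, not a specific app's worker.
if ('serviceWorker' in navigator) {
  navigator.serviceWorker
    .register('/sw.js')
    .catch((err) => console.warn('Service worker registration failed:', err));
}
```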
Given these winning attributes, it's no surprise that PWAs are the most popular movement in web development today. And while there are already numerous conferences, videos, and evangelists for PWAs, there's less focus on testing and tracking the performance of PWAs. Mark and I discussed this gap in PWA performance tooling. In response, we added some new features to SpeedCurve that, coupled with some existing features, provide a great toolkit for evaluating the performance of PWAs.
SpeedCurve’s sweet spot is the intersection of design and performance - where the user experience lives. Other monitoring services focus on network behavior and the mechanics of the browser. Yet users rarely complain that “the DNS lookups are too slow” or “the load event fired late”. Instead, users get frustrated when they have to wait for the content they care about to appear on the screen.
The key to a good user experience is quickly delivering the critical content.
SpeedCurve users may have noticed some changes recently. At the beginning of March we released a major redesign of our Settings UI that gives users more flexibility to get the exact test results they want. The two biggest changes were to the way we emulate devices and the ability to use different testing configurations for different sites.
At SpeedCurve, we focus on metrics that capture the user experience. A big part of the user experience is when content actually appears in front of the user. Since stylesheets and synchronous scripts are the culprits when it comes to blocking rendering, we've rolled out some new metrics that focus on these critical blocking resources.
The most helpful innovation we made is to highlight the critical blocking stylesheets and synchronous scripts in our waterfall charts. In the following example waterfall chart for ESPN, notice how the critical stylesheets (green) and synchronous scripts (orange) have a red hash pattern. Not surprisingly, the Start Render metric is delayed until after the last of these critical blocking resources is done loading. The "scrubber" at the bottom of the waterfall (showing the screenshot at that point in time) confirms that rendering has been blocked up to this point. Explore an example of an interactive waterfall chart.
If you want to improve performance, you must start by measuring performance. But what should you measure?
Across the performance industry, the metric that's used the most is "page load time" (i.e., "window.onload" or "document complete"). Page load time was pretty good at approximating the user experience in the days of Web 1.0, when pages were simpler and each user action loaded a new web page (multi-page websites). In the days of Web 2.0 and single-page apps, page load time no longer correlates well with what the user sees. A great illustration is found by comparing Gmail to Amazon.
In the last few years some better alternatives to page load time have gained popularity, such as start render time and Speed Index. But these metrics suffer from the same major drawback as page load time: they are ignorant of what content the user is most interested in on the page.
Any performance metric that values all the content the same is not a good metric.
Users don't give equal value to everything in the page. Instead, users typically focus on one or more critical design elements in the page, such as a product image or navbar. In searching for a good performance metric, ideally we would find one that measures how long the user waits before seeing this critical content. Since browsers don't know which content is the most important, it's necessary for website owners to put these performance metrics in place. The way to do this is to create custom metrics with User Timing.
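For example, if a product image is the hero content on a page, you can record a mark the moment it becomes available. Here's a minimal sketch, with illustrative element and metric names:

```typescript
// Record a User Timing mark when the hero content is available.
// The element id and mark name are illustrative; the image's load event is
// used here as a reasonable proxy for when it can be rendered.
const heroImage = document.querySelector<HTMLImageElement>('#hero-product-image');

heroImage?.addEventListener('load', () => {
  performance.mark('hero_image');

  // startTime is milliseconds since navigation start
  const [mark] = performance.getEntriesByName('hero_image');
  console.log(`Hero image available ${Math.round(mark.startTime)}ms after navigation start`);
});
```

Because these are standard User Timing marks, monitoring tools (SpeedCurve included) can pick them up and chart them alongside metrics like start render.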
Performance budgets are an important tool for ensuring your site is delivering a great user experience. Steve first experienced performance budgets while he was Head Performance Engineer at Google. The practice of using budgets to track performance took off with Tim Kadlec's blog post Setting a Performance Budget. The idea is to identify your performance goals and track the metrics that help you achieve them.
At SpeedCurve, we give performance budgets first-class status by tracking them in the Site dashboard. Here's an example of tracking a budget for image size.
Before setting your performance budgets, you first have to be monitoring your user experience. Only then can you set budgets and thresholds for improving your baseline user experience. This also allows you to quantify the improvements you're making and share success stories across the organization like "We just improved start render by 32% by reducing image requests to half the budgeted amount".
SpeedCurve now provides a visual diff of every deploy. A full-resolution PNG is captured for each URL, and each pixel is diffed with the previous deploy, allowing you to easily spot any visual changes you may or may not have expected.
The key to practising safe continuous deployment is to have a robust set of tools that give you immediate feedback on how your code has changed between deploys and its effect on the user experience. It's often very hard to spot all the visual changes in each deploy, especially in fast moving teams where a lot of the focus is on unit tests and other automated pass/fail systems. Visual diffs bring an increased level of tracking and confidence when you're able to compare any two deploys and see exactly what has visually changed.
We do a visual diff every time you click "Test Now" on the Deploy dashboard or use the SpeedCurve API to trigger a round of deploy testing. Integrating with the Deploy API is super easy and provides a robust set of metrics, plus before/after screenshots, visual diffs, waterfall charts, filmstrips, and videos for each deploy. You can then compare any two deploys to see exactly what's changed over time.
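As a rough sketch of what that integration can look like from a deploy script (the endpoint, auth scheme, and field name here are assumptions to verify against the API documentation; the key and note text are placeholders):

```typescript
// Rough sketch: notify SpeedCurve of a deploy from a CI/CD script.
// The endpoint, Basic-auth scheme, and 'note' field are assumptions to check
// against the API docs; SPEEDCURVE_API_KEY and the note text are placeholders.
const API_KEY = process.env.SPEEDCURVE_API_KEY ?? '';

async function notifyDeploy(note: string): Promise<void> {
  const response = await fetch('https://api.speedcurve.com/v1/deploys', {
    method: 'POST',
    headers: {
      Authorization: `Basic ${Buffer.from(`${API_KEY}:x`).toString('base64')}`,
      'Content-Type': 'application/x-www-form-urlencoded',
    },
    body: new URLSearchParams({ note }),
  });
  if (!response.ok) {
    throw new Error(`Deploy API responded with ${response.status}`);
  }
}

// For example, as the last step of a deploy:
// await notifyDeploy('Release 2.4.1 - new checkout flow');
```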
This week we added support for organizations with multiple teams and multiple users.
One of the toughest challenges was simply working out what to call the different layers within the SpeedCurve app. Developers can spend years buried inside a data model, but at the end of the day the UI has to be intuitive and easy to use! I hope we’ve done that, and if not, we’d love your feedback.