April 7, 2020
Episode 2 – Optimizing for Web Performance

– So, Harry, in the first video, we talked about how you moved
from CSS to performance, and how web performance is
very important for business, and then we discussed the fact that web performance is a full spectrum of things that you need to deal with, and you said it starts way down in the back end and infrastructure. Maybe we should discuss this one a little bit. Maybe this is the one that you deal with the least, and then we can move into the other topics, closer to the other side. – Sounds good. – So what about the back end? – So, for me, even though I understand
back-end stuff the least, it’s really important for me to admit that if you’ve got back-end
and infrastructure problems, any work you do on the
front end is just trying to play catch up, right? So you need to solve back-end
infrastructure things first, and that could happen in many, many different ways. So, working with, for example, Prismic, right, a headless company, there’s a lot of shuffling data around with APIs, perhaps, and it’s just making calls to the right things. That’s traditionally going to have quite a small footprint, and would be quite fast by default. So clients I work with like that tend to defer a lot of the back-end work to APIs, and obviously make sure their APIs are running nice and quick. But then, everything that they control on the front end is entirely up to them, and they can make it fast or slow. – So basically the first thing, from what I understand, is that you want a stable, guaranteed response time from the APIs. – Yeah. – Because if that’s not stable,
you can’t optimize anything. It’s a mess. – Yeah, exactly. If your time to first byte is a second, you’ve already lost a tremendous amount of time there. So you need to be way faster than that.
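To put a number on that (this is just a quick sketch, not something from the conversation itself), the browser’s Navigation Timing API will tell you your time to first byte:

```typescript
// A quick way to see time to first byte (TTFB) for the current page,
// using the browser's Navigation Timing API. The 500ms budget is illustrative.
const [nav] = performance.getEntriesByType('navigation') as PerformanceNavigationTiming[];

if (nav) {
  // responseStart is measured from the start of navigation, so it includes
  // redirects, DNS, TCP/TLS and server think-time.
  const ttfb = nav.responseStart;
  console.log(`TTFB: ${Math.round(ttfb)}ms`);

  if (ttfb > 500) {
    console.warn('The back end is eating the budget before any front-end work starts.');
  }
}
```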
Now I’ve worked with clients who haven’t worked in headless environments, but they’ve had a lot of GraphQL on the back end, perhaps, and it’s just been really expensive and really badly optimized. Now I can’t go in and tell them how to optimize that necessarily, but we can do things like instrument the back end, get some sort of coverage on how long things take, and then start to drill down into what’s going on. – Right.
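In its simplest form, that kind of instrumentation could be a Server-Timing header that surfaces query durations in the browser’s network panel; the Express route and query below are hypothetical:

```typescript
import express from 'express';

const app = express();

app.get('/api/articles', async (_req, res) => {
  const start = Date.now();
  const articles = await fetchArticlesFromDatabase(); // hypothetical query
  const dbMs = Date.now() - start;

  // Server-Timing shows up in the browser's Network panel, so front-end and
  // back-end teams can both see where the time is going.
  res.set('Server-Timing', `db;dur=${dbMs};desc="articles query"`);
  res.json(articles);
});

// Placeholder so the sketch is self-contained.
async function fetchArticlesFromDatabase(): Promise<unknown[]> {
  return [];
}

app.listen(3000);
```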
– Then once all that’s solved, just wrapping that in whatever you want, which is the beauty of headless, right? Because– – So headless basically
is that you have something that provides you with an API and you know it’s already dealt with. – Yeah, give me the raw
data, make sure that’s fast. Whatever you wrap around it
is entirely under your control and you’re in the position to
make it as fast as you want, and that also balances things
like developer experience. If you prefer maybe JAMstack over Ruby, you don’t have to be
forced down the Ruby path by whatever CMS or e-commerce
platform you’ve chosen. So I think what’s stood a lot of my clients in really good stead is going to sort of an API-based model, or at least having raw data that they consume exactly as they see fit, and also not having any dead weight around it. A traditional CMS or traditional e-commerce platform may have loads of other stuff, whether it’s necessary, whether it’s architectural, whether it’s the application itself, whatever; it’s just loads of unnecessary stuff in the way, and that could be making your application slower. Whereas you can strip it back to just the connector–
– The core. – Yeah. – Just give me the raw data, good response time, and let me handle everything else. – Exactly.
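In code, that “raw data, good response time” contract is little more than a thin API call that the front end is then free to wrap however it likes; a minimal sketch, with the endpoint and content type made up for illustration:

```typescript
// Hypothetical content type and endpoint; the point is that the front end
// receives raw, fast data and decides how to render it.
interface Article {
  title: string;
  body: string;
}

async function getArticles(): Promise<Article[]> {
  const response = await fetch('https://api.example.com/articles', {
    headers: { Accept: 'application/json' },
  });

  if (!response.ok) {
    throw new Error(`Content API responded with ${response.status}`);
  }

  return response.json();
}

// Whatever wraps this (React, Vue, plain templates) is entirely
// under the front end's control.
```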
– And from the framework side? Because then, whenever you get the raw data, you will have an app that will be getting these calls and then producing some HTML or some DOM with it for the rendering. Do you have any advice, or any experience on that? – In most cases, for nearly every case that I’ve seen, whatever framework you want to use, just server render it. Like, server render it, and hydrate it on the front end, or rehydrate it as the page loads– – Rehydrate, yeah, okay. – Purely because that’s the best way to be fast.
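A bare-bones sketch of that server-render-then-hydrate pattern, assuming React and Express purely as illustrative choices rather than anything prescribed here:

```typescript
// server.ts: render the app to HTML on the server (sketch only).
import express from 'express';
import React from 'react';
import { renderToString } from 'react-dom/server';

// Hypothetical root component taking pre-fetched raw data as props.
function App({ articles }: { articles: string[] }) {
  return React.createElement(
    'ul',
    null,
    articles.map((title) => React.createElement('li', { key: title }, title))
  );
}

const app = express();

app.get('/', async (_req, res) => {
  const articles = ['First post', 'Second post']; // stand-in for an API call
  const html = renderToString(React.createElement(App, { articles }));

  res.send(`<!doctype html>
    <div id="root">${html}</div>
    <script>window.__DATA__ = ${JSON.stringify(articles)}</script>
    <script src="/client.js"></script>`);
});

app.listen(3000);

// client.ts: attach event handlers to the markup that already exists.
// import { hydrate } from 'react-dom';
// hydrate(React.createElement(App, { articles: (window as any).__DATA__ }),
//         document.getElementById('root'));
```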
I’ve worked with a lot of clients who’ve gone full JavaScript in the browser. The immediate problem you’ve got there is you’ve got potentially several megabytes of runtime to download before you can even start manipulating your own data. – So nothing shows until you download all of it. – Exactly. There aren’t any non-disclosure things, but just out of good taste I won’t name any clients, but I’ve worked with clients who’ve had server-rendered sites which have been mega-fast,
and Express flattening down React into a server-rendered application is actually faster than the API calls, so that’s kind of continuing what we were talking about with APIs. We managed to find out, oh my goodness, the cost of actually server rendering React is far less than actually making the API call. Whereas on the flip side, I’ve had clients where they’ve gone full React in the browser, not tree shaken any of it,
not done any code splitting, and the developers say things like, “Oh well, they can download React once and it will be cached forever”, and you look at their caching headers and it’s like, well, it won’t, ’cause you’ve not actually set any caching headers, so– – Oh, there’s that as well, yeah. – Yeah, so there’s loads of stuff like that.
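For reference, “cached forever” only happens if the server actually says so. A minimal sketch, again assuming Express, that serves fingerprinted bundles with long-lived, immutable caching while keeping the HTML itself fresh:

```typescript
import express from 'express';

const app = express();

// Assumes bundles carry content hashes in their filenames (e.g. app.3f2a1c.js),
// so it is safe to cache them for a year and never revalidate.
app.use('/static', express.static('dist', {
  maxAge: '1y',
  immutable: true,
}));

// The HTML itself should stay fresh so it can point at new bundle hashes.
app.get('/', (_req, res) => {
  res.set('Cache-Control', 'no-cache');
  res.send('<!doctype html><script src="/static/app.3f2a1c.js"></script>');
});

app.listen(3000);
```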
And another one I’m coming up against at the moment with a client is, they did go for the server render, so their first render on screen is really, really fast, but they’re not actually trimming down their API calls. The number of calls they’re making from the very front end, there are way too many of them, and they’re also returning way more data than they need. So what we’ve actually got is almost a bottleneck in their front end where the APIs are as fast as you like, right, the APIs return the data really quickly; they’re just getting firehosed by it, trying to consume way too much of it. So that’s when we go back
onto the back end and say, right, well, cache these queries for 15 minutes. Nobody needs genuine real-time most popular TV shows– – Navigation, navigation for instance. Something you query on every page. – Right, exactly. – But you don’t need it updated every second, right? – Exactly.
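That kind of “15 minutes is fine” caching can be as little as a Cache-Control header on the API response, so a CDN or shared cache absorbs the repeat queries; the route and durations below are made up for illustration:

```typescript
import express from 'express';

const app = express();

app.get('/api/most-popular', async (_req, res) => {
  const shows = await getMostPopularShows(); // hypothetical expensive query

  // Let shared caches (a CDN or proxy) serve this for 15 minutes, and keep
  // serving a slightly stale copy while a fresh one is fetched in the background.
  res.set('Cache-Control', 'public, max-age=900, stale-while-revalidate=60');
  res.json(shows);
});

// Placeholder so the sketch is self-contained.
async function getMostPopularShows(): Promise<string[]> {
  return ['Show A', 'Show B'];
}

app.listen(3000);
```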
So what’s interesting is, API-based models give you a lot of flexibility. It could be argued that developers then need to have a really good overview of everything that’s going on, because if things grab data from wherever, they need to make sure they’re using it responsibly. But it also means you can respond to things very quickly. So with clients that I’ve
sort of just described, we can bounce between front end and back end and work out, all the time, oh, here’s your bottleneck; oh, now it’s here. And I think that’s quite an interesting paradigm. That’s quite an interesting
environment to work in. – So from the sort of things you talked about that are kind of interesting: first you talked about server rendering, and now we see, on top of React and Vue, several technologies trying to go that path and provide server rendering. First there is Gatsby, which is completely static, you know, it’s compiled at build time, so it gets all the content and then produces the pages. And then you get something like Nuxt and Next, which server render, and also have an export option, I guess, to do that. So I feel like all the frameworks are trying to go towards that. That’s from one side,
and then you talked about fetching too much data, or doing too many calls, and that’s interesting because, as you’re saying, it means a lot of round trips to the server, and the more you do, the more delay you have. But it also makes you think about the modeling of your content, and that’s from our side, for instance: you have a page, and developers tend to break it into little pieces, because, you know, developers, we like to break things into little pieces and assemble them. But sometimes, and this gets me back to my memories of dealing with databases,
when people started talking about denormalization: denormalize your database, and you will get it with one query instead of doing all these kinds of joins. I don’t know if you ran into that period when people talked a lot about denormalizing databases. – Yeah, yeah. – So, but maybe GraphQL
is also helping with that, to fetch, you know, a lot of data together instead of doing different queries. – Yeah, yeah. This is sort of straying out of the kind of thing I deal with, but it does abstract a lot of that problem away, or at least remove a lot of that problem. So generally, as long as you’re stepping carefully, you can get just as much data as you need.
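As a tiny illustration of fetching only as much data as you need, with a made-up endpoint and schema, a GraphQL query names just the fields the page will actually render:

```typescript
// Hypothetical endpoint and schema: the query names only the fields this
// page will render, rather than pulling back whole records.
const query = `
  query HomePage {
    articles(first: 10) {
      title
      slug
    }
  }
`;

async function getHomePageData() {
  const response = await fetch('https://api.example.com/graphql', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query }),
  });

  const { data } = await response.json();
  return data;
}
```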
The good thing about doing hydration from the front end as well is that you can hydrate components individually. So your server-rendered first pass might be the critical content, and anything that could be done lazily around that, yes, you introduce more round trips, but if it’s peripheral content, like, for example, a sidebar of trending topics on Twitter, let’s have a second API call, let’s have more round trips, because there’s no point plugging up your critical data with this kind of transient, less important stuff. So it does again give developers, as long as they’re thinking about things correctly, the ability to design very bespoke solutions. And I guess with a CMS or any headless environment, being able to actually split that page out, instead of having one big payload that gets fully server rendered, because then the problem you’ve got is, your server rendering is only as fast as your slowest or weakest query.
– Everything else, yeah. Yeah, exactly. – Doing bits of it on the client means you can start to hydrate it individually.
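A sketch of what hydrating individually can look like, assuming React and hypothetical component names: hydrate the server-rendered critical content immediately, and give the peripheral sidebar its own, later round trip:

```typescript
import React from 'react';
import { hydrate, render } from 'react-dom';

// Hypothetical components: the article body was server rendered, the sidebar was not.
import { ArticleBody } from './ArticleBody';
import { TrendingSidebar } from './TrendingSidebar';

// 1. Hydrate the critical, server-rendered content straight away.
hydrate(React.createElement(ArticleBody, null), document.getElementById('article'));

// 2. Fetch the transient, peripheral data afterwards: a deliberate extra
//    round trip that never blocks the critical content.
fetch('/api/trending')
  .then((res) => res.json())
  .then((topics: string[]) => {
    render(
      React.createElement(TrendingSidebar, { topics }),
      document.getElementById('sidebar')
    );
  });
```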
– Yeah, that’s interesting, and you know, it reminds me of when GitHub was affected by some AWS problems, and what they said is that only the search is impacted, everything else is not impacted. That’s how they can be resilient to any component’s failure: they still provide you with most of the service, and only one thing is impacted. You can stretch that to performance. If something’s going slower, it’s fine. Just don’t make everything suffer from it. – Yeah, it’s pretty elegant, right? – Yeah, yeah, and then you
guarantee as much as you can a good experience and a performant website. – Exactly, yeah. – Maybe let’s touch on other topics, you know, from the CDN and above, in the next video. – Sounds perfect. – Thanks.
