How to Dramatically Improve Your Web App Performance and Security in One Hour


With valuable input from Felix Hassert and Roland Guelle.

The Black Friday, Cyber Monday, and holiday season sales rush is coming. Your customers already prefer to shop online, and because of the well-known global situation, many of them don't even have any other choice.

As digital transformation rapidly accelerates for all of us, service providers and creators keep struggling to deliver working solutions faster. Let's admit it: sometimes that also means cutting corners, all for the sake of delivery speed.

Proper performance and security optimizations are profoundly important for any web app; unfortunately, they are costly, time consuming, and require special skills. We are all only too aware of that.

→ Explore Progressive web apps (PWAs). How to engage your customers more effectively.

Is there any faster way, ideally an immediate turnkey solution, to make it happen? A tool that would enable digital companies to deliver a responsive and secure digital experience to their customers, while giving the service creators and maintainers the much-needed breathing space to continue with the optimizations under the hood?

A tool that helps with all web applications and is not limited to ecommerce stores.

Yes, there is one, but let’s start from the beginning.

Part 1: Non-functional requirements

These are the requirements that usually don't make it onto the first page of the requirements document or onto dashboards. We all tend to focus on the visual side of things and on how comfortable the interaction is, and will be, for the user. It's natural.

Usually the heated conversations are not about 'too techy' topics, like efficient content delivery, which TLS version should be supported, and so on.

Of course, the web application should perform fast, use an optimal amount of bandwidth, and be secure. No serious application vendor in the world would ever ask their clients, "Do you want your app to perform optimally? Shall we pay attention to the security of your solution?" Of course they do.

→ Read more about future of business applications in the eyes of CIOs

How to do it home alone

For any of the non-functional requirements specified, or temporarily forgotten, there are multiple commercially available and open source frameworks and components.

So it is just a matter of using your npm skills to install them, make some configuration changes, and you're ready. Right? Unfortunately, no.

First of all, these solutions come with a steep learning curve, and people are often reluctant to invest in that (you can't see security or performance with the naked eye unless it's terrible, and by then it's already too late).

→ Explore also DevSecOps – DevOps with security

Second, combining different open source solutions so that they work together can be a painful and time-consuming adventure.

Third, once done, it will stop working soon after because of the dynamics of open source development: dependencies break all the time.

As you can see, it's certainly an interesting but long and arduous process. Getting it right may tax the project's budget up to a level which isn't acceptable.


Avenga did it for you. wao.io is a comprehensive Software as a Service (SaaS) solution which can be turned on in minutes, and right away you will benefit from improved security and performance. There is no need to modify your code; it's just a DNS change and you're ready to go.

It is a proxy, cache, and optimizer, all in one, sitting between your client's browser and your web server.

We promise to take a look under the hood later, but the simplest way to picture wao.io in action is as a layer between your visitors' browsers and your origin server.
→ Read about one more Avenga product, Couper.io

Part 2: Performance

Attention spans are getting shorter all the time. If no page content is shown in a few seconds, users will close the tab and go somewhere else.

Also, if they see it loading and loading, with no way to interact with the application, that waiting time will most likely be spent thinking about your competitors and about the poor user experience you have delivered. People want to jump into your digital solution, check something, or do something quickly using their smartphones, tablets, or laptops, and then move on.

→ Explore Performance testing trends – from testing to performance engineering

When we speak about the performance of anything, there's usually a voice that says 'let's buy or rent better, stronger hardware and get over it', because the cost of a developer's time is much higher. It's true that a developer's time is expensive, but hardware-based fixes are quite limited in terms of what they can achieve.

Moore's law, in the sense of ever-faster single cores, stopped a few years ago. CPUs don't become faster anymore; we only get more of them (more cores). That puts an end to hardware-only optimizations if the architecture doesn't allow for horizontal scaling. Most websites don't have such sophisticated setups: there is a web server, an application server, and a database, none of which benefit from more CPUs without further tuning and attention.

Plus, let's add the carbon footprint: skipping smart optimizations generates tons of wasted electrical energy. This matters both for the environment and for the economy.

Data transmission is not a free resource either. Either you pay for bandwidth, or for transmission packages (like N TB per month).

The Internet is not a magical black box or a legendary cloud that is everywhere around us. Geography still matters and will continue to matter in the future, as you want your node to be close to your clients. Close in this context means within the same geographical region, with low latency (physics cannot be beaten) and high throughput (whose price heavily depends on how far away you are from the nearest transmission hub).


One of the classic approaches to improving the performance of any IT system, web apps included, is caching. This strategy is used everywhere, from the CPU itself to operating systems, applications, databases, and file system drivers, so it's no surprise that caching is almost as old as the Internet itself.

The idea is always the same: if an operation always yields the same result for the same input, don't compute it again; store and reuse the result. Whatever you expect to be used in the near future and/or more frequently should be located where there's lower latency and higher transmission throughput. The cache does this at the expense of its capacity; it cannot store everything.
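As an illustration of this store-and-reuse trade-off (a minimal sketch, not wao.io's implementation), here is a tiny capacity-bounded cache in Python:

```python
from collections import OrderedDict

def make_cached(fn, capacity=128):
    """Wrap fn so repeated calls with the same input reuse the stored result.

    An OrderedDict acts as an LRU store: when capacity is exceeded, the
    least recently used entry is evicted -- the cache cannot store everything.
    """
    store = OrderedDict()

    def cached(arg):
        if arg in store:
            store.move_to_end(arg)      # mark as recently used
            return store[arg]
        result = fn(arg)                # compute once...
        store[arg] = result             # ...then store and reuse
        if len(store) > capacity:
            store.popitem(last=False)   # evict the least recently used entry
        return result

    return cached

calls = []  # track how often the "expensive" render actually runs
render = make_cached(lambda page: calls.append(page) or f"<html>{page}</html>",
                     capacity=2)
render("home"); render("home"); render("about")
print(len(calls))  # the "home" page was rendered only once
```

The same principle applies at every layer mentioned above; only the storage medium and eviction policy change.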

In web applications, the cache lives in multiple places. The best idea is to not transmit the content at all (again), so we have client-side caches and client-side local storage (HTML5) with different expiration and refresh policies.

But that's not all; it can be further improved by server-side caching. Image generation (especially in the case of dynamic images such as custom ads or graphs) is expensive, so this is another step that can be avoided in order to reduce CPU and memory impact, as well as transmission costs. As a result, your local browser cache can be filled faster with pre-rendered images cached on the server.

It also applies to heavy JavaScript modules, which are key components of a feature-rich modern web experience.

What is special about the wao.io caching subsystem? wao.io implements multiple caches at different places in its infrastructure. There are Varnish caches between the visitors of a website and the 'optimization cloud', there are caches between the 'optimization cloud' and the origin servers, and there are several caches within the 'optimization cloud' which contain the optimized files, so they can be revalidated when HTTP caching requires revalidation or retransmission. That way we ensure that we only re-transfer and re-optimize content when really necessary.


Part 3: Image optimization

Images take up most of the bandwidth on modern web pages, so it makes the most sense to minimize their size (meaning both weight in kilobytes and resolution) to fit the target device.

Most images are compressed using lossy algorithms that reduce image quality, and adding further compression to an already compressed image may result in visible artifacts. So the key question here is how to find the right balance between image size and image quality. There are new compression algorithms and image formats that help reduce image size by tens of percent without any noticeable degradation.

There are GIFs, PNGs, and JPEGs, but also WebP, HEIF (HEIC), FLIF, BPG, and others, with various degrees of compatibility, but all delivering on the promise of smaller images with the same image quality (as perceived by humans).

→ Have a look at  UI design trends cycle – from skeuomorphism back to… neumorphism

The optimal compression ratio depends on each particular image: how visible the artifacts will be and what the initial compression level was. We cannot just apply a single compression-level parameter to the whole page, or use a percentage of size reduction as a guide here.

Is the image compressed too heavily (albeit much smaller), or is it still good enough?

Can this be determined automatically? Fortunately, yes. There are algorithms that compare images before and after compression to measure error visibility and determine structural similarity. For instance, people are less sensitive to chroma errors (color displacement) and more sensitive to blocky artifacts (blockiness), and also to artifacts in luminance, especially on the blue gradients of skies, but much less so in trees and foliage. People are also very sensitive to faces and to any artifacts visible there.
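To illustrate the before/after comparison idea (wao.io's actual quality-assessment metrics are its own; this sketch uses plain PSNR, a much cruder measure than perceptual, structural-similarity metrics):

```python
import math

def psnr(original, compressed, max_value=255):
    """Peak signal-to-noise ratio between two equally sized pixel sequences.

    A crude stand-in for perceptual metrics such as SSIM: a higher PSNR
    means the compressed image is numerically closer to the original.
    Real quality assessment also weights errors by *visibility*
    (chroma shifts vs. blockiness vs. luminance changes).
    """
    mse = sum((a - b) ** 2 for a, b in zip(original, compressed)) / len(original)
    if mse == 0:
        return float("inf")  # identical images
    return 10 * math.log10(max_value ** 2 / mse)

# Toy 4-pixel grayscale "images": mild vs. heavy compression error.
source = [100, 120, 140, 160]
mild   = [101, 119, 141, 159]
heavy  = [90, 135, 120, 180]
print(psnr(source, mild) > psnr(source, heavy))  # True: mild error scores higher
```

An automated optimizer can search for the strongest compression whose score stays above a chosen visibility threshold, per image.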

How is wao.io doing this?

A website in its entirety has a lot of different images. Every single one of them can be compressed to a certain degree before it starts to look bad. Choosing a single compression ratio for the whole site means either hurting a lot of images or wasting a lot of space; usually it means both. The worst part is that you can't control which image falls into which bucket. wao.io uses advanced and unique image quality assessment techniques to calculate the optimal compression force for every single image. This maximizes the bytes saved while never visibly hurting the image quality. wao.io also detects images that are already well compressed and delivers them quickly without any changes. One of the most difficult tasks of optimizing is deciding when not to optimize.

Converting images to so-called next-generation formats such as WebP may not sound hard. But (fortunately) there is more than one web browser out there, and the web server has to take care that only supported image formats are served to each browser. Internet Explorer doesn't support WebP, and neither do older Firefox versions. wao.io handles the format picking for you, so you don't have to change all your HTML to use the "responsive" picture element.

Ultimate optimization – do not load images until necessary

The technique called lazy loading means loading images as late as possible. Even optimized images still slow down the page, so their loading can be deferred until they actually need to be seen by the end user.

How does wao.io help with image optimization? wao.io analyzes which images are in the initial viewport of a website. That way, we can ensure that we only delay, i.e., lazy load, those images that are really not necessary at first. This allows the necessary script to be loaded asynchronously, truly minimizing the render-blocking impact, not only of the images but also of the optimization itself.

Additionally, wao.io's lazy loading analyzes the user's scroll behavior on the currently displayed document. If the scroll speed exceeds a certain threshold, wao.io does not load images that would scroll through the viewport that fast, because it would be impossible for a visitor to actually look at them. wao.io also offers different styles of placeholders, from transparent pixels, through blurry preview images, to custom-colored or automatically colored (calculating a single color representative of the image) spaces. wao.io creates placeholder images with the dimensions of the original image, which allows the browser to render the HTML page in its final layout without any "reflows" when the original images are loaded. That way a "jumping" layout is avoided and the CPU needed for rendering the page is reduced.
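The viewport and scroll-speed checks described above can be sketched as pure logic; the threshold value here is an invented example, not wao.io's:

```python
def should_load(image_top, image_bottom, viewport_top, viewport_height,
                scroll_speed, speed_threshold=3000):
    """Decide whether a lazily loaded image is worth fetching right now.

    Mirrors the behavior described above (threshold in px/s is invented):
    skip images outside the viewport entirely, and skip even visible ones
    while the user is scrolling too fast to actually look at them.
    """
    viewport_bottom = viewport_top + viewport_height
    visible = image_bottom > viewport_top and image_top < viewport_bottom
    if not visible:
        return False                     # below/above the fold: defer
    return scroll_speed <= speed_threshold

print(should_load(100, 400, 0, 800, scroll_speed=500))    # visible, slow scroll: load
print(should_load(100, 400, 0, 800, scroll_speed=9000))   # flying past: skip
print(should_load(2000, 2300, 0, 800, scroll_speed=0))    # below the fold: skip
```

In a real page this decision runs in the browser on scroll events; the sketch just isolates the rule itself.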

→ Read how to create modern user experiences with wao.io

Client side optimization

How can servers help with client-side optimization? For instance, every line of JS code or CSS that is not necessary for the application is a waste of CPU and memory resources.

How does wao.io help with this aspect of optimization? wao.io uses modern compression algorithms, like Brotli, to minimize the amount of data transferred. wao.io also mangles and minifies resources, reducing file sizes even further without changing the functionality of those scripts. Combining this with the possibility of forbidding content-type sniffing not only minimizes file size but also maximizes the security of a website when transferring assets, hindering malicious code from being executed if compromised files are ever delivered to a visitor.
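A minimal illustration of why minification and compression compound (using Python's stdlib zlib as a stand-in for Brotli, and a deliberately naive minifier, not wao.io's):

```python
import re
import zlib

css = """
/* a comment the client never needs */
body {
    color:  #ffffff;
    margin: 0px;
}
"""

# Naive minification for illustration only: strip comments, collapse whitespace.
minified = re.sub(r"/\*.*?\*/", "", css, flags=re.S)
minified = re.sub(r"\s+", " ", minified).strip()

raw = zlib.compress(css.encode())        # compress the original stylesheet
small = zlib.compress(minified.encode()) # compress the minified stylesheet
print(len(minified) < len(css))          # minification shrinks the source...
print(len(small) <= len(raw))            # ...and the compressed transfer size
```

Production minifiers also rename (mangle) local identifiers and preserve semantics via a real parser; this regex version would break many real stylesheets and is only a size demonstration.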

CDN compatibility

Your company may already have chosen and deployed a CDN. Most CDNs require that they be the only CDN in use. wao.io, however, can work properly alongside other CDNs.

How is that made possible? Why can wao.io work with other CDNs while other CDNs want to have a 'monopoly'?

The most common setup of wao.io as a companion to a CDN is between the CDN and your origin server. The "C" in CDN stands for content, and in many cases that is what CDNs get paid for. You can save a lot of money by reducing the volume of data transmitted to clients through the CDN (with image compression, and so forth).

In cases where a site's DNS doesn't point to wao.io directly, wao.io's self-checks can determine whether the site is active behind a downstream CDN, and it can adapt its behaviour accordingly. This is important for issuing SSL certificates and for resolving the IP address of the CDN's client so that the security features function correctly.


HTTP/2 enables faster page load times thanks to a technique called request multiplexing, and it keeps the connection alive for a long time (a major departure from the connectionless idea of HTTP/1.x). What used to be a performance best practice in HTTP/1 is considered harmful in HTTP/2 setups. For example, inlining small resources such as images or scripts was used to reduce the number of requests, while inflating the document size and impeding caching at the same time. With HTTP/2, requests no longer carry a significant cost overhead, so we can finally benefit from cache effects and reduced bandwidth.

While this is a good thing for the future, right now we have to deal with old clients using HTTP/1 alongside newer ones using h2. Decisions, like whether to inline or not, have to be made at runtime. Also, a carefully configured routing and network stack is necessary to actually benefit from the promised performance improvements. Did we mention that h2 works over HTTPS only? Deploying h2 takes more than flipping a switch.
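The runtime inline-or-link decision could be sketched like this; the protocol check and the 2 KB cutoff are invented example values, not wao.io's rules:

```python
def embed_strategy(protocol, resource_size):
    """Pick a delivery strategy at runtime, per connection.

    A sketch of the trade-off described above (the 2 KB cutoff is an
    invented example value): inlining tiny resources saves round trips on
    HTTP/1.x, but on HTTP/2 ("h2") multiplexed requests are cheap and a
    separate resource stays independently cacheable.
    """
    if protocol.startswith("HTTP/1") and resource_size < 2048:
        return "inline"  # avoid an extra request on HTTP/1.x
    return "link"        # h2: keep it a separate, cacheable resource

print(embed_strategy("HTTP/1.1", 500))    # inline
print(embed_strategy("HTTP/2", 500))      # link
print(embed_strategy("HTTP/1.1", 50000))  # link: too big to inline anywhere
```

The point is that the choice depends on the negotiated protocol of each connection, so it cannot be baked into the HTML at build time.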


Many web resources, such as images or JavaScript, and even full HTML pages, are cacheable. Thus, every user will load each file only once. But many websites don't have a cache in their setup. With a shared cache between client and server, every file is loaded only once for all users.

Maintaining a cache isn't easy. There is a trade-off between storage cost and cache utility, and scaling a cache is a topic of its own. While adding servers to a system may increase its performance, adding more caches horizontally decreases the cache hit rate. To turn this into an advantage, proper routing of requests to the cache tier needs to be put in place.

The benefit of a well-configured cache is that the better part of requests will never hit the origin server. In plain words: it has less to do. The resources freed up by this drop in requests can go to more important jobs, like generating dynamic HTML pages.
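Why routing matters when scaling a cache tier can be shown with a simple hash-based router (a sketch with hypothetical node names; production tiers typically use consistent hashing so that adding a node doesn't reshuffle every key):

```python
import hashlib

NODES = ["cache-a", "cache-b", "cache-c"]  # hypothetical cache tier

def route(url, nodes=NODES):
    """Send every request for the same URL to the same cache node.

    Hashing the URL keeps each file on exactly one node, so adding nodes
    grows total capacity instead of duplicating entries across nodes and
    diluting the hit rate.
    """
    digest = hashlib.sha256(url.encode()).digest()
    return nodes[int.from_bytes(digest[:4], "big") % len(nodes)]

# The same URL always lands on the same node, so its second request is a hit.
print(route("/img/logo.png") == route("/img/logo.png"))  # True
print(route("/img/logo.png") in NODES)                   # True
```

With naive modulo hashing, changing the node count remaps most keys at once; consistent hashing limits that disruption, which is why real tiers prefer it.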

We could go on with performance, as the topic is so important, but let’s not forget about the security of web applications. This is the topic of the next chapter.

Part 4: Security

So, now we switch from performance to the security of the web application.

Client side security

wao.io cannot fix all the security issues that originate on the server side, but it can catch many common attack patterns before they reach your application.

Problems on the server side (API side) start with the known exploits of the web servers themselves: attackers target known vulnerabilities to compromise the underlying OS or to inject malicious code.

Transmission channel security

It's been a long time since TLS replaced SSL, and TLS keeps evolving to meet new demands for channel security. Recently, older versions of TLS have been deprecated and support for them removed. Supporting the latest stable TLS specification is therefore very important in order to avoid man-in-the-middle and other attacks.
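As a small example of what "dropping deprecated TLS versions" looks like in practice, Python's standard ssl module lets a server refuse anything older than TLS 1.2 (a minimal sketch of one setting, not a complete hardening guide):

```python
import ssl

# Build a server-side TLS context and refuse the deprecated protocol
# versions (TLS 1.0/1.1), so only TLS 1.2+ handshakes are accepted.
context = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
context.minimum_version = ssl.TLSVersion.TLSv1_2

print(context.minimum_version >= ssl.TLSVersion.TLSv1_1)  # older versions rejected
```

Equivalent knobs exist in every web server and proxy (nginx's `ssl_protocols`, for example); the point is that the floor has to be raised deliberately and kept current.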

Security is a cat-and-mouse game between hackers and developers. It's a process that requires constant effort to maintain and improve the security of your solution.

Part 5: Why WAO.IO?

In this chapter we will address the following question: why wao.io, and not a CDN, which is a name more familiar to you?

Why not the better-known CDN?

First of all, all our clients are closer to us in terms of support and care, and we operate locally to provide the best customer experience. This includes how easy it is to measure the benefits you can get, and the fact that you can start using it right away.

Second, we have unique features and approaches to the known problems of web apps and their solutions. We fully understand how our solution works, and we can propose and prepare the best setup for your particular case.

And last but not least: it's much cheaper, and you don't have to sacrifice network parameters or stability.

From the privacy perspective, you know where your data is, and there's no anonymous behemoth behind it.

Final words

Developers often spend a lot of time dealing with non-functional requirements. And once it's done... well, it's not done, as performance and security are processes requiring constant effort throughout the lifetime of your digital solution.

Avenga can help you if you want to do it yourself, and with our broad range of services and expertise, we can also do it as your partner.

But take into consideration that many of the mentioned aspects are just a few clicks away when using wao.io.

Faster web experiences mean a much better user experience and more returning customers. And your consumers will appreciate that.
