Web developer, open source enthusiast, amateur photographer & Linux user. Blogs @ http://t.co/Xgj1dYR9wb (en) & http://t.co/m0VDEYHnHq (fi)
46 stories · 1 follower

Single Page Application Is Not a Silver Bullet


Single-page applications are everywhere. Even blogs, simple HTML pages (originally something like https://danluu.com/), have turned into big fat monsters – for example, jlongster’s blog has 1206 stars at the moment of writing, and not because of the content (people usually subscribe to blogs rather than star the source): the only reason is that he once implemented it using React & Redux. What is the problem, though? It is his blog, he can build it however he wants – no questions there. The problem is that it is considered normal to make blogs – sites meant for reading – this bloated; of course, some people complain, but the general response is overwhelmingly positive. But who cares about blogs – the real problem is that nowadays the question is quite often not “to SPA or not to SPA”, but rather “which client-side framework should we use for our SPA”.

I am a frontend developer who has spent his whole career building fairly complicated single-page applications (such as Lebara Play, or app.contentful.com – though you need an account to see the actual app), and for a long time the question “should this be a single-page application?” did not exist for me: of course it should! But some time ago, at my previous job, my manager asked me to investigate our application size – about two years earlier we had migrated from WordPress to a React/Redux/Immutable.js stack (a pretty common choice, I guess), and it turned out that the average load time had doubled over that period! Alright, maybe it was just our problem and we were not that good – so I looked into it for a couple of weeks, and after that research I rethought a lot about front-end development.

State of the Art

Nowadays a lot of startups, cool companies and personal websites are built as single-page applications – it is so common that nobody is particularly surprised when they don’t see a reload after clicking a link inside the application.

Let’s summarize the experience, starting with the pros:

  • no page reloads when clicking a link
  • with caching in place, subsequent pages open faster, because a lot of data (user info, movie details, etc.) has already been downloaded
  • granular actions are super fast – adding to favourites, subscribing to news, and so on; none of this is rocket science, and you can sprinkle jQuery here and there instead, but that is much harder to maintain
  • the frontend is decoupled from the backend, which makes development of complicated features much easier (especially when screens need to interact with each other)
  • it is possible to create an almost “native” experience (though I suspect nobody has ever really felt it), with fallbacks when there is no internet connection

Cons:

  • a broken “back” button (sometimes it works properly, but in general people don’t trust it)
  • broken “open in a new tab” behaviour – people like to handle links in an onClick handler, and the browser can’t recognize that as a link (even if the majority of the links are valid, sooner or later you’ll encounter a non-link “link”)
  • a sometimes broken “refresh” button – after refreshing you end up in a different UI (usually only slightly different, but still different)
  • increased TTI (time to interactive), more on this later
  • bad performance on low-end devices (and I suspect higher battery consumption, but I can’t really prove this one)

Why is it slow?

Back to the beginning, where I said we found that WordPress was actually faster than our new shiny stack. Our application was not the most sophisticated, but it had some interactive parts – it was a financial website where you could apply for a loan, go to your personal account, restructure your loan, and change some personal details. Because it was financial and fully online, we had to collect a lot of data, so there were several forms with live, on-the-fly validation and some intricate flows, which made it a perfect case for a rich frontend. There was one problem, however – it was slow; as I mentioned, load time had doubled. So I looked into why it was so big, and it turned out that the size was basically dictated by our React ecosystem – all the libraries we needed made up one big chunk of JS (around 350KB gzipped). Another big chunk was our actual application code – another 350KB – and in total we ended up with ~2.6MB of non-gzipped data, which is, of course, insane. It means that no matter how optimized our code is, in order to start the application a browser has to do several things:

  1. Download this 650KB file
  2. Ungzip it
  3. Parse it (this step is extremely expensive on mobile devices)
  4. Execute it (so the React runtime is up and our components become interactive)

You can read more on this process in Addy Osmani’s article.

In the end, my finding was that we owed this increase to the big bundle size. As I mentioned, though, the vendor chunk alone was half of that size, which means that even basic functionality (we had a pretty “rich” homepage) already required a big chunk, so code splitting could only partially solve the problem.

I have to say that we were modern enough: to help SEO and mobile clients we had server-side rendering, implemented with Node.js + Express, where we fetched all the data the client needed. And it really helped – the user could see the markup early (though it did not work on old mobile devices) – but the problem is that a lot of time still passes before the JavaScript is downloaded, parsed and executed, React attaches its event listeners, and the page finally becomes interactive.

How slow is it?

Let’s take a look at the actual sizes of a few applications. I am usually logged out, and I have my ad blocker enabled.

Airbnb (individual place page)

A bright engineering team, with many articles about their migration to React. Let’s see how it is going on a typical individual place page:

There are tons of files, and in total they add up to 1.3MB gzipped. Again, 1.3MB to show you a place – pictures and a description. Of course there is a lot of user interaction, but at the end of the day, as a user, I just want to look at the place – and I may well be logged out. Is it user-friendly? I don’t know, but I’d be happy with static content just to explore the requirements, read the description, and so on. I am pretty sure that aggressive code-splitting lets them ship features faster, but the price is the user’s comfort and speed.

Twitter (regular feed of logged in user)

Just 3 initial files, plus 1 loaded later (lazy loading, I guess):

  • init file, 161KB
  • common file, 249KB
  • home page file (page splitting!), 65KB
  • “native bundle”, 44.5KB (not sure what it is, but it was loaded afterwards)

In total 475KB, plus 44.5KB for the lazy-loaded file. Much better than Airbnb, but still a lot of stuff just to show you a feed. There is also a mobile version, which feels much lighter, but its size is actually similar.

Medium (Article about cost of JS)

Medium is a cool platform for blogs. Essentially, it is an advanced home for texts, similar to the one you are reading right now. Let’s open the article I mentioned before, about the cost of JS:

Also 3 files:

  • main bundle, 337KB
  • common bundle, 183KB
  • notes, 22.6KB (maybe that amazing functionality for highlighting commas)

In total 542KB, just to read an article. By the way, you can read it without JS at all, which means JS is not that crucial for the main task.

What can be done?

All the websites I’ve mentioned take 2–3 seconds to load on my latest 15” MBP with a stable internet connection, and become fully usable after another 2 seconds, which is not that bad nowadays. The problem is how normal this has become over the last decade – Slack has a 0.5s delay between clicking a channel and actually switching to it, and we don’t even notice it anymore.

Don’t take this rant as an attack on SPAs themselves – I like them (at the end of the day, I write one at my day job!), and they make it possible to create very powerful user experiences that work across all platforms. However, a lot of the things I have built as SPAs should, in fact, not have been. For example, some time ago I made a portfolio website for my wife, and of course I did it using a cool SPA stack – but in fact it is just a blog with a lot of images and a couple of widgets; so I’m guilty of this trend more than many.

Also, feel free to take a look at the first CERN website. Just compare the navigation speed while you are on that site with the responsiveness of the external, modern site you leave it for.


Firefox Nightly enables support for FIDO U2F Security Keys


This week, Mozilla enabled support for FIDO U2F (Universal 2nd Factor) security keys in the pre-beta release of Firefox, Firefox Nightly. Firefox is the second largest internet browser by user base. In the near future, 80% of the world’s desktop users, including Chrome and Opera users, will benefit from the open authentication standard and YubiKey support out of the box.

When GitHub added support for U2F in 2015, the open source community voted U2F the most wanted feature in Firefox. We are delighted to now see it happening. Yubico has helped with U2F integration for Firefox and for other platforms and browsers that have added, or are in the process of adding, support, as it is critical for taking the YubiKey and U2F unphishable authentication to the global masses.

In today’s world, software installation brings with it not only added complexity for the user, but also the potential risk of malware. Chrome has already enabled millions of websites and services to deploy FIDO U2F seamlessly, mainly through Google and Facebook social login, to help mitigate that. Now with native support for FIDO U2F security keys in Firefox, millions more will benefit from strong, hardware-based two-factor authentication without the need to download or install client software.

Thanks Mozilla for working on increasing security and usability for internet users!

The post Firefox Nightly enables support for FIDO U2F Security Keys appeared first on Yubico.


RESTful DOOM


TL;DR I embedded a RESTful API into the classic 1993 game DOOM, allowing the game to be queried and controlled using HTTP and JSON.

“We fully expect to be the number one cause of decreased productivity in businesses around the world.”

   - ID Software press release (1993).


1993

1993 was an exciting year - Sleepless in Seattle opened in theatres, Microsoft shipped Windows NT 3.1, and Whitney Houston’s ‘I Will Always Love You’ was the best selling song for 2 straight months. Oh, and a game called Doom was released!

Doom was created by a small team at ID Software. Wikipedia describes it as one of the most significant and influential titles in video game history, and growing up I loved playing it. As an adult I couldn’t put down a book called Masters of DOOM, which describes the back story of ID Software.

ID Software has a super cool practice of releasing source code for their games. For the kind of hackers who lurk on /r/gamedev, an ID Software engine is an amazing resource to learn from. And lo, in 1997, the Doom engine source code was released, causing much happiness!

2017

I was having trouble finding a fun API to use in a talk I had to give. I had spent the normal amount of time procrastinating and stressing about the talk, and wasn’t making any progress on building a compelling demo.

Late one night, out of the blue, I had the idea to create an API for Doom, now 24 years old(!), and obviously never designed to have an API. I could have some fun digging around the Doom source code and solve my API problem at the same time!

My random idea became RESTful-DOOM - a version of Doom which really does host a RESTful API! The API allows you to query and manipulate various game objects with standard HTTP requests as the game runs.

There were a few challenges:

  • Build an HTTP+JSON RESTful API server in C.
  • Run the server code inside the Doom engine, without breaking the game loop.
  • Figure out what kinds of things we can manipulate in the game world, and how to interact with them in memory to achieve the desired effect!

I chose chocolate-doom as the base Doom code to build on top of. I like this project because it aims to stick as close to the original experience as possible, while making it easy to compile and run on modern systems.

Hosting an HTTP API server inside Doom

chocolate-doom already uses SDL, so I added an -apiport <port> command line arg and used SDLNet_TCP_Open to open a TCP listen socket on startup. Servicing client connections while the game is running is a bit trickier, because the game must continue to update and render the world many times a second, without delay. We must not make any blocking network calls.

The first change I made was to edit D_ProcessEvents (the Doom main loop), to add a call to our new API servicing method API_RunIO. This calls SDLNet_TCP_Accept which accepts a new client, or immediately returns NULL if there are no clients.
If we have a new client, we add its socket to a SocketSet by calling SDLNet_TCP_AddSocket. Being part of a SocketSet allows us to use the non-blocking SDLNet_CheckSockets every tic to determine if there is data available.
If we do have data, API_ParseRequest attempts to parse the data as an HTTP request, using basic C string functions. I used cJSON and yuarel libraries to parse JSON and URI strings respectively.
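A minimal sketch of what that servicing routine can look like – the SDL_net calls are the real ones, but the variable names and the exact signatures of the API_* helpers are simplified rather than copied from restful-doom:

static TCPsocket        api_listen_socket;  // opened at startup with SDLNet_TCP_Open
static SDLNet_SocketSet api_socket_set;     // created with SDLNet_AllocSocketSet(1)
static TCPsocket        api_client;         // currently connected client, if any

// Called once per tic from D_ProcessEvents; must never block.
void API_RunIO(void)
{
    // Accept a pending connection if there is one (returns NULL otherwise).
    TCPsocket incoming = SDLNet_TCP_Accept(api_listen_socket);
    if (incoming != NULL)
    {
        api_client = incoming;
        SDLNet_TCP_AddSocket(api_socket_set, api_client);
    }

    // Poll with a zero timeout so the game loop is never delayed.
    if (api_client != NULL
        && SDLNet_CheckSockets(api_socket_set, 0) > 0
        && SDLNet_SocketReady(api_client))
    {
        char buffer[4096];
        int received = SDLNet_TCP_Recv(api_client, buffer, sizeof(buffer) - 1);
        if (received > 0)
        {
            buffer[received] = '\0';
            API_ParseRequest(buffer);   // then API_RouteRequest / API_SendResponse
        }
    }
}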

Routing an HTTP request involves looking at the method and path, then calling the right implementation for the requested action. Below is a snippet from the API_RouteRequest method:

if (strcmp(path, "api/player") == 0)
{
    if (strcmp(method, "PATCH") == 0) 
    {
        return API_PatchPlayer(json_body);
    }
    else if (strcmp(method, "GET") == 0)
    {
        return API_GetPlayer();
    }
    else if (strcmp(method, "DELETE") == 0) {
        return API_DeletePlayer();
    }
    return API_CreateErrorResponse(405, "Method not allowed");
}

Each action implementation (for example API_PatchPlayer) returns an api_response_t containing a status code and JSON response body.
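The post doesn’t show the definition of api_response_t, but from the description it is essentially a status code plus a cJSON tree; something along these lines would do (the field names are my guess, not the actual struct):

typedef struct
{
    int    status_code;   // HTTP status, e.g. 200 or 405
    cJSON *json;          // response body, turned into text by API_SendResponse
} api_response_t;

api_response_t API_CreateErrorResponse(int status, char *message)
{
    api_response_t resp;
    resp.status_code = status;
    resp.json = cJSON_CreateObject();
    cJSON_AddStringToObject(resp.json, "error", message);
    return resp;
}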

Putting it all together, this is what the call graph looks like when handling a request for PATCH /api/player:

D_ProcessEvents();
  API_RunIO();
    SDLNet_CheckSockets();
    SDLNet_TCP_Recv();
    API_ParseRequest();
    API_RouteRequest();
      API_PatchPlayer();
    API_SendResponse();

Interfacing with Doom entities

Building an API into a game not designed for it is actually quite easy when the game is written in straight C. There are no private fields or class hierarchies to deal with. And the extern keyword makes it easy to reference global Doom variables in our API handling code, even if it feels a bit dirty ;)

The cJSON library is used to generate the JSON-formatted response data for API calls.

We want the API to provide access to the current map, map objects (scenery, powerups, monsters), doors, and the player. To do these things, we must understand how the Doom engine handles them.

The current episode and map are stored as global int variables. By updating these values, then calling the existing G_DeferedInitNew, we can trigger Doom to switch smoothly to any map and episode we like.
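In the Doom source those globals are gameepisode and gamemap (with the current difficulty in gameskill), and G_DeferedInitNew takes the skill, episode and map to load. A hypothetical handler could look roughly like this – the route and JSON field names are illustrative, not necessarily the ones restful-doom uses:

extern skill_t gameskill;    // current difficulty level
extern int     gameepisode;  // current episode
extern int     gamemap;      // current map

// e.g. PATCH /api/world with a body like {"episode": 2, "map": 3}
api_response_t API_PatchWorld(cJSON *req)
{
    cJSON *episode = cJSON_GetObjectItem(req, "episode");
    cJSON *map     = cJSON_GetObjectItem(req, "map");

    int new_episode = episode ? episode->valueint : gameepisode;
    int new_map     = map     ? map->valueint     : gamemap;

    // Doom finishes the current tic, then loads the requested level cleanly.
    G_DeferedInitNew(gameskill, new_episode, new_map);

    api_response_t resp;
    resp.status_code = 200;
    resp.json = cJSON_CreateObject();
    cJSON_AddNumberToObject(resp.json, "episode", new_episode);
    cJSON_AddNumberToObject(resp.json, "map", new_map);
    return resp;
}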

Map objects (mobj_t) implement both scenery items and monsters. I added an id field which gets initialized to a unique value for each new object. This is the id used in the API for routes like /api/world/objects/:id.

To create a new map object, we call the existing P_SpawnMobj with a position and type. This returns us an mobj_t* that we can update with other properties from the API request.
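P_SpawnMobj expects Doom’s 16.16 fixed-point coordinates and a mobjtype_t, so the plain integers from the JSON body need shifting by FRACBITS. A sketch of such a handler, with error handling omitted for brevity (the JSON field names and the next_object_id counter are assumptions):

extern int next_object_id;   // hypothetical global counter for API ids

// e.g. POST /api/world/objects with a body like {"x": ..., "y": ..., "type": ...}
api_response_t API_PostWorldObject(cJSON *req)
{
    int x    = cJSON_GetObjectItem(req, "x")->valueint;
    int y    = cJSON_GetObjectItem(req, "y")->valueint;
    int type = cJSON_GetObjectItem(req, "type")->valueint;

    // Map units are 16.16 fixed point; ONFLOORZ drops the object onto the floor.
    mobj_t *mo = P_SpawnMobj(x << FRACBITS, y << FRACBITS, ONFLOORZ, (mobjtype_t) type);
    mo->id = next_object_id++;   // the id field added to mobj_t for the API

    api_response_t resp;
    resp.status_code = 201;
    resp.json = cJSON_CreateObject();
    cJSON_AddNumberToObject(resp.json, "id", mo->id);
    return resp;
}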

The local player (player_t) is stored in the first index of a global array of players. By updating fields of the player, we can control things like health and weapon used. Behind the scenes, a player is also an mobj_t.

A door in Doom is a line_t with a special door flag. To find all doors, we iterate through all line_t in the map, returning all lines which are marked as a door. To open or close the door, we call the existing EV_VerticalDoor to toggle the door state.
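The map’s lines live in the global lines array, numlines entries long, so the lookup is a plain loop. In this sketch API_LineIsDoor stands in for whatever check the real code performs on line->special, the id is simply the door’s ordinal position, and the local player’s mobj is passed as the activating thing – all of that is my simplification:

extern line_t   *lines;                    // all lines of the current map
extern int       numlines;
extern player_t  players[MAXPLAYERS];

boolean API_LineIsDoor(line_t *line);      // stands in for the real door check

// Open or close the door identified by door_id.
boolean API_ToggleDoor(int door_id)
{
    int i, seen = 0;

    for (i = 0; i < numlines; i++)
    {
        if (!API_LineIsDoor(&lines[i]))
            continue;

        if (seen++ == door_id)
        {
            // The same routine the engine runs when the player "uses" a door.
            EV_VerticalDoor(&lines[i], players[0].mo);
            return true;
        }
    }
    return false;
}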

API Specification

An API spec describes the HTTP methods, routes, and data types that the API supports. For example, it will tell you the type of data to send in a POST call to /api/world/objects, and the type of data you should expect in response.
I wrote the API spec in RAML 1.0. It is also hosted in a public API Portal for easier reading.

Putting it all together

So now we have an HTTP+JSON server inside Doom, interfacing with Doom objects in memory, and have written a public API specification for it. Phew!
We can now query and manipulate this 24-year-old game from any REST API client – here’s a video proving exactly that! Enjoy ;)

restful-doom on GitHub



USB Cables

6 Comments and 21 Shares
Tag yourself, I'm "frayed."
5 public comments
expatpaul
152 days ago
Painfully true
Belgium
mooglemoogle
152 days ago
I’m “Carries data but not power”
Virginia
CaffieneKitty
151 days ago
I'm "Heavy and not very flexible" :-P
deezil
152 days ago
I need USB-C cables to become cheaper, but basically, if it's not "the good one", it gets thrown in the garbage. Monoprice has them for too cheap to worry about them.
Louisville, Kentucky
alt_text_bot
152 days ago
Tag yourself, I'm "frayed."
endlessmike
147 days ago
Heavy and not very flexible
Covarr
152 days ago
And then there's that weird proprietary cable I've had since like 2004 that looks at a glance like micro USB but isn't, so I get halfway across the country for my vacation with no way to charge anything at all and have to buy spares at the airport for exorbitant prices.
Moses Lake, WA
skittone
152 days ago
Throw it away.
bodly
151 days ago
Or label it.
JimB
148 days ago
My mate threw his away, then wondered why he could no longer connect his Panasonic camera to the computer...

Ringer Volume/Media Volume

9 Comments and 17 Shares
Our new video ad campaign has our product's name shouted in the first 500 milliseconds, so we can reach the people in adjacent rooms while the viewer is still turning down the volume.
8 public comments
CaffieneKitty
170 days ago
I have the opposite. I turn my ringer to max and all my morning alarms get turned down to whisper. :-P
rtreborb
170 days ago
The frustration is real
llucax
171 days ago
For UX people out there...
Berlin
ChrisDL
171 days ago
this is me starting twitch while a human being sleeps next to me, trying not to wake her.
New York
mooglemoogle
171 days ago
...Moviefone! If you know the name of the movie you'd like to see....
Virginia
francisga
171 days ago
Yes, happens to me all the time.
Lafayette, LA, USA
alt_text_bot
171 days ago
Our new video ad campaign has our product's name shouted in the first 500 milliseconds, so we can reach the people in adjacent rooms while the viewer is still turning down the volume.
darastar
171 days ago
IT ME!

How to categorize objects


How do you categorize software errors?

There are several possible axes we might think of:

  • Severity: e.g. notice, warning, error, fatal.
  • Module: what library or group of classes did the error come from?
  • Layer: database, framework, controller, model, view.

In Exceptional Ruby, I suggested a different approach for categorizing errors. Rather than thinking of different taxonomies that errors might fall into, think about how various types of errors are dealt with. For instance:

  • Inform the user that they tried to use the system in a way that is either not supported or not permitted.
  • Note that the system is in a state that was never planned for, inform the user of a fatal error, and log a problem report back to the developer.
  • Detect a predictable outage, and either retry automatically, or ask the user to manually retry later.

Then, once we have an idea of how different types of errors are handled and/or reported, we can work backwards from these distinctions to come up with a set of categories, which we can then encode as base exception classes (a rough code sketch follows the list):

  • UserError
  • LogicError
  • TransientFailure
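The post talks in terms of Ruby exception classes; just to make the “categorize by how the error is handled” idea concrete in the same language as the other code on this page, here is a rough C equivalent using an error-category enum instead of base classes. All helper functions are hypothetical, purely for illustration:

/* The category records how the error is handled, not where it came from. */
typedef enum
{
    ERR_USER,       /* unsupported or forbidden action: explain it to the user     */
    ERR_LOGIC,      /* "can't happen" state: fatal message plus a developer report */
    ERR_TRANSIENT   /* predictable outage: retry, or ask the user to retry later   */
} error_category_t;

void show_user_message(const char *message);   /* hypothetical helpers */
void log_problem_report(const char *message);
void schedule_retry(const char *message);

void handle_error(error_category_t category, const char *message)
{
    switch (category)
    {
        case ERR_USER:
            show_user_message(message);
            break;
        case ERR_LOGIC:
            log_problem_report(message);
            show_user_message("A fatal error occurred.");
            break;
        case ERR_TRANSIENT:
            schedule_retry(message);
            break;
    }
}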

Consider a different domain: tasks in a TODO list. Again, there are a lot of ways that these could be categorized: by urgency, by sphere (work, family, personal), by importance.

The GTD system takes a novel tack: it asks, “what properties are we most likely going to want to filter by?” The answers it comes up with are:

  • What tasks can I do where I am right now? (Office, kitchen, out running errands)
  • What tasks do I have time for right now?

Working backwards from these questions, it arrives at the idea of categorizing tasks by “context” and by time needed.

These two examples suggest a general pragmatic rule for categorizing objects: don’t worry about listing “natural” taxonomies. Instead, consider how you will most likely need to filter or sort the items.

In some cases, we might not yet know how we might want to filter or sort the objects. In that case, the rule suggests that we hold off on categorizing them at all.
