
Fear, trust and PureScript: Building on trust with types and functional programming


In my previous post I argued that JavaScript is by default a low-trust environment for programming, that ideas that build trust like optional typing, functional transformations, and immutability have severe trade-offs in JavaScript, and that choosing one of these ideas often means rejecting another. Despite that, JavaScript is a better environment than most popular languages for building trust, and the broader argument applies to most popular languages, typed or not.

In PureScript you get types, immutability, and functional programming “for free” – the trade-offs aren’t as steep. This is largely true of a few other languages as well, but let’s see what our discussion of fear and trust from the previous post looks like in PureScript, from the same two perspectives: understanding the shape of data and changing data.

Fear and the shape of data

PureScript starts with a high level of trust in the data by default. What is the shape of our data? Whatever the type says it is.

Here’s what the JavaScript example from the previous post might look like in PureScript. Like the JavaScript version, we load user data from the network and later render it. Our loadUser function takes in a user ID and returns an asynchronous effect – think of Aff like a Promise – containing either the validated user or errors. We then take this response in loadAndRenderUser and either render the user’s name, if we received a user, or log the errors if not.
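
A minimal sketch of what that code might look like, assuming purescript-simple-json for readJSON, an Affjax-style get that returns the body as a String, and a hypothetical render function (the User type is defined below):

loadUser :: Int -> Aff _ (Either MultipleErrors User)
loadUser userId = do
  res <- get ("/api/users/" <> show userId)
  pure (readJSON res.response)

loadAndRenderUser :: Int -> Aff _ Unit
loadAndRenderUser userId = do
  result <- loadUser userId
  case result of
    Right user -> render user.name
    Left errors -> log (show errors)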

The differences with JavaScript can be subtle. Everything is required to be typed within the PureScript code, so whenever you see the User type, you know it has exactly what the type says: an ID, optionally an email address, and a name. Everything is also immutable unless the type tells you otherwise, so you know this data won’t change underneath you. There are no null values. There are no any types. The value is what it says on the label. The compiler will tell you if you’re using it wrong. You can simply trust the data has the shape you expect, and will stay that way, and stop worrying about it.

What about data you get off the wire? Validation in this example happens with the call to readJSON. This takes in something with an unknown shape – a String – and attempts to turn it into a known shape – the User. It returns either the parsed user data or errors that occurred in validating the data. This is the default way of handling data that comes from outside, and this means that after this point you can trust that the data has the shape of a User. You’re not trusting the external service to provide correct data, because you validate the data as it comes in. You’re also not trusting developers to write correct types – if they write incorrect types, the validation will fail, and you will notice that.

Here’s the boilerplate for validating the data:
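
With purescript-simple-json the boilerplate can be as small as this sketch – a record type alias gets validation for free:

type User =
  { id :: Int
  , email :: Maybe String
  , name :: String
  }

readUser :: String -> Either MultipleErrors User
readUser = readJSON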

In most cases you don’t actually need to write any type validation logic. The compiler and library write it for you. If you have special needs, you can also write out this logic in a straightforward way manually.

The point is, as a developer you can simply trust that the data is shaped how the type says it is. What does this data look like? Does this field exist? Will changing this break the code? Read the type. Change it and let the compiler or a validation error tell you if it breaks something.

Fear and changing data

What about when the data changes? By default in PureScript all data is immutable, so instead of writing code like the mutable example from JavaScript, you might write this:
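
A sketch, with hypothetical SourceDoc and Document record types:

type SourceDoc = { title :: String, body :: String }
type Document  = { title :: String, body :: String }

formatDocument :: SourceDoc -> String -> Document
formatDocument doc defaultTitle =
  { title: if doc.title == "" then defaultTitle else doc.title
  , body: doc.body
  }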

This looks a lot like the JavaScript example in the previous post using functional conventions. You take in data. You return new data. But the similarity is deceptive. This tells you much more than the JavaScript code because of the way PureScript restricts the language.

This looks just like the JavaScript code with functional conventions, but:

  • It can’t mutate the data.
  • It can’t modify your file system.
  • It can’t launch rockets.

Why not? Because none of those things are in the type. The type of the function is:
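
formatDocument :: SourceDoc -> String -> Document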

This says you take in a SourceDoc and a String and return a Document. It can’t return anything else. It can’t return null. It can’t do other side effects like changing the data or launching rockets. This is a stupidly simple way of thinking about functions and types. A function takes in something and returns something and does nothing else.

Suppose you actually do want to mutate the data. You could write it like this:
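
A sketch using a mutable reference (Ref and modifyRef come from Control.Monad.Eff.Ref, in the pre-0.12 Eff style this post uses):

formatDocument :: Ref SourceDoc -> String -> Eff _ Unit
formatDocument docRef defaultTitle =
  modifyRef docRef \doc ->
    doc { title = if doc.title == "" then defaultTitle else doc.title }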

The type of our formatDocument function changed:
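
formatDocument :: Ref SourceDoc -> String -> Eff _ Unit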

Now the function takes in a mutable reference to a document, and a string, and returns something – an effect. The effect has a type Eff _ Unit, which means that when this effect is run, it will do some side effects and return an empty value (Unit).

This is similar to the JavaScript mutation example, but with key differences. We can change the data we receive (the SourceDoc), but only that data – we still can’t change any other data in our program. We also can only change the data in ways that respect the developer’s trust. For instance, we can’t give the data a new field that is not in the type, because that would mean the type is wrong.

Suppose you actually want to launch a rocket from this function. You might write it like this:
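
A sketch, where launchTheRockets is a hypothetical effectful function:

formatDocument :: SourceDoc -> String -> Eff _ Document
formatDocument doc defaultTitle = do
  launchTheRockets
  pure
    { title: if doc.title == "" then defaultTitle else doc.title
    , body: doc.body
    }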

Now our function type is:
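
formatDocument :: SourceDoc -> String -> Eff _ Document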

Because launching a rocket is a side effect, the function needs to return an effect. The launching of rockets is explicit in the type – you can’t launch rockets without writing functions that return effects, and callers have to recognize that a function returns an effect and handle it accordingly. The type still doesn’t tell you what kind of effect will be performed when the effect is run – it could be doing many kinds of side effects. But it tells you clearly that the function does more than just return a document. And the effect hasn’t actually happened yet: you might decide later that you don’t want the effect, and never execute it.

You could also cheat and write your document formatting mutably in JavaScript:
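
A sketch of that escape hatch – the JavaScript side mutates the document, while the PureScript type claims a pure function:

-- FormatDocument.purs
foreign import formatDocument :: SourceDoc -> String -> Document

// FormatDocument.js
exports.formatDocument = function (doc) {
  return function (defaultTitle) {
    doc.title = doc.title || defaultTitle; // mutation, invisible to the type
    return doc;
  };
};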


Then you’re closer to the JavaScript default, where you rely heavily on developers to maintain the shapes of your data. You’re not completely abandoning type safety – the PureScript compiler will still prevent you from changing the document within your PureScript code or using it in ways that are not supported by the type. But you’re weakening your level of trust in the code.

You probably don’t need to do this. You probably don’t want to. But you could. The code starts with the assumption of types you can trust, pure functions, and immutability, and you can selectively weaken those assumptions as needed. Or you can make them stronger, by making your types more restrictive. You have the tools to do both.

In JavaScript, by comparison, the base assumption is that you can trust very little. You build and rebuild trust into your system every day with tools like good conventions, optional typing, functional programming, and immutable data. But the bar starts low, and in JavaScript those ideas can only take you so far.

Types supporting functional programming

There’s no need to choose between types, immutability, and functional transformations. In PureScript the ideas all support each other and are pervasive in the code and ecosystem. Here are two examples.

Types and plugging functions together

Many JavaScript developers adopt tools like Ramda to do data transformations in a more functional way, with immutable defaults and useful ideas like currying and function composition. To parse and transform a set of documents received over the wire you might write something like this:
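
For example, this sketch, where parseDoc and formatTitle stand in for hypothetical transformations:

const processDocuments = R.pipe(
  R.prop('documents'),
  R.map(parseDoc),
  R.map(formatTitle),
  R.sortBy(R.prop('title'))
);

const documents = processDocuments(response);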

Or the same thing using the proposed `|>` pipeline operator:
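
const documents = response
  |> R.prop('documents')
  |> R.map(parseDoc)
  |> R.map(formatTitle)
  |> R.sortBy(R.prop('title'));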

But as these pipelines get larger and the number and complexity of transformations grows, it becomes harder and harder to trust that the transformations are working correctly. In theory optional typing could help with this: you could just write the above and have TypeScript or Flow make sure the piping lines up. In simple cases the type inference for this will probably work fine, and you can do that. Other times it seems to work, but it has quietly wiped out your types in the middle of the pipe. Or the type inference doesn’t work at all, and you end up writing something like this, with the code logic drowned out in type annotations:
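
// a sketch; RawDoc and ParsedDoc are hypothetical types
const rawDocs: RawDoc[] = R.prop('documents', response);
const parsed: ParsedDoc[] = R.map((d: RawDoc): ParsedDoc => parseDoc(d), rawDocs);
const formatted: ParsedDoc[] = R.map((d: ParsedDoc): ParsedDoc => formatTitle(d), parsed);
const documents: ParsedDoc[] = R.sortBy((d: ParsedDoc): string => d.title, formatted);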

This is still a relatively simple example, but here – and especially in larger examples – you might also want to clean things up in a functional style. You might rewrite it like this:
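
// the same pipeline in a point-free style (a sketch)
const parseAndFormat = R.pipe(parseDoc, formatTitle);

const processDocuments = R.pipe(
  R.prop('documents'),
  R.map(parseAndFormat),
  R.sortBy(R.prop('title'))
);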

Even this type checks in TypeScript or Flow in simple cases. But throw in generics and, in TypeScript, even the simplest cases fail today. Flow seems to fare better, but in my experience you still run into lots of little edge cases. The TypeScript and Flow teams keep improving their handling of these cases, but at some point you run into more fundamental issues around how much type inference is possible while maintaining JavaScript compatibility and avoiding a brittle or overly complex type system. Ultimately, when you write a lot of code like this, using function composition or lenses or other functional constructs, you have given up on the optional type system. You either don’t use types at all there, or you add verbose type coercions everywhere to make it work.

In PureScript, you just write this:
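
-- a sketch; parseDoc and formatTitle as in the JavaScript version
processDocuments response =
  response.documents
    # map parseDoc
    # map formatTitle
    # sortBy (comparing _.title)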

Or you rewrite it like the second JavaScript example:
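
-- point-free, with function composition (a sketch)
processDocuments =
  _.documents >>> map (parseDoc >>> formatTitle) >>> sortBy (comparing _.title)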

In both cases, the PureScript code looks a lot like the untyped JavaScript code, but everything is strongly typed. You can add type annotations if you like, but you don’t have to. The compiler tells you if your functions don’t line up. Most Ramda functions, even those that are difficult or impossible to type out in TypeScript or Flow, are either built in to PureScript or are trivial to write using more general tools.

More broadly, in PureScript the types help you make the piping line up for any types of transformations – data transformations, but also asynchronous network calls, or server middleware, or error handling, or config validations. When you change your code, the type system tells you whether what you wrote even makes sense with everything else that you wrote. Compared to TypeScript or Flow, PureScript is both more expressive and has better type inference. Together these mean you can write code like in a dynamic language, but keep the types. You can also use the types more easily in ways that are difficult or impossible in TypeScript or Flow, like when using function composition or lenses.

Types and immutability

PureScript uses standard JavaScript data structures under the hood, but with immutability enforced through types and the ecosystem. Libraries don’t mutate data, or if they do, it shows up in the type. There is no conflict between types and immutability – on the contrary, the types are necessary to guarantee immutability.

In JavaScript you might get immutability by adopting immutable persistent data structures, where structural sharing is used to reduce copying of data. In PureScript persistent data structures are just a performance optimization. You already have immutable data structures, which map to normal data structures in JavaScript. If you find that structural sharing would help your performance issues, you can adopt persistent data structures. Or you can write that code in JavaScript in a fast mutable style, while exposing an immutable interface to the rest of your code.

Trust and the ecosystem

Ultimately it is possible in PureScript to write the same kind of code as in JavaScript. You could even write it in JavaScript. The higher base level of trust in PureScript comes not just from the language and compiler, but also in part from strong defaults and conventions in the ecosystem.

Types are required and pervasive, and the defaults nudge you toward validating at the edges of your system. Unlike with TypeScript and Flow libraries, types live with the code that uses them, and when the types are checked, that code is checked, too. Of course, at some point many libraries wrap JavaScript libraries, and you are trusting the library developer to handle that accurately. But there are strong conventions and defaults around writing sound types, and the compiler and ecosystem help to support that. You can trust the types within your system, or if you can’t, the issue is likely in your JavaScript code or at the boundary between PureScript and JavaScript.

There’s a similar dynamic with immutability. It’s not that you couldn’t write mutable code in PureScript, or write it in JavaScript and call it from PureScript. But it’s usually much easier to do it immutably. Writing mutable code has worse ergonomics, and in some cases you would be fighting the compiler and the ecosystem. There’s a strong default of manipulating data in immutable ways, backed by the compiler. This means you can trust the data won’t change underneath you, or if it does, you see it in the types or look to the mutable JavaScript code.

Learning to code without fear

PureScript has many interesting and practical ideas, from pattern matching and ADTs to the utility of type classes to property-based testing and type-level programming. But from a JavaScript perspective the biggest gains come from the simple ideas. Developers have worked hard to bring ideas like types, immutability, and functional transformations to JavaScript. These end up being a patchwork of useful tools that kind of work, if you apply them deliberately, avoid the foot-guns, and don’t use them too much together.

In PureScript, there’s no need to choose between these ideas. The ideas all support each other and are pervasive in the code and ecosystem.

What do pervasive strong types, immutability, and functional programming give you? A high base level of trust in the code that you write. A feeling of relative security. The confidence to refactor code freely as needed.

Programming without the fear.

The post Fear, trust and PureScript: Building on trust with types and functional programming appeared first on Reaktor.


Fear, trust and JavaScript: When types and functional programming fail


As developers, we want to reduce fear of our code failing and increase trust in our code working well. Many developers working with JavaScript borrow useful ideas from functional programming and strongly-typed programming languages to reduce fear by transferring trust from developers to tools and the code. Ideas like optional types, functional transformations, and immutability can all help to write better JavaScript. When pulling these ideas together in JavaScript, however, they come with severe trade-offs, work together poorly, and ultimately fail in the goal of effectively transferring trust from developers to code and tools.

To illustrate this idea, let’s look at how data is handled in JavaScript from two perspectives: understanding the shape of data and changing data.

Fear and the shape of data

In a dynamic language like JavaScript, it can be hard to know what the shape of your data is. The default approach is to rely on convention. You trust other developers and other systems to give you correct data in agreed upon formats and to follow certain norms within the code base.

I like to call this the “pretend it’s what you want” approach. In high-trust environments, it can work well enough.

But then the fear creeps in. The code grows in complexity. You work with code from developers who follow different conventions. You receive data that you cannot control from upstream in erratic formats. You start seeing null pointer errors. Trust in the code breaks down, and questions about the data start to provoke anxiety rather than confidence.

  • What values does this data actually contain?
  • Can I delete these values without breaking things?
  • Can I pass in this data to this function?

You can see the fear in the code base. It looks like this:
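
A sketch of the kind of checks that accumulate (render here is a hypothetical helper):

function renderUserCard(user) {
  if (!user) {
    return null;
  }
  var name = user.name ? user.name : 'Unknown';
  var email = (user.contact && user.contact.email) || '';
  return render(name, email);
}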

This is defensive programming. It happens when you can no longer trust your own code to provide the data you expect at the appropriate times. Your beautiful code becomes cluttered with defensive checks, you lose readability, and the code becomes more brittle and harder to change. Fear grows, and it is harder and harder to trust that your code actually works.

Optional types: Pretend really hard

One way to stave off the fear is to introduce optional types via TypeScript or Flow. You receive a user and then proclaim joyously that it is of the User type, and henceforth shall be treated only as a User.
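
In TypeScript, that proclamation might look like this sketch (Flow is similar); note that no runtime check happens here:

interface User {
  id: number;
  email?: string;
  name: string;
}

// body is the raw response string from the network
const user = JSON.parse(body) as User; // trusted, not verified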

This is like pretending really hard. You’ve shifted your trust around. You still trust other systems to give you data in the correct shape. But within your code base, you trust the type that you’ve given to that data and that the compiler will complain if you use that data incorrectly. Instead of trusting developers to know the shape of data and use it appropriately, you’re trusting developers to write and maintain correct types, and you’re trusting the compiler to not lie about those types. More on that later.

Adding types to our example doesn’t solve the underlying problem. It improves trust within the code base by helping to ensure that data is used consistently, but it says nothing about data received from the outside world.

Validation: Trust but validate

In a low trust environment, you may need to introduce data validation at various points.

You could do this by hand, but the validation would be ad hoc, laborious, and error-prone. Or you could write JSON schema definitions and validate with ajv or the like to verify that the data matches your schema. This is less ad hoc and allows other uses like generating documentation, but is likely no less verbose or error-prone because you have to manually write out schemas like this:
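
For instance, a sketch of a schema for the User type:

{
  "type": "object",
  "properties": {
    "id": { "type": "integer" },
    "email": { "type": "string" },
    "name": { "type": "string" }
  },
  "required": ["id", "name"]
}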

Optional types + validation

Or you could introduce both types and validation. Types to stave off fear internally, and validation to be able to trust data from external sources.

To avoid writing essentially the same type definitions for both validation and optional types, you can use the TypeScript or Flow compilers directly as libraries, or use another library like runtypes (TS), runtime-types (Flow), or typescript-json-schema (TS). After jumping through a few hoops you start feeling more trust in your data. But there are deeper issues here, which I will get to later.

Fear and changing data

What about when the data changes? By default in JavaScript data can change willy-nilly. For example, this function receives a document, and then changes the document to format a field properly and to include a new field.
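
A sketch of such a function:

function formatDocument(doc, defaultTitle) {
  doc.title = doc.title || defaultTitle; // format a field in place
  doc.lastModified = Date.now();         // add a new field
  return doc;
}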

But in this style, the flow is hard to follow, and fear starts to creep in. What if our data is used elsewhere? What if it was already changed elsewhere? What values do I have in my data at this point? How can I trust that the data I have at this point is the data I want at this point and will stay that way? This is a trivial example, but the problem becomes much worse with a large code base or a highly concurrent system.

You turn to optional types, but those types won’t save you. In TypeScript and Flow, both of these functions have the same type:
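
// a sketch – as far as the type is concerned, both are (doc: Document) => void
function formatDocument(doc: Document): void {
  doc.title = doc.title.trim();
}

function formatDocumentAndBurnTheCityDown(doc: Document): void {
  doc.title = doc.title.trim();
  burnTheCityDown(); // a hypothetical side effect the type says nothing about
}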

One of these does what you want; the other burns the city down. As far as these type systems are concerned, these functions do nothing.

Convention: Pretend immutability

So you write better JavaScript. You agree with your team, explicitly or implicitly, to write in an immutable style.

You favor const over var and duplicating values over mutation. You use let to indicate value references that change. You rediscover the ternary operator as a functional alternative to if statements, at least for short lines. You use functions to return new values instead of changing values. You use map, filter, reduce, and other functional constructs to create new data structures without changing the underlying data.
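
A sketch of that style:

const formatDocument = (doc, defaultTitle) => ({
  ...doc,
  title: doc.title ? doc.title : defaultTitle,
});

const publishedTitles = docs
  .filter(doc => doc.published)
  .map(doc => doc.title);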

Immutability by convention is convenient, produces idiomatic JavaScript, and works well within the JavaScript ecosystem. But it relies heavily on both trust and discipline from developers. You trust developers to follow conventions like avoiding mutation or indicating clearly where mutation happens. You might want something stronger.

Libraries: Pretend really hard

You can shift the trust partly from other developers to tools by adopting libraries for data transformation and immutable data structures. You might start using a library like Ramda pervasively as a functional utility belt, or adopt lenses à la partial.lenses, monocle-ts, or the like.

One fundamental idea in these types of libraries is that the underlying data is treated as though it were immutable. It’s not – even Ramda only does shallow clones – but if the convention of immutable data is strong enough, then everyone can pretend it is. You may take a slight performance hit from copying data, but you gain some level of trust in the code. This works best if the use of the library and this convention is pervasive.

To enforce actual immutability and avoid the performance hit for changing data, you might also introduce immutable data structures via something like Immutable.js, seamless-immutable or Mori.

This makes the data itself actually immutable, in that only immutable ways to touch the data are exposed. But it only applies to data that is expressed within these data structures. As most of JavaScript relies on classic JavaScript data structures, you end up converting back and forth between the two a lot and you lose that trust whenever you have to use standard data structures.

Both of these approaches have limitations, but most importantly they clash hard with optional types.

Trusting JavaScript

The previous examples pulled out several tools for writing more effective JavaScript: optional types, functional transformations, and immutable data. But in JavaScript these tools come with some severe limitations, and they are hard to use together.

Optional types give a false sense of security

Optional types for JavaScript are optional by design, which means not everything is typed and you can’t trust that everything has a valid type. Flow is unsound and TypeScript is deliberately unsound, which means that in various cases the types are wrong and the compiler doesn’t care.

And optional types in JavaScript lie for other reasons. Some things in JavaScript are just hard or impossible to type out in TypeScript or Flow.

To type these out in TypeScript or Flow, you sacrifice on one or more principles:

  1. Sacrifice type safety, the whole reason you use types: Type them out with any types, which allow any values and essentially disable the type checker for all values in the “path” of any.
  2. Sacrifice usefulness: Make the functions less general in order to provide more specific, accurate types.
  3. Sacrifice other developers’ time: Make the user of the function provide the correct types, as in
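
// a sketch: the caller spells out the type parameters by hand
const names = R.map<User, string>(user => user.name, users);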

Then you add libraries into the mix, with their own type definitions with mixed levels of accuracy. This transfers some trust not to the developers of libraries, but to the developers of type definitions for libraries. Many of these libraries will contain any annotations, and calling those functions will quietly render your trust in types invalid. In Flow, type-checking can also be quietly disabled when a file is missing a @flow annotation.

You can work around this trust issue by adopting type annotations pervasively, disallowing both implicit and explicit any types, setting the linter to complain when files are not type-checked, and otherwise tightening up configurations.

But it’s like plugging holes in a leaky ship. The problem isn’t just that you can’t trust the types in your system, but that you think you can. You rely on the types to tell you when a change breaks something, but because they were quietly disabled by an any type, or by use of a library, or by a soundness issue, it doesn’t. Types in JavaScript are different from types in most other languages people use: They can’t be trusted in the same way.

Ultimately the strength of your types depends on the knowledge and belief of the team in applying them. If the team has a high level of belief and knowledge of types, they can encode a high level of trust into the system. But this is dependent on the team’s attention and discipline to maintain this level of trust, and fear can creep in and destroy that trust in many subtle ways.

Functional programming. Types. JavaScript. Pick two

Optional types and basic functional programming like maps and filters and reduces and so forth work alright together in JavaScript. It’s when you try to go further that you run into problems. Two examples:

Immutable.js is a persistent, immutable data structure library for JavaScript. It provides common data structures for JavaScript that do not rely on modifying the underlying data in-place. It has built-in type definitions for both TypeScript and Flow – go look at them. There are countless any annotations, which disable type-checking for those values. Then there are other types which pass the burden on to the user to provide the correct types. Essentially every time you use the library, you are either opting out of optional types or going to extra lengths to make the types work. This discourages functional programming.

Ramda is a functional utility library for JavaScript. One set of type definitions can be found here, along with this comment (emphasis added):

“Note: many of the functions in Ramda are still hard to properly type in TypeScript, with issues mainly centered around partial application, currying, and composition, especially so in the presence of generics. And yes, those are probably why you’d be using Ramda in the first place, making these issues particularly problematic to type Ramda for TypeScript. A few links to issues at TS can be found below.”

Despite the impressive work of people like Giulio Canti, every time you choose even slightly more advanced functional programming concepts, like immutable data structures, function composition, or currying, you are essentially opting out of the type checker or going to extra lengths to make the types work. This discourages functional programming.

Why we can’t have nice things in JavaScript

Immutability works best when it is pervasive. But the JavaScript language and ecosystem are designed around mutable data, you can’t enforce immutability from a library, and optional types in JavaScript are not expressive enough to handle immutability as a library.

Types work best when they are pervasive. But types in JavaScript are optional by design and limit their expressiveness and utility by taking steep trade-offs to maintain compatibility with JavaScript.

Types, immutability, and functional programming can all support each other, just like they do in many languages. Types can be used to enforce immutability, even when the underlying data structures are mutable or the types don’t exist at runtime. Types can help developers connect the piping correctly when using functional composition or transforming data using lenses. Functional transformations can be easier to understand and maintain when you see the types. Functional transformations can be more efficient when you know the underlying data is immutable.

Learning to code with fear

So how do you learn to code with the fear? You write better JavaScript. You start with the base assumption that you can trust little in your code, and learn countless tricks to write more functional JavaScript and avoid the wartier parts of the language. You introduce type validation where necessary. You use immutable data, but only where you have a specific need or you enforce it by convention only. You adopt optional types where it makes sense, but abandon types where functional data handling or immutable data provide greater benefits. You use functional composition or lenses while knowingly opting out of type checking guarantees.

Or you change the game and just use PureScript. Or ReasonML, or Elm, or even ClojureScript. These exist today. Production software runs on them. They work with the JavaScript ecosystem, where necessary. And they provide a higher base level of trust in the code that you write and an environment where immutability, functional programming, and types (where applicable) work well and work together.

Adopting one of these languages is not going to solve all of your problems. It will introduce its own problems. But it might give you a higher level of basic trust in your code, and better tools to increase or decrease that trust as needed. In my next post, I’ll discuss how these ideas play together in PureScript.

But in JavaScript, the fear is always with you.

The post Fear, trust and JavaScript: When types and functional programming fail appeared first on Reaktor.


Single Page Application Is Not a Silver Bullet


Single-page applications are everywhere. Even blogs – simple HTML pages (in the beginning something like https://danluu.com/) – have turned into big fat monsters: jlongster’s blog has 1206 stars at the moment of writing, and not because of the content (people usually subscribe to blogs rather than star the source) – the only reason is that he once implemented it using React & Redux. What is the problem, though? He wants it, he makes it – no questions here. The problem is that it is considered normal to make blogs, which exist for reading, this bloated – of course, some people complain, but the general response is utterly positive. But who cares about blogs – the real problem is that nowadays the question is often not “to SPA or not to SPA”, but rather “which client-side framework should we use for our SPA”.

I am a frontend developer who has spent his whole career building quite complicated single-page applications (like Lebara Play, or app.contentful.com – you have to have an account to see the actual app, though), and for a long time the question “should this be a single-page application” did not exist for me: of course it should! But some time ago, at my previous job, my manager came to me asking to investigate our application size – about two years earlier we had migrated from WordPress to a React/Redux/Immutable.js stack (a pretty common choice, I guess), and it turned out that the average load time had doubled over that time! Alright, maybe it was just our problem and we are not that good – so I looked into it for a couple of weeks, and after my research I re-thought a lot about front-end development.

State of the Art

Nowadays a lot of startups, cool companies, and personal websites are built as single-page applications – it is pretty common, and nobody is particularly surprised when they don’t see a reload after clicking on a link inside the application.

Let’s summarize the experience, starting with the pros:

  • no page reloads after clicking on the link
  • with caching implemented, subsequent pages open faster, because a lot of the info has already been downloaded (like user info, movie details, etc)
  • granular actions are super fast – like adding to favourites or subscribing to news; of course, this is not rocket science, and you could sprinkle jQuery here and there instead, but that is much more complicated to maintain
  • the frontend is decoupled from the backend, which makes development of complicated features much easier (especially when we need to interact between screens)
  • it is possible to create an almost “native” experience (I don’t think anybody has ever really felt it, though), with fallbacks for when there is no internet

Cons:

  • broken “back” button (sometimes it works properly, but in general people don’t trust it)
  • broken “open in a new tab” behaviour – people like to handle links in onClick handlers, and the browser can’t recognize them as links (even if the majority of the links are valid, sooner or later you’ll encounter a non-link “link”)
  • sometimes broken “refresh” button – after refreshing you end up in a different UI (usually only slightly, but still different)
  • increased TTI (time to interactive), more on this later
  • bad performance on low-end devices (and I suspect higher battery consumption, but I can’t really prove this one)

Why is it slow?

Back to the beginning, where I said we found that WordPress was actually faster than our new shiny stack. Our application was not the most sophisticated, but it had some interactive parts – it was a financial website where you can apply for a loan, go to your personal account, restructure your loan, and change some personal details. Because it is financial and fully online, we had to collect a lot of data, and there were several forms with on-the-fly validation and some intricate flows, which made it a perfect case for frontend. However, one problem – it was slow; as I mentioned, load time had doubled. So I looked into the reasons why the application was so big, and it turned out the size was basically dictated by our React ecosystem – all the libraries we needed created one big chunk of JS (around 350KB gzipped). Another big chunk was our actual application code – another 350KB – and in total we ended up with ~2.6MB of non-gzipped data, which is, of course, insane. It means that no matter how optimized our code is, in order to start the application, a browser has to do several things:

  1. Download this 650KB file
  2. Ungzip it
  3. Parse it (this task is extremely expensive on mobile devices)
  4. Execute it (so we’ll have React runtime and our components become interactive)

You can read more on this process in Addy Osmani’s article.

In the end, my finding was that we owed this time increase to the big bundle size. As I mentioned, though, our vendor chunk alone was half of that size, which means that even the basic functionality (we had a pretty “rich” homepage) would already require a big chunk, so code-splitting could only solve the problem partially.

I have to say that we were modern enough: to help SEO and mobile clients, we had server-side rendering, implemented using Node.js + Express, where we fetched all the data the client needed. And it really helped – the user was able to see the markup (though this does not work on old mobile devices). But the problem is that a lot of time passes before the JavaScript is downloaded, parsed, and executed so that React can add event listeners and your page finally becomes interactive.

How slow is it?

Let’s take a look at the actual sizes of some applications. Note that I am usually logged out and have my adblocker enabled.

Airbnb (individual place page)

A bright engineering team, with many articles about their migration to React. Let’s see how it’s going on a typical page for an individual place:

There are tons of files, and in total they add up to 1.3MB gzipped. Again, 1.3MB to show you a place – pictures and a description. Of course there is a lot of user interaction, but at the end of the day, as a user, I want to look at the place – and I may well be logged out. Is it user-friendly? I don’t know, but I’d be happy with static content just to explore the requirements, read the description, etc. I am pretty sure that aggressive code-splitting allows them to ship features faster, but the price here is the user’s comfort and speed.

Twitter (regular feed of logged in user)

Just 3 initial files, and 1 loaded later (lazy loading, I guess):

  • init file, 161KB
  • common file, 249KB
  • home page file (page splitting!), 65KB
  • “native bundle”, 44.5KB (not sure what it is, but it was loaded afterwards)

In total, 475KB plus 44.5KB for the lazy-loaded file. Much better than Airbnb, but still a lot of stuff just to show you a feed. There is also a mobile version, which feels much lighter, but its size is actually similar.

Medium (Article about cost of JS)

Medium is a cool platform for blogs. So, essentially, it is an advanced place for texts, similar to the one you are reading right now. Let’s open the article I mentioned before, about the cost of JS:

Also 3 files:

  • main bundle, 337KB
  • common bundle, 183KB
  • notes, 22.6KB (maybe that amazing functionality to highlight commas)

In total 542KB, just to read an article. You can read it without JS at all, by the way, which means the JavaScript is not that crucial for the main task.

What can be done?

All these websites I’ve mentioned load on my latest 15” MBP with a stable internet connection in 2–3 seconds, and become fully usable after another 2 seconds, which is not that bad nowadays. The problem is how normal this has become over the last decade – Slack has a 0.5s delay between clicking and switching to a channel, and we don’t even notice it anymore.

Don’t take this rant as an attack on SPAs themselves – I like them (at the end of the day, I write one at my day job!), and they allow you to create very powerful user experiences that work across all platforms. However, a lot of the things I have worked on as SPAs in fact should not have been. For example, some time ago I made a portfolio website for my wife, and of course I did it using a cool SPA stack – but in fact it is just a blog with a lot of images and a couple of widgets; so I’m guilty of this trend more than many.

Also, feel free to take a look at the first CERN website. Just compare the navigation speed and responsiveness while you are on that website with what you get when you leave it for some external, modern one.


Firefox Nightly enables support for FIDO U2F Security Keys


This week, Mozilla enabled support for FIDO U2F (Universal 2nd Factor) security keys in the pre-beta release of Firefox, Firefox Nightly. Firefox is the second largest internet browser by user base. In the near future, 80% of the world’s desktop users, including Chrome and Opera users, will benefit from the open authentication standard and YubiKey support out of the box.

When GitHub added support for U2F in 2015, the open source community voted U2F the most wanted feature in Firefox. We are delighted to now see it happening. Yubico has helped with U2F integration for Firefox, and for other platforms and browsers that have added or are in the process of adding support, as this is critical for taking the YubiKey and U2F unphishable authentication to the global masses.

In today’s world, software installation brings with it not only added complexity for the user, but also the potential risk of malware. Chrome has already enabled millions of websites and services to deploy FIDO U2F seamlessly, mainly through Google and Facebook social login, to help mitigate that. Now with native support for FIDO U2F security keys in Firefox, millions more will benefit from strong, hardware-based two-factor authentication without the need to download or install client software.

Thanks Mozilla for working on increasing security and usability for internet users!

The post Firefox Nightly enables support for FIDO U2F Security Keys appeared first on Yubico.


RESTful DOOM


TL;DR I embedded a RESTful API into the classic 1993 game DOOM, allowing the game to be queried and controlled using HTTP and JSON.

“We fully expect to be the number one cause of decreased productivity in businesses around the world.”

   - ID Software press release (1993).


1993

1993 was an exciting year - Sleepless in Seattle opened in theatres, Microsoft shipped Windows NT 3.1, and Whitney Houston’s ‘I Will Always Love You’ was the best selling song for 2 straight months. Oh, and a game called Doom was released!

Doom was created by a small team at ID Software. Wikipedia describes it as one of the most significant and influential titles in video game history, and growing up I loved playing it. As an adult I couldn’t put down a book called Masters of DOOM, which describes the back story of ID Software.

ID Software has a super cool practice of releasing source code for their games. For the kind of hackers who lurk on /r/gamedev, an ID Software engine is an amazing resource to learn from. And lo, in 1997, the Doom engine source code was released, causing much happiness!

2017

I was having trouble finding a fun API to use in a talk I had to do. I had spent the normal amount of time procrastinating and stressing about having to give the talk, and wasn’t making any progress on building a compelling demo.

Late one night, out of the blue, I had the idea to create an API for Doom, now 24 years old(!), and obviously never designed to have an API. I could have some fun digging around the Doom source code and solve my API problem at the same time!

My random idea became RESTful-DOOM - a version of Doom which really does host a RESTful API! The API allows you to query and manipulate various game objects with standard HTTP requests as the game runs.

There were a few challenges:

  • Build an HTTP+JSON RESTful API server in C.
  • Run the server code inside the Doom engine, without breaking the game loop.
  • Figure out what kinds of things we can manipulate in the game world, and how to interact with them in memory to achieve the desired effect!

I chose chocolate-doom as the base Doom code to build on top of. I like this project because it aims to stick as close to the original experience as possible, while making it easy to compile and run on modern systems.

Hosting an HTTP API server inside Doom

chocolate-doom already uses SDL, so I added an -apiport <port> command line arg and used SDLNet_TCP_Open to open a TCP listen socket on startup. Servicing client connections while the game is running is a bit trickier, because the game must continue to update and render the world many times a second, without delay. We must not make any blocking network calls.

The first change I made was to edit D_ProcessEvents (the Doom main loop), to add a call to our new API servicing method API_RunIO. This calls SDLNet_TCP_Accept which accepts a new client, or immediately returns NULL if there are no clients.
If we have a new client, we add its socket to a SocketSet by calling SDLNet_TCP_AddSocket. Being part of a SocketSet allows us to use the non-blocking SDLNet_CheckSockets every tic to determine if there is data available.
If we do have data, API_ParseRequest attempts to parse the data as an HTTP request, using basic C string functions. I used cJSON and yuarel libraries to parse JSON and URI strings respectively.
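
Sketched out, the servicing method might look roughly like this (the socket globals and buffer handling are simplified):

static TCPsocket server_socket;   // opened at startup with SDLNet_TCP_Open
static TCPsocket client_socket;
static SDLNet_SocketSet socket_set;

void API_RunIO(void)
{
    // Accept a new client if one is waiting; returns NULL otherwise.
    TCPsocket incoming = SDLNet_TCP_Accept(server_socket);
    if (incoming != NULL)
    {
        client_socket = incoming;
        SDLNet_TCP_AddSocket(socket_set, client_socket);
    }

    // Zero timeout, so the game loop is never blocked.
    if (client_socket != NULL && SDLNet_CheckSockets(socket_set, 0) > 0)
    {
        char buffer[4096];
        int len = SDLNet_TCP_Recv(client_socket, buffer, sizeof(buffer) - 1);
        if (len > 0)
        {
            buffer[len] = '\0';
            // ...parse, route, and respond (API_ParseRequest etc.)...
        }
    }
}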

Routing an HTTP request involves looking at the method and path, then calling the right implementation for the requested action. Below is a snippet from the API_RouteRequest method:

if (strcmp(path, "api/player") == 0)
{
    if (strcmp(method, "PATCH") == 0) 
    {
        return API_PatchPlayer(json_body);
    }
    else if (strcmp(method, "GET") == 0)
    {
        return API_GetPlayer();
    }
    else if (strcmp(method, "DELETE") == 0) {
        return API_DeletePlayer();
    }
    return API_CreateErrorResponse(405, "Method not allowed");
}

Each action implementation (for example API_PatchPlayer) returns an api_response_t containing a status code and JSON response body.
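
Sketched as a struct (the real field names may differ):

typedef struct
{
    int status_code;   // e.g. 200, 404, 405
    cJSON *json;       // response body, serialized before sending
} api_response_t;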

Putting it all together, this is what the call graph looks like when handling a request for PATCH /api/player:

D_ProcessEvents();
  API_RunIO();
    SDLNet_CheckSockets();
    SDLNet_TCP_Recv();
    API_ParseRequest();
    API_RouteRequest();
      API_PatchPlayer();
    API_SendResponse();

Interfacing with Doom entities

Building an API into a game not designed for it is actually quite easy when the game is written in straight C. There are no private fields or class hierarchies to deal with. And the extern keyword makes it easy to reference global Doom variables in our API handling code, even if it feels a bit dirty ;)

The cJSON library is used to generate the JSON-formatted response data from API calls.

We want the API to provide access to the current map, map objects (scenery, powerups, monsters), doors, and the player. To do these things, we must understand how the Doom engine handles them.

The current episode and map are stored as global int variables. By updating these values, then calling the existing G_DeferedInitNew, we can trigger Doom to switch smoothly to any map and episode we like.
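
A sketch of that idea – gameepisode, gamemap, and gameskill are existing Doom globals, while API_WarpTo is a hypothetical handler:

extern int gameepisode;
extern int gamemap;
extern skill_t gameskill;

void API_WarpTo(int episode, int map)
{
    // G_DeferedInitNew stores the new episode/map; the switch happens next tic.
    G_DeferedInitNew(gameskill, episode, map);
}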

Map objects (mobj_t) implement both scenery items and monsters. I added an id field which gets initialized to a unique value for each new object. This is the id used in the API for routes like /api/world/objects/:id.

To create a new map object, we call the existing P_SpawnMobj with a position and type. This returns us an mobj_t* that we can update with other properties from the API request.

The local player (player_t) is stored in the first index of a global array of players. By updating fields of the player, we can control things like health and weapon used. Behind the scenes, a player is also an mobj_t.

A door in Doom is a line_t with a special door flag. To find all doors, we iterate through all line_t in the map, returning all lines which are marked as a door. To open or close the door, we call the existing EV_VerticalDoor to toggle the door state.

API Specification

An API spec describes the HTTP methods, routes, and data types that the API supports. For example, it will tell you the type of data to send in a POST call to /api/world/objects, and the type of data you should expect in response.
I wrote the API spec in RAML 1.0. It is also hosted in a public API Portal for easier reading.

Putting it all together

So now we have an HTTP+JSON server inside Doom, interfacing with Doom objects in memory, and have written a public API specification for it. Phew!
We can now query and manipulate this 24 year old game from any REST API client – here’s a video proving exactly that! Enjoy ;)

restful-doom on GitHub



USB Cables

Tag yourself, I'm "frayed."
5 public comments
expatpaul
244 days ago
Painfully true
Belgium
mooglemoogle
245 days ago
I’m “Carries data but not power”
Virginia
CaffieneKitty
243 days ago
I'm "Heavy and not very flexible" :-P
deezil
245 days ago
I need USB-C cables to become cheaper, but basically, if it's not "the good one", it gets thrown in the garbage. Monoprice has them for too cheap to worry about them.
Louisville, Kentucky
alt_text_bot
245 days ago
Tag yourself, I'm "frayed."
endlessmike
240 days ago
Heavy and not very flexible
Covarr
245 days ago
And then there's that weird proprietary cable I've had since like 2004 that looks at a glance like micro USB but isn't, so I get halfway across the country for my vacation with no way to charge anything at all and have to buy spares at the airport for exorbitant prices.
Moses Lake, WA
skittone
244 days ago
Throw it away.
bodly
244 days ago
Or label it.
JimB
241 days ago
My mate threw his away, then wondered why he could no longer connect his Panasonic camera to the computer...