rowid,title,contents,year,author,author_slug,published,url,topic
257,The (Switch)-Case for State Machines in User Interfaces,"You’re tasked with creating a login form. Email, password, submit button, done.
“This will be easy,” you think to yourself.
Login form by Selecto
You’ve made similar forms many times in the past; it’s essentially muscle memory at this point. You’re working closely with a designer, who gives you a beautiful, detailed mockup of a login form. Sure, you’ll have to translate the pixels to meaningful, responsive CSS values, but that’s the least of your problems.
As you’re writing up the HTML structure and CSS layout and styles for this form, you realize that you don’t know what the successful “logged in” page looks like. You remind the designer, who readily gives it to you. But then you start thinking more and more about how the login form is supposed to work.
What if login fails? Where do those errors show up?
Should we show errors differently if the user forgot to enter their email, or password, or both?
Or should the submit button be disabled?
Should we validate the email field?
When should we show validation errors – as they’re typing their email, or when they move to the password field, or when they click submit? (Note: many, many login forms get this wrong.)
When should the errors disappear?
What do we show during the login process? Some loading spinner?
What if loading takes too long, or a server error occurs?
Many more questions come up, and you (and your designer) are understandably frustrated. The lack of upfront specification opens the door to scope creep, which readily finds itself at home in all the unexplored edge cases.
Modeling Behavior
Describing all the possible user flows and business logic of an application can become tricky. Ironically, user stories might not tell the whole story – they often leave out potential edge-cases or small yet important bits of information.
However, one important (and very old) mathematical model of computation can be used for describing the behavior and all possible states of a user interface: the finite state machine.
The general idea, as it applies to user interfaces, is that all of our applications can be described (at some level of abstraction) as being in one, and only one, of a finite number of states at any given time. For example, we can describe our login form above in these states:
start - not submitted yet
loading - submitted and logging in
success - successfully logged in
error - login failed
Additionally, we can describe an application as accepting a finite number of events – that is, all the possible events that can be “sent” to the application, either from the user or some other external entity:
SUBMIT - pressing the submit button
RESOLVE - the server responds, indicating that login is successful
REJECT - the server responds, indicating that login failed
Then, we can combine these states and events to describe the transitions between them. That is, when the application is in one state and an event occurs, we can specify what the next state should be:
From the start state, when the SUBMIT event occurs, the app should be in the loading state.
From the loading state, when the RESOLVE event occurs, login succeeded and the app should be in the success state.
If login fails from the loading state (i.e., when the REJECT event occurs), the app should be in the error state.
From the error state, the user should be able to retry login: when the SUBMIT event occurs here, the app should go to the loading state.
Otherwise, if any other event occurs, don’t do anything and stay in the same state.
That’s a pretty thorough description, similar to a user story! It’s also a bit more symbolic than a user story (e.g., “when the SUBMIT event occurs” instead of “when the user presses the submit button”), and that’s for a reason. By representing states, events, and transitions symbolically, we can visualize what this state machine looks like:
Every state is represented by a box, and every event is connected to a transition arrow that connects two states. This makes it intuitive to follow the flow and understand what the next state should be given the current state and an event.
From Visuals to Code
Drawing a state machine doesn’t require any special software; in fact, using paper and pencil (in case anything changes!) does the job quite nicely. However, one common problem is handoff: it doesn’t matter how detailed a user story or how well-designed a visualization is, it eventually has to be coded in order for it to become part of a real application.
With the state machine model described above, the same visual description can be mapped directly to code. Traditionally, and as the title suggests, this is done using switch/case statements:
function loginMachine(state, event) {
  switch (state) {
    case 'start':
      if (event === 'SUBMIT') {
        return 'loading';
      }
      break;
    case 'loading':
      if (event === 'RESOLVE') {
        return 'success';
      } else if (event === 'REJECT') {
        return 'error';
      }
      break;
    case 'success':
      // Accept no further events
      break;
    case 'error':
      if (event === 'SUBMIT') {
        return 'loading';
      }
      break;
    default:
      // This should never occur
      return undefined;
  }
  // Any unhandled event leaves us in the same state
  return state;
}
console.log(loginMachine('start', 'SUBMIT'));
// => 'loading'
This is fine (I suppose) but personally, I find it much easier to use objects:
const loginMachine = {
  initial: 'start',
  states: {
    start: {
      on: { SUBMIT: 'loading' }
    },
    loading: {
      on: {
        REJECT: 'error',
        RESOLVE: 'success'
      }
    },
    error: {
      on: {
        SUBMIT: 'loading'
      }
    },
    success: {}
  }
};
function transition(state, event) {
  const stateConfig = loginMachine.states[state]; // Look up the current state
  return (stateConfig.on && stateConfig.on[event]) // Look up the next state based on the event (guarding against states like 'success' that define no transitions)
    || state; // If not found, stay in the current state
}
console.log(transition('start', 'SUBMIT'));
// => 'loading'
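As a quick usage sketch (my own example, not part of the machine definition above), we can feed the machine a sequence of events and watch the state evolve through a failed attempt followed by a successful retry:
let state = loginMachine.initial;
for (const event of ['SUBMIT', 'REJECT', 'SUBMIT', 'RESOLVE']) {
  state = transition(state, event);
  console.log(`${event} -> ${state}`);
}
// => SUBMIT -> loading
// => REJECT -> error
// => SUBMIT -> loading
// => RESOLVE -> success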
As you might have noticed, the loginMachine is a plain JS object, and can be written in JSON. This is important because it allows the machine to be visualized by a 3rd-party tool.
A Common Language Between Designers and Developers
Although finite state machines are a fundamental part of computer science, they have an amazing potential to bridge the application specification gap between designers and developers, as well as project managers, stakeholders, and more. By designing a state machine visually and with code, designers and developers alike can:
identify all possible states, and potentially missing states
describe exactly what should happen when an event occurs on a given state, and prevent that event from having unintended side-effects in other states (ever click a submit button more than once?)
eliminate impossible states and identify states that are “unreachable” (have no entry transition) or “sunken” (have no exit transition)
add features with full confidence of knowing what other states it might affect
simplify redundant states or complex user flows
create test paths for almost every possible user flow, and easily identify edge cases
collaborate better by understanding the entire application model equally.
Not a New Idea
I’m not the first to suggest that state machines can help bridge the gap between design and development.
Vince MingPu Shao wrote an article about designing UI states and communicating with developers effectively with finite state machines
User flow diagrams, which visually describe the paths that a user can take through an app to achieve certain goals, are essentially state machines. Numerous tools, from Sketch plugins to standalone apps, exist for creating them.
In 1999, Ian Horrocks wrote a book titled “Constructing the User Interface with Statecharts”, which takes state machines to the next level and describes the inherent difficulties (and solutions) with creating complex UIs. The ideas in the book are still relevant today.
More than a decade earlier, David Harel published “Statecharts: A Visual Formalism for Complex Systems”, in which the statechart - an extended hierarchical state machine model - is born.
State machines and statecharts have been used for complex systems and user interfaces, both physical and digital, for decades, and are especially prevalent in other industries, such as game development and embedded electronic systems. Even NASA uses statecharts for the Curiosity Rover and more, citing many benefits:
Visualized modeling
Precise diagrams
Automatic code generation
Comprehensive test coverage
Accommodation of late-breaking requirements changes
Moving Forward
It’s time that we improve how we communicate between designers and developers, as well as the way we develop UIs, to deliver the best possible, bug-free user experience. There is so much more to state machines and statecharts than just being a different way of designing and coding. For more resources:
The World of Statecharts is a comprehensive guide by Erik Mogensen in using statecharts in your applications
The Statechart Community on Spectrum is always full of interesting ideas and questions related to state machines, statecharts, and software modeling
I gave a talk at React Rally over a year ago about how state machines (finite automata) can improve the way we develop applications. The latest one is from Reactive Conf, where I demonstrate how statecharts can be used to automatically generate test cases.
I have also been working on XState, which is a library for “state machines and statecharts for the modern web”. You can create and visualize statecharts in JavaScript, and use them in any framework (and soon enough, multiple different languages).
I’m excited about the future of developing web and mobile applications with statecharts, especially with regard to faster design/development cycles, auto-generated testing, better error prevention, comprehensive analytics, and even the use of model-based reinforcement learning and artificial intelligence to greatly improve the user experience.",2018,David Khourshid,davidkhourshid,2018-12-12T00:00:00+00:00,https://24ways.org/2018/state-machines-in-user-interfaces/,code
242,Creating My First Chrome Extension,"Writing a Chrome Extension isn’t as scary as it seems!
Not too long ago, I used a Chrome extension called 20 Cubed. I’m far-sighted, and being a software engineer makes it difficult to maintain distance vision. So I used 20 Cubed to remind myself to look away from my screen and rest my eyes. I loved its simple interface and design. I loved it so much, I often forgot to turn it off in the middle of presentations, where it would take over my entire screen. Oops.
Unfortunately, the developer stopped updating the extension and removed it from Chrome’s extension library. I was so sad. None of the other eye rest extensions out there matched my design aesthetic, so I decided to create my own! Want to do the same?
Fortunately, Google has some respectable documentation on how to create an extension. And remember, Chrome extensions are just HTML, CSS, and JavaScript. You can add libraries and frameworks, or you can just code the “old-fashioned” way. Sky’s the limit!
Setup
But first, some things you’ll need to know about before getting started:
Callbacks
Timeouts
Chrome Dev Tools
Developing with Chrome extension methods requires a lot of callbacks. If you’ve never experienced the joy of callback hell, creating a Chrome extension will introduce you to this concept. However, things can get confusing pretty quickly. I’d highly recommend brushing up on that subject before getting started.
(Image: Hyperbole and a Half)
Timeouts and Intervals are another thing you might want to brush up on. While creating this extension, I didn’t consider the fact that I’d be juggling three timers. And I probably would’ve saved time organizing those and reading up on the Chrome extension Alarms documentation beforehand. But more on that in a bit.
On the note of organization, abstraction is important! You might have any combination of the following:
The Chrome extension options page
The popup from the Chrome Menu
The windows or tabs you create
The background scripts
And that can get unwieldy. You might also edit the existing tabs or windows in the browser, which you’ll probably want as a separate script too. Note that this tutorial only covers creating your own customized window rather than editing existing windows or tabs.
Alright, now that you know all that up front, let’s get going!
Documentation
TL;DR READ THE DOCS.
A few things to get started:
Read Google’s primer on browser extensions
Have a look at their Getting started tutorial
Check out their overview on Chrome Extensions
This overview discusses the Chrome extension files, architecture, APIs, and communication between pages. Funnily enough, I only discovered the Overview page after creating my extension.
The manifest.json file gives the browser information about the extension, including general information, where to find your extension files and icons, and API permissions required. Here’s what my manifest.json looked like, for example:
https://github.com/jennz0r/eye-rest/blob/master/manifest.json
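If you just want a sense of the shape before clicking through, here’s a minimal sketch along those lines (manifest v2; the file names and values below are illustrative assumptions, not copied from the real file):
{
  "manifest_version": 2,
  "name": "Eye Rest",
  "version": "1.0",
  "description": "A reminder to rest your eyes every twenty minutes.",
  "background": {
    "scripts": ["helpers.js", "background.js"],
    "persistent": false
  },
  "page_action": {
    "default_popup": "popup.html"
  },
  "options_page": "options.html",
  "permissions": ["alarms", "storage", "declarativeContent"]
}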
Because I’m a visual learner, I found the images that describe the extension’s architecture most helpful.
To clarify this diagram, the background.js file is the extension’s event handler. It’s constantly listening for browser events, which you’ll feed to it using the Chrome Extension API. Google says that an effective background script is only loaded when it is needed and unloaded when it goes idle.
The Popup is the little window that appears when you click on an extension’s icon in the Chrome Menu. It consists of markup and scripts, and you can tell the browser where to find it in the manifest.json under page_action: { "default_popup": FILE_NAME_HERE }.
The Options page is exactly as it says. This displays customizable options only visible to the user when they right-click on the extension’s icon in the Chrome menu and choose “Options”. This also consists of markup and scripts, and you can tell the browser where to find it in the manifest.json under options_page: FILE_NAME_HERE.
Content scripts are any scripts that will interact with any web windows or tabs that the user has open. These scripts will also interact with any tabs or windows opened by your extension.
Debugging
A quick note: don’t forget the debugging tutorial!
Just like any other Chrome window, every piece of an extension has an inspector and dev tools. If (read: when) you run into errors (as I did), you’re likely to have several inspector windows open – one for the background script, one for the popup, one for the options, and one for the window or tab the extension is interacting with.
For example, I kept seeing the error “This request exceeds the MAX_WRITE_OPERATIONS_PER_HOUR quota.” Well, it turns out there are limitations on how often you can sync stored information.
Another error I kept seeing was “Alarm delay is less than minimum of 1 minutes. In released .crx, alarm “ALARM_NAME_HERE” will fire in approximately 1 minutes”. Well, it turns out there are minimum interval times for alarms.
Chrome Extension creation definitely benefits from debugging skills. Especially with callbacks and listeners, good old fashioned console.log can really help!
Me adding a ton of `console.log`s while trying to debug my alarms.
Eye Rest Functionality
Ok, so what is the extension I created? Again, it’s a way to rest your eyes every twenty minutes for twenty seconds. So, the basic functionality should look like the following:
If the extension is running AND
If the user has not clicked Pause in the Popup HTML AND
If the counter in the Popup HTML is down to 00:00 THEN
Open a new window with Timer HTML AND
Start a 20 sec countdown in Timer HTML AND
Reset the Popup HTML counter to 20:00
If the Timer HTML is down to 0 sec THEN
Close that window. Rinse. Repeat.
Sounds simple enough, but wow, these timers became convoluted! Of all the Chrome extensions I decided to create, I decided to make one that’s heavily dependent on time, intervals, and having those in sync with each other. In other words, I made this unnecessarily complicated and didn’t realize until I started coding.
For visual reference of my confusion, check out the GitHub repository for Eye Rest. (And yes, it’s a pun.)
API
Now let’s discuss the APIs that I used to build this extension.
Alarms
What even are alarms? I didn’t know either.
Alarms are basically Chrome’s setTimeout and setInterval. They exist because, as Google says…
DOM-based timers, such as window.setTimeout() or window.setInterval(), are not honored in non-persistent background scripts if they trigger when the event page is dormant.
For more information, check out this background migration doc.
One interesting note about alarms in Chrome extensions is that they are persistent. Garbage collection with Chrome extension alarms seems unreliable at best. I didn’t have much luck using the clearAll method to remove alarms I created on previous extension loads or installs. A workaround (read: hack) is to specify a unique alarm name every time your extension is loaded and clearing any other alarms without that unique name.
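A rough sketch of that workaround (the alarm name scheme and interval here are my own illustrative choices):
// Build an alarm name unique to this load of the extension
const alarmName = 'eye-rest-' + Date.now();

// Clear any alarms left over from previous loads or installs
chrome.alarms.getAll(function(alarms) {
  alarms.forEach(function(alarm) {
    if (alarm.name !== alarmName) {
      chrome.alarms.clear(alarm.name);
    }
  });
});

// Create a fresh repeating alarm under the unique name
chrome.alarms.create(alarmName, { periodInMinutes: 20 });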
Background Scripts
For Eye Rest, I have two background scripts. One is my actual initializer and event listener, and the other is a helpers file.
I wanted to share a couple of functions between my Background and Popup scripts. Specifically, the clearAndCreateAlarm function. I wanted my background script to clear any existing alarms, create a new alarm, and add remaining time until the next alarm to local storage immediately upon extension load. To make the function available to the Background script, I added helpers.js as the first item under background > scripts in my manifest.json.
I also wanted my Popup script to do the same things when the user has unpaused the extension’s functionality. To make the function available to the Popup script, I just include the helpers script in the Popup HTML file.
Other APIs
Windows
I use the Windows API to create the Timer window when the time of my alarm is up. The window creation is initiated by my Background script.
One day, while coding late into the evening, I found it very confusing that the window.create method included url as an option. I assumed it was meant to be an external web address. A friend pondered that there must be an option to specify the window’s HTML. Until then, it hadn’t dawned on me that the url could be relative. Duh. I was tired!
I pass the timer.html as the url option, as well as type, size, position, and other visual options.
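That call looks roughly like this (the size and position values are my guesses, not the extension’s real ones):
chrome.windows.create({
  url: 'timer.html', // relative to the extension, not an external address
  type: 'popup',
  width: 500,
  height: 500,
  top: 0,
  left: 0
});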
Storage
Maybe you want to pass information back and forth between the Background script and your Popup script? You can do that using Chrome or local storage. One benefit of using local storage over Chrome’s storage is avoiding quotas and write operation maximums.
I wanted to pass the time at which the latest alarm was set, the time to the next alarm, and whether or not the timer is paused between the Background and Popup scripts. Because the countdown should change every second, it’s quite complicated and requires lots of writes. That’s why I went with the user’s local storage. You can see me getting and setting those variables in my Background, Helper, and Popup scripts. Just search for date, nextAlarmTime, and isPaused.
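As a sketch of that round trip, using the key names mentioned above (localStorage only stores strings, hence the conversions):
// In the Background script, when a new alarm is set
localStorage.setItem('nextAlarmTime', String(Date.now() + 20 * 60 * 1000));
localStorage.setItem('isPaused', 'false');

// In the Popup script, when updating the countdown
const nextAlarmTime = Number(localStorage.getItem('nextAlarmTime'));
const isPaused = localStorage.getItem('isPaused') === 'true';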
Declarative Content
The Declarative Content API allows you to show your extension’s page action based on several types of matches, without needing to take a host permission or inject a content script. So you’ll need this to get your extension to work in the browser!
You can see me set this in my Background script. Because I want my extension’s popup to appear on every page one is browsing, I leave the page matchers empty.
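Here’s roughly what that looks like – a PageStateMatcher with no conditions matches every page (this mirrors the snippet in Google’s getting-started tutorial):
chrome.runtime.onInstalled.addListener(function() {
  chrome.declarativeContent.onPageChanged.removeRules(undefined, function() {
    chrome.declarativeContent.onPageChanged.addRules([{
      // Empty matcher: the rule applies on every page
      conditions: [new chrome.declarativeContent.PageStateMatcher({})],
      actions: [new chrome.declarativeContent.ShowPageAction()]
    }]);
  });
});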
There are many more APIs for Chrome apps and extensions, so make sure to surf around and see what features are available!
The Extension
Here’s what my original Popup looked like before I added styles.
And here’s what it looks like with new styles. I guess I’m going for a Nickelodeon feel.
And here’s the Timer window and Popup together!
Publishing
Publishing is a cinch. You just zip up your files, create a new Google Developer account (or use an existing one), upload the files, add some details, and pay a one-time $5 fee. That’s all! Then your extension will be available on the Chrome extension store! Neato :D
My extension is now available for you to install.
Conclusion
I thought creating a time based Chrome Extension would be quick and easy. I was wrong. It was more complicated than I thought! But it’s definitely achievable with some time, persistence, and good ole Google searches.
Eventually, I’d like to add more interactive elements to Eye Rest. For example, hitting the YouTube API to grab a silly or cute video as a reward for looking away during the 20 sec countdown and not closing the timer window. This harkens back to one of my first web projects, Toothtimer, from 2012. Or maybe a way to change the background colors of the Timer and Popup!
Either way, with Eye Rest’s framework built out, I’m feeling fearless about future feature adds! Building this Chrome extension took some broken nails, achy shoulders, and tired eyes, but now Eye Rest can tell me to give my eyes a break every 20 minutes.",2018,Jennifer Wong,jenniferwong,2018-12-05T00:00:00+00:00,https://24ways.org/2018/my-first-chrome-extension/,code
254,What I Learned in Six Years at GDS,"When I joined the Government Digital Service in April 2012, GOV.UK was just going into public beta. GDS was a completely new organisation, part of the Cabinet Office, with a mission to stop wasting government money on over-complicated and underperforming big IT projects and instead deliver simple, useful services for the public.
Lots of people who were experts in their fields were drawn in by this inspiring mission, and I learned loads from working with some true leaders. Here are three of the main things I learned.
1. What is the user need?
The main discipline I learned from my time at GDS was to always ask ‘what is the user need?’ It’s very easy to build something that seems like a good idea, but until you’ve identified what problem you are solving for the user, you can’t be sure that you are building something that is going to help solve an actual problem.
A really good example of this is GOV.UK Notify. This service was originally conceived of as a status tracker; a “where’s my stuff” for government services. For example, if you apply for a passport online, it can take up to six weeks to arrive. After a few weeks, you might feel anxious and phone the Home Office to ask what’s happening. The idea of the status tracker was to allow you to get this information online, saving your time and saving government money on call centres.
The project started, as all GDS projects do, with a discovery. The main purpose of a discovery is to identify the users’ needs. At the end of this discovery, the team realised that a status tracker wasn’t the way to address the problem. As they wrote in this blog post:
Status tracking tools are often just ‘channel shift’ for anxiety. They solve the symptom and not the problem. They do make it more convenient for people to reduce their anxiety, but they still require them to get anxious enough to request an update in the first place.
What would actually address the user need would be to give you the information before you get anxious about where your passport is. For example, when your application is received, email you to let you know when to expect it, and perhaps text you at various points in the process to let you know how it’s going. So instead of a status tracker, the team built GOV.UK Notify, to make it easy for government services to incorporate text, email and even letter notifications into their processes.
Making sure you know your user
At GDS user needs were taken very seriously. We had a user research lab on site and everyone was required to spend two hours observing user research every six weeks. Ideally you’d observe users working with things you’d built, but even if they weren’t, it was an incredibly valuable experience, and something you should seek out if you are able to.
Even if we think we understand our users very well, it is very enlightening to see how users actually use your stuff. Partly because in technology we tend to be power users and the average user doesn’t use technology the same way we do. But even if you are building things for other developers, someone who is unfamiliar with it will interact with it in a way that may be very different to what you have envisaged.
User needs is not just about building things
Asking the question “what is the user need?” really helps focus on why you are doing what you are doing. It keeps things on track, and helps the team think about what the actual desired end goal is (and should be).
Thinking about user needs has helped me with lots of things, not just building services. For example, you are raising a pull request. What’s the user need? The reviewer needs to be able to easily understand what the change you are proposing is, why you are proposing that change and any areas you need particular help on with the review.
Or you are writing an email to a colleague. What’s the user need? What are you hoping the reader will learn, understand or do as a result of your email?
2. Make things open: it makes things better
The second important thing I learned at GDS was ‘make things open: it makes things better’. This works on many levels: being open about your strategy, blogging about what you are doing and what you’ve learned (including mistakes), and – the part that I got most involved in – coding in the open.
Talking about your work helps clarify it
One thing we did really well at GDS was blogging – a lot – about what we were working on. Blogging about what you are working on is really valuable for the writer because it forces you to think logically about what you are doing in order to tell a good story. If you are blogging about upcoming work, it makes you think clearly about why you’re doing it; and it also means that people can comment on the blog post. Often people had really useful suggestions or clarifying questions.
It’s also really valuable to blog about what you’ve learned, especially if you’ve made a mistake. It makes sure you’ve learned the lesson and helps others avoid making the same mistakes. As well as blogging about lessons learned, GOV.UK also publishes incident reports when there is an outage or service degradation. Being open about things like this really engenders an atmosphere of trust and safe learning; which helps make things better.
Coding in the open has a lot of benefits
In my last year at GDS I was the Open Source Lead, and one of the things I focused on was the requirement that all new government source code should be open. From the start, GDS coded in the open (the GitHub organisation still has the non-intuitive name alphagov, because it was created by the team doing the original Alpha of GOV.UK, before GDS was even formed).
When I first joined GDS I was a little nervous about the fact that anyone could see my code. I worried about people seeing my mistakes, or receiving critical code reviews. (Setting people’s minds at rest about these things is why it’s crucial to have good standards around communication and positive behaviour - even a critical code review should be considerately given).
But I quickly realised there were huge advantages to coding in the open. In the same way as blogging your decisions makes you think carefully about whether they are good ones and what evidence you have, the fact that anyone in the world could see your code (even if, in practice, they probably won’t be looking) makes everyone raise their game slightly. The very fact that you know it’s open, makes you make it a bit better.
It helps with lots of other things as well, for example it makes it easier to collaborate with people and share your work. And now that I’ve left GDS, it’s so useful to be able to look back at code I worked on to remember how things worked.
Share what you learn
It’s sometimes hard to know where to start with being open about things, but it gets easier and becomes more natural as you practice. It helps you clarify your thoughts and follow through on what you’ve decided to do. Working at GDS when this was a very important principle really helped me learn how to do this well.
3. Do the hard work to make it simple (tech edition)
‘Start with user needs’ and ‘Make things open: it makes things better’ are two of the excellent government design principles. They are all good, but the third thing that I want to talk about is number 4: ‘Do the hard work to make it simple’, and specifically, how this manifests itself in the way we build technology.
At GDS, we worked very hard to do the hard work to make the code, systems and technology we built simple for those who came after us. For example, writing good commit messages is taken very seriously. There is commit message guidance, and it was not unusual for a pull request review to ask for a commit message to be rewritten to make a commit message clearer.
We worked very hard on making pull requests good, keeping the reviewer in mind and making it clear to the user how best to review it.
Reviewing others’ pull requests is the highest priority so that no-one is blocked, and teams have screens showing the status of open pull requests (using fourth wall) and we even had a ‘pull request seal’, a bot that publishes pull requests to Slack and gets angry if they are uncommented on for more than two days.
Making it easier for developers to support the site
Another example of doing the hard work to make it simple was the opsmanual. I spent two years on the web operations team on GOV.UK, and one of the things I loved about that team was the huge efforts everyone went to to be open and inclusive to developers.
The team had some people who were really expert in web ops, but they were all incredibly helpful when bringing me on board as a developer with no previous experience of web ops, and also patiently explaining things whenever other devs in similar positions came with questions.
The main artefact of this was the opsmanual, which contained write-ups of how to do lots of things. One of the best things was that every alert that might lead to someone being woken up in the middle of the night had a link to documentation on the opsmanual which detailed what the alert meant and some suggested actions that could be taken to address it.
This was important because most of the devs on GOV.UK were on the on-call rota, so if they were woken at 3am by an alert they’d never seen before, the opsmanual information might give them everything they needed to solve it, without the years of web ops training and the deep familiarity with the GOV.UK infrastructure that came with working on it every day.
Developers are users too
Doing the hard work to make it simple means that users can do what they need to do, and this applies even when the users are your developer peers. At GDS I really learned how to focus on simplicity for the user, and how much better this makes things work.
These three principles help us make great things
I learned so much more in my six years at GDS. For example, the civil service has a very fair way of interviewing. I learned about the importance of good comms, about working late responsibly, and about the value of content design.
And the real heart of what I learned, the guiding principles that help us deliver great products, is encapsulated by the three things I’ve talked about here: think about the user need, make things open, and do the hard work to make it simple.",2018,Anna Shipman,annashipman,2018-12-08T00:00:00+00:00,https://24ways.org/2018/what-i-learned-in-six-years-at-gds/,business
256,Develop Your Naturalist Superpowers with Observable Notebooks and iNaturalist,"We’re going to level up your knowledge of what animals you might see in an area at a particular time of year - a skill every naturalist* strives for - using technology! Using iNaturalist and Observable Notebooks we’re going to prototype seasonality graphs for particular species in an area, and automatically create a guide to what animals you might see in each month.
*(a Naturalist is someone who likes learning about nature, not someone who’s a fan of being naked, that’s a ‘Naturist’… different thing!)
Looking for critters in rocky intertidal habitats
One of my favourite things to do is going rockpooling, or as we call it over here in California, ‘tidepooling’. Amounting to the same thing, it’s going to a beach that has rocks where the tide covers then uncovers little pools of water at different times of the day. All sorts of fun creatures and life can be found in this ‘rocky intertidal habitat’.
A particularly exciting creature that lives here is the Nudibranch, a type of super colourful ‘sea slug’. There are over 3000 species of Nudibranch worldwide. (The word “nudibranch” comes from the Latin nudus, naked, and the Greek βρανχια / brankhia, gills.)
They are however quite tricky to find! Even though they are often brightly coloured and interestingly shaped, some of them are very small, and in our part of the world in the Bay Area in California their appearance in our rockpools is seasonal. We see them more often in Summer months, despite the not-as-low tides as in our Winter and Spring seasons.
My favourite place to go tidepooling here is Pillar Point in Half Moon Bay (at other times of the year more famously known for the surf competition ‘Mavericks’). The rockpools there are rich in species diversity, of varied types and water-coverage habitat zones, as well as being relatively accessible.
I was rockpooling at Pillar Point recently with my parents and we talked to a lady who remarked that she hadn’t seen any Nudibranchs on her visit this time. I realised that having an idea of what species to find where, and at what time of year is one of the many superpower goals of every budding Naturalist.
Using technology and the crowdsourced species observations of the iNaturalist community we can shortcut our way to this superpower!
Finding nearby animals with iNaturalist
We’re going to be getting our information about what animals you can see in Pillar Point using iNaturalist. iNaturalist is a really fun platform that helps connect people to nature and report their findings of life in the outdoors. It is also a community of nature-loving people who help each other identify and confirm those observations. iNaturalist is a project run as a joint initiative by the California Academy of Sciences and the National Geographic Society.
I’ve been using iNaturalist for over two years to record and identify plants and animals that I’ve found in the outdoors. I use their iPhone app to upload my pictures, which then uses machine learning algorithms to make an initial guess at what it is I’ve seen. The community is really active, and I often find someone else has verified or updated my species guess pretty soon after posting.
This process is great because once an observation has been identified by at least two people it becomes ‘verified’ and is considered research grade. Research grade observations get exported and used by scientists, as well as being indexed by the Global Biodiversity Information Facility, GBIF.
iNaturalist has a great API and API explorer, which makes interacting and prototyping using iNaturalist data really fun. For example, if you go to the API explorer and expand the Observations : Search and fetch section and then the GET /observations API, you get a selection of input boxes that allow you to play with options that you can then pass to the API when you click the ‘Try it out’ button.
You’ll then get a URL that looks a bit like
https://api.inaturalist.org/v1/observations?captive=false&geo=true&verifiable=true&taxon_id=47113&lat=37.495461&lng=-122.499584&radius=5&order=desc&order_by=created_at
which you can call and interrogate using a programming language of your choice.
If you would like to see an all-JavaScript application that uses the iNaturalist API, take a look at OwlsNearMe.com which Simon and I built one weekend earlier this year. It gets your location and shows you all iNaturalist observations of owls near you and lists which species you are likely to see (not adjusted for season).
Rapid development using Observable Notebooks
We’re going to be using Observable Notebooks to prototype our examples, pulling data down from iNaturalist. I really like using visual notebooks like Observable, they are great for learning and building things quickly. You may be familiar with Jupyter notebooks for Python which is similar but takes a bit of setup to get going - I often use these for prototyping too. Observable is amazing for querying and visualising data with JavaScript and since it is a hosted product it doesn’t require any setup at all.
You can follow along and play with this example on my Observable notebook. If you create an account there you can fork my notebook and create your own version of this example.
Each ‘notebook’ consists of a page with a column of ‘cells’, similar to what you get in a spreadsheet. A cell can contain Markdown text or JavaScript code and the output of evaluating the cell appears above the code that generated it. There are lots of tutorials out there on Observable Notebooks, I like this code introduction one from Observable (and D3) creator Mike Bostock.
Developing your Naturalist superpowers
If you have an idea of what plants and critters you might see in a place at the time you visit, you can hone in on what you want to study and train your Naturalist eye to better identify the life around you.
For our example, we care about wildlife we can see at Pillar Point, so we need a way of letting the iNaturalist API know which area we are interested in.
We could use a latitude, longitude and radius for this, but a rectangular bounding box is a better shape for the reef. We can use this tool to draw the area we want to search within: boundingbox.klokantech.com
The tool lets you export the bounding box in several formats using the dropdown at the bottom left under the map. We are going to use the ‘DublinCore’ format as it’s closest to the format needed by the iNaturalist API.
westlimit=-122.50542; southlimit=37.492805; eastlimit=-122.492738; northlimit=37.499811
A quick map primer:
The higher the latitude the more north it is
The lower the latitude the more south it is
Latitude 0 = the equator
The higher the longitude the more east it is of Greenwich
The lower the longitude the more west it is of Greenwich
Longitude 0 = Greenwich
In the iNaturalist API we want to use the parameters nelat, nelng, swlat, swlng to create a query that looks inside a bounding box of Pillar Point near Half Moon Bay in California:
nelat = highest latitude = north limit = 37.499811
nelng = highest longitude = east limit = -122.492738
swlat = smallest latitude = south limit = 37.492805
swlng = smallest longitude = west limit = -122.50542
As API parameters these look like this:
?nelat=37.499811&nelng=-122.492738&swlat=37.492805&swlng=-122.50542
These parameters in this format can be used for most of the iNaturalist API methods.
Nudibranch seasonality in Pillar Point
We can use the iNaturalist observation_histogram API to get a count of Nudibranch observations per week-of-year across all time and within our Pillar Point bounding box.
In addition to the geographic parameters that we just worked out, we are also sending the taxon_id of 47113, which is iNaturalist’s internal number associated with the Nudibranch taxon. By using this we can get all species which are under the parent ‘Order Nudibranchia’.
Another useful piece of naturalist knowledge is understanding the biological classification scheme of Taxonomic Rank - roughly, when a species has a Latin name of two words e.g. ‘Glaucus atlanticus’, the first Latin word is the ‘Genus’, like a family name (‘Glaucus’), and the second word identifies that particular species, like a given name (‘atlanticus’).
The two Latin words together indicate a specific species; the term we use colloquially to refer to a type of animal often differs wildly from region to region, and sometimes the same common name in two countries can refer to two different species. The common names for Glaucus atlanticus (which incidentally is my favourite sea slug) include: sea swallow, blue angel, blue glaucus, blue dragon, blue sea slug and blue ocean slug! Because this gets super confusing, scientists like using this Latin name format instead.
The following piece of code asks the iNaturalist Histogram API to return per-week counts for verified observations of Nudibranchs within our Pillar Point bounding box:
pillar_point_counts_per_week = fetch(
  "https://api.inaturalist.org/v1/observations/histogram?taxon_id=47113&nelat=37.499811&nelng=-122.492738&swlat=37.492805&swlng=-122.50542&date_field=observed&interval=week_of_year&verifiable=true"
).then(response => {
  return response.json();
})
Our next step is to take this data and draw a graph! We’ll be using Vega-Lite for this, which is a fab JavaScript graphing library that is also easy and fun to use with Observable Notebooks.
(Here is a great tutorial on exploring data and drawing graphs with Observable and Vega-Lite)
The iNaturalist API returns data that looks like this:
{
  "total_results": 53,
  "page": 1,
  "per_page": 53,
  "results": {
    "week_of_year": {
      "1": 136,
      "2": 20,
      "3": 150,
      "4": 65,
      "5": 186,
      "6": 74,
      "7": 47,
      "8": 87,
      "9": 64,
      "10": 56,
But for our Vega-Lite graph we need data that looks like this:
[{
  "week": "01",
  "value": 136
}, {
  "week": "02",
  "value": 20
}, ...]
We can convert what we get back from the API to the second format using a loop that iterates over the object keys:
objects_to_plot = {
  let objects = [];
  Object.keys(pillar_point_counts_per_week.results.week_of_year).map(function(week_index) {
    objects.push({
      week: `Wk ${week_index.toString()}`,
      observations: pillar_point_counts_per_week.results.week_of_year[week_index]
    });
  })
  return objects;
}
We can then plug this into Vega-Lite to draw us a graph:
vegalite({
  data: {values: objects_to_plot},
  mark: "bar",
  encoding: {
    x: {field: "week", type: "nominal", sort: null},
    y: {field: "observations", type: "quantitative"}
  },
  width: width * 0.9
})
It’s worth noting that we have a lot of observations of Nudibranchs particularly at Pillar Point due in no small part to the intertidal monitoring research that Alison Young and Rebecca Johnson facilitate for the California Academy of Sciences.
So, what if we want to look for the seasonality of observations of a particular species of adorable sea slug? We want our interface to have a select box with a list of all the species you might find at any time of year. We can do this using the species_counts API to create us an object with the iNaturalist species ID and common & Latin names.
pillar_point_nudibranches = {
  let api_results = await fetch(
    "https://api.inaturalist.org/v1/observations/species_counts?taxon_id=47113&nelat=37.499811&nelng=-122.492738&swlat=37.492805&swlng=-122.50542&date_field=observed&verifiable=true"
  ).then(r => r.json())
  let species_list = api_results.results.map(i => ({
    value: i.taxon.id,
    label: `${i.taxon.preferred_common_name} (${i.taxon.name})`
  }));
  return species_list
}
We can create an interactive select box by importing code from Jeremy Ashkenas’ Observable Notebook: add import {select} from "@jashkenas/inputs" to a cell anywhere in our notebook. Observable is magic: like a spreadsheet, the order of the cells doesn’t matter - if one cell is referenced by any other cell then when that cell updates all the other cells refresh themselves. You can also import and reference one notebook from another!
viewof select_species = select({
  title: "Which Nudibranch do you want to see seasonality for?",
  options: [{value: "", label: "All the Nudibranchs!"}, ...pillar_point_nudibranches],
  value: ""
})
Then we go back to our old favourite, the histogram API just like before, only this time we are calling it with the value created by our select box ${select_species} as taxon_id instead of the number 47113.
pillar_point_counts_per_month_per_species = fetch(
  `https://api.inaturalist.org/v1/observations/histogram?taxon_id=${select_species}&nelat=37.499811&nelng=-122.492738&swlat=37.492805&swlng=-122.50542&date_field=observed&interval=month_of_year&verifiable=true`
).then(r => r.json())
Now for the fun graph bit! As we did before, we re-format the result of the API into a format compatible with Vega-Lite:
objects_to_plot_species_month = {
  let objects = [];
  Object.keys(pillar_point_counts_per_month_per_species.results.month_of_year).map(function(month_index) {
    objects.push({
      month: (new Date(2018, (month_index - 1), 1)).toLocaleString("en", {month: "long"}),
      observations: pillar_point_counts_per_month_per_species.results.month_of_year[month_index]
    });
  })
  return objects;
}
(Note that in the above code we are creating a date object for our specific month, and using toLocaleString() to get the longer English name for the month. Because the JavaScript Date object counts January as 0, we use month_index - 1 to get the correct month)
And we draw the graph as we did before, only now if you interact with the select box in Observable the graph will dynamically update!
vegalite({
  data: {values: objects_to_plot_species_month},
  mark: "bar",
  encoding: {
    x: {field: "month", type: "nominal", sort: null},
    y: {field: "observations", type: "quantitative"}
  },
  width: width * 0.9
})
Now we can see when is the best time of year to plan to go tidepooling in Pillar Point if we want to find a specific species of Nudibranch.
This tool is great for planning when to go rockpooling at Pillar Point, but what about if you are going this month and want to pre-train your eye with what to look for in order to impress your friends with your knowledge of Nudibranchs?
Well… we can create ourselves a dynamic guide with a list of the species, their photo, name and how many times they have been observed in that month of the year!
Our select box this time looks as follows, simpler than before but assigning the month value to the variable selected_month.
viewof selected_month = select({
  title: "When do you want to see Nudibranchs?",
  options: [
    { label: "Whenever", value: "" },
    { label: "January", value: "1" },
    { label: "February", value: "2" },
    { label: "March", value: "3" },
    { label: "April", value: "4" },
    { label: "May", value: "5" },
    { label: "June", value: "6" },
    { label: "July", value: "7" },
    { label: "August", value: "8" },
    { label: "September", value: "9" },
    { label: "October", value: "10" },
    { label: "November", value: "11" },
    { label: "December", value: "12" },
  ],
  value: ""
})
We then can use the species_counts API to get all the relevant information about which species we can see in month=${selected_month}. We’ll be able to reference this response object and its values later with the variable we just created, eg: all_species_data.results[0].taxon.name.
all_species_data = fetch(
  `https://api.inaturalist.org/v1/observations/species_counts?taxon_id=47113&month=${selected_month}&nelat=37.499811&nelng=-122.492738&swlat=37.492805&swlng=-122.50542&verifiable=true`
).then(r => r.json())
You can render HTML directly in a notebook cell using Observable’s html tagged template literal:
html`If you go to Pillar Point ${
  {
    "": "",
    "1": "in January",
    "2": "in February",
    "3": "in March",
    "4": "in April",
    "5": "in May",
    "6": "in June",
    "7": "in July",
    "8": "in August",
    "9": "in September",
    "10": "in October",
    "11": "in November",
    "12": "in December",
  }[selected_month]
} you might see…
${all_species_data.results.map(s => `
  ${s.taxon.name}
  Seen ${s.count} times
`)}`
These few lines of HTML are all you need to get this exciting dynamic guide to what Nudibranchs you will see in each month!
Play with it yourself in this Observable Notebook.
Conclusion
I hope by playing with these examples you have an idea of how powerful it can be to prototype using Observable Notebooks and how you can use the incredible crowdsourced community data and APIs from iNaturalist to augment your naturalist skills and impress your friends with your new ‘knowledge of nature’ superpower.
Lastly I strongly encourage you to get outside on a low tide to explore your local rocky intertidal habitat, and all the amazing critters that live there.
Here is a great introduction video to tidepooling / rockpooling, by Rebecca Johnson and Alison Young from the California Academy of Sciences.",2018,Natalie Downe,nataliedowne,2018-12-18T00:00:00+00:00,https://24ways.org/2018/observable-notebooks-and-inaturalist/,code
264,Dynamic Social Sharing Images,"Way back when social media was new, you could be pretty sure that whatever you posted would be read by those who follow you. If you’d written a blog post and you wanted to share it with those who follow you, you could post a link and your followers would see it in their streams. Oh heady days!
With so many social channels and a proliferation of content and promotions flying past in everyone’s streams, it’s no longer enough to share content on social media, you have to actively sell it if you want it to be seen. You really need to make the most of every opportunity to catch a reader’s attention if you’re trying to get as many eyes as possible on that sweet, sweet social content.
One of the best ways to grab attention with your posts or tweets is to include an image. There’s heaps of research that says that having images in your posts helps them stand out to followers. Reports I found showed figures of anywhere from 35% to 150% improvement from just having an image in a post. Unfortunately, the details were surrounded with gross words like engagement and visual marketing assets and so I had to close the page before I started to hate myself too much.
So without hard stats to quote, we’ll call it a rule of thumb. The rule of thumb is that posts with images will grab more attention than those without, so it makes sense that when adding pages to a website, you should make sure that they have social media sharing images associated with them.
Adding sharing images
The process for declaring an image to be used in places like Facebook and Twitter is very simple, and at this point is familiar to many of us. You add a meta tag to the head of the page to point to the location of the image to use. When a link to the page is added to a post, the social network will fetch the page, look for the meta tag and then use the image you specified.
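For example (a generic sketch with a placeholder image URL), the Open Graph and Twitter variants of the tag look like this:
<meta property="og:image" content="https://example.com/sharing-image.png">
<meta name="twitter:image" content="https://example.com/sharing-image.png">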
There’s a good post on this over at CSS-Tricks if you need to bone up on the details of this and other similar meta tags for social media sharing.
This is all fine and well for content that has a very obvious choice of image to go along with it, but what if you don’t necessarily have an image? One approach is to use stock photography, but that’s not going to be right for every situation.
This was something we faced with 24 ways in 2017. We wanted to add images to the tweets we post each day announcing a new article. Some articles have images, but not all, and there tended not to be any consistency in terms of imagery from one article to the next. We always have an author photograph, but those don’t usually lend themselves directly to being the main ‘hero’ image for an article.
Putting his thinking cap on, Paul came up with a design for an image that used the author photo along with a quote extracted from the article.
One of the hand-made sharing images from 2017
Each day we would pick a quote from the article, and Paul would manually compose an image to be uploaded to the site. The results were great, but the whole process was a bit too labour intensive and relied on an individual (Paul) being available each day to do the work. I thought we could probably improve this.
Hatching a new plan
One initial idea I came up with was to script the image editor to dynamically build a new image by pulling content from our database. Sketch has plugins available to pull JSON content into a design, and our CMS can easily output JSON data, so that was one possibility.
The more I thought about this and how much I wish graphic design tools worked just a little bit more like CSS, the obvious solution hit me. We should just build it with CSS!
In fact, as the author name and image already exist in our CMS, and the visual styling is based on the design of the website, couldn’t this just be another page on the site generated by the CMS?
Breaking it down, I figured the steps needed would be something like:
Create the CSS to lay out a component that could be turned into an image
Add a new field to articles in the CMS to hold a handpicked quote
Build a new article template in the CMS to output the author name and quote dynamically for any article
… um … screenshot?
I thought I’d get cracking and see if I could figure out the final steps later.
Building the page
The first thing to tackle was the basic HTML and CSS to lay out the components for our image. That bit was really easy, as I just asked Paul to do it. Everyone should have a Paul.
Paul’s code uses a fixed dimension container in the HTML, set to 600 × 315px. This is to make it the correct aspect ratio for Facebook’s recommended image size. It’s useful to remember here that it doesn’t need to be responsive or robust, as the page only needs to lay out correctly for a screenshot and a fixed size in a known browser.
With the markup and CSS in place, I turned this into a new template. Our CMS can easily display content through any number of templates, so I created a version of the article template that was totally stripped down. It only included the author details and the quote, along with Paul’s markup.
I also added the quote as a new field on the article in the CMS, so each ‘image’ could be quickly and easily customised in the editing process.
I added a new field to the article template to capture the quote.
With very little effort, we quickly had a page to dynamically generate our ‘image’ right from the CMS. You can see any of them by adding /sharing onto the end of an article URL for any 2018 article.
Our automatically generated layout direct from the CMS
It soon became clear that the elusive Step 4 was going to be the tricky part. I can create a small page on the site that looks like an image, but how should I go about turning it into one? An obvious route is to screenshot the page by hand, but that’s going back to some of the manual steps I was trying to eliminate, and also opens up a possibility for errors to be made. But it did lead me to the thought… how could I automatically take a screenshot?
Enter Puppeteer
Puppeteer is a Node.js library that provides a nice API onto Headless Chrome. What is Headless Chrome, you ask? It’s just a version of the Chrome browser than runs from the command line without ever drawing anything to a user interface window. It loads pages, renders CSS, runs JavaScript, pretty much every normal thing that Chrome on the desktop does, but without a clicky user interface.
Headless Chrome can be used for all sorts of things such as running automated tests on front-end code after making changes, or… get this… rendering pages that can be used for screenshots. The actual process of writing some code to control Chrome and to take the screenshot is where Puppeteer comes in. Puppeteer puts a friendly layer in front of big old scary Chrome to enable us to interact with it using simple JavaScript code running in Node.
Using Puppeteer, I can write a small script that will repeatably turn a URL into an image. So simple is it to do this that it’s actually Puppeteer’s ‘hello world’ example.
First you install Puppeteer. It downloads a compatible headless browser (actually Chromium) as a dependency, so you don’t need to worry about installing that. At the command line:
npm i puppeteer
Then save a new file as example.js with this code:
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto('https://example.com');
  await page.screenshot({path: 'example.png'});
  await browser.close();
})();
and then run it using Node:
node example.js
This will output an image file example.png to disk, which contains a screenshot of, in this case, https://example.com. The logic of the code is reasonably easy to follow:
Launch a browser
Open up a new page
Goto a URL
Take a screenshot
Close the browser
The async function and await keywords are a way to have the script pause and wait for normally asynchronous code to return before proceeding. That’s useful with actions like loading a web page that might take some time to complete. They’re used with Promises, and the effect is to make asynchronous code behave as if it’s synchronous. You can read more about async and await at MDN if you’re interested.
That’s a good proof-of-concept using the basic Puppeteer example. I can take a screenshot of a URL! But what happens if I put the URL of my new special page in there?
Our content is up in the corner of the image with lots of empty space.
That’s not great. It’s okay, but not great. It looks like, by default, Puppeteer takes screenshots at a resolution of 800 × 600, so we need to find out how to adjust that. Fortunately, the docs aren’t the worst, and I was able to find the page.setViewport() method pretty easily.
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto('https://24ways.org/2018/clip-paths-know-no-bounds/sharing');
  await page.setViewport({
    width: 600,
    height: 315
  });
  await page.screenshot({path: 'example.png'});
  await browser.close();
})();
This worked! The screenshot is now 600 × 315 as expected. That’s exactly what we asked for. Trouble is, that’s a bit low res and it is nearly 2019 after all. While in those docs, I noticed the deviceScaleFactor option that can be passed to page.setViewport(). Setting that to 2 gives us an image of the same area of the screen, but with twice as many pixels.
await page.setViewport({
  width: 600,
  height: 315,
  deviceScaleFactor: 2
});
Perfect! We now have a programmatic way of turning a URL into an image.
Improving the script
Rather than having a script with a fixed URL in it that outputs an image called example.png, the next step is to make that a bit more dynamic. The aim here is to have a script that we can run with a URL as an argument and have it output an image for that one page. That way we can run it manually, or hook it into part of our site’s build process to automate the generation of the image.
Our goal is to call the script like this:
node shoot-sharing-image.js https://24ways.org/2018/clip-paths-know-no-bounds/
And I want the image to come out with the name clip-paths-know-no-bounds.png. To do that, I need to have my script look for command-line arguments, and then split the URL up to grab the slug from it.
// Get the URL and the slug segment from it
const url = process.argv[2];
const segments = url.split('/');
// Get the second-to-last segment (the slug)
const slug = segments[segments.length-2];
We can then use these variables later in the script, remembering to add sharing back onto the end of the URL to get our dedicated page.
(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto(url + 'sharing');
  await page.setViewport({
    width: 600,
    height: 315,
    deviceScaleFactor: 2
  });
  await page.screenshot({path: slug + '.png'});
  await browser.close();
})();
Once you’re generating the image with Node, there are all sorts of things you can do with it. An obvious step is to move it to the correct location within your site or project.
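For example, a minimal sketch using Node’s built-in fs module (the images/social destination is just a stand-in for wherever your project keeps its assets):

const fs = require('fs');

// Move the generated image into the project's assets folder
fs.renameSync(slug + '.png', 'images/social/' + slug + '.png');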
You can also run optimisations on the file. I’m using imagemin with pngquant to reduce the file size a little.
const imagemin = require('imagemin');
const imageminPngquant = require('imagemin-pngquant');

await imagemin([slug + '.png'], 'build', {
  plugins: [
    imageminPngquant({quality: '75-90'})
  ]
});
You can see the completed example as a gist.
Integrating it with your CMS
So we now have a command we can run to take a URL and generate a custom image for that URL. It’s in a format that can be called by any sort of build script, or triggered from a publishing hook in a CMS. Exactly how you do that is going to depend on the way your site is built and the technology stack you’re using, but it’s likely not too hard as long as you can run a command as part of the process.
For 24 ways this year, I’ve been running the script by hand once each article is ready. My script adds the file to a git repo and pushes to a deployment remote that is configured to automatically deploy static assets to our server. Along with our theme of making incremental improvements, next year I’ll look to automate this one step further.
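As a rough sketch, that step could be scripted with Node’s child_process module, assuming the working directory is already a git repository (the remote name deploy is a placeholder for however your deployment remote is configured):

const { execSync } = require('child_process');

// Commit the new image and push it to the deployment remote
execSync('git add ' + slug + '.png');
execSync('git commit -m "Add sharing image for ' + slug + '"');
execSync('git push deploy master');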
We may also look at having a few slightly different layouts to choose from, so that each day isn’t exactly the same as the last. Interestingly, we could even try some A/B tests to see if there’s any particular format of image or type of quote that does a better job of grabbing attention. There are lots of possibilities!
By using a bit of ingenuity, some custom CMS templates, and the very useful Puppeteer project, we’ve been able to reliably produce dynamic social media sharing images for all of our articles. In doing so, we’ve reduced the dependency on any individual for producing those images, and opened up a world of possibilities in how we use them.
I hope you’ll give it a try!",2018,Drew McLellan,drewmclellan,2018-12-24T00:00:00+00:00,https://24ways.org/2018/dynamic-social-sharing-images/,code
241,Jank-Free Image Loads,"There are a few fundamental problems with embedding images in pages of hypertext; perhaps chief among them is this: text is very light and loads rather fast; images are much heavier and arrive much later. Consequently, millions (billions?) of times a day, a hapless Web surfer will start reading some text on a page, and then —
[Video: a page of text loads, then an image pops in and shoves the text down.]
— oops! — an image pops in above it, pushing said text down the page, and our poor reader loses their place.
By default, partially-loaded pages have the user experience of a slippery fish, or spilled jar of jumping beans. For the rest of this article, I shall call that jarring, no-good jumpiness by its name: jank. And I’ll chart a path into a jank-free future – one in which it’s easy and natural to author <img> elements that load like this:
[Video: the same page loads with space reserved for the image, so the text never moves.]
Jank is a very old problem, and there is a very old solution to it: the width and height attributes on <img>. The idea is: if we stick an image’s dimensions right into the HTML, browsers can know those dimensions before the image loads, and reserve some space in the layout for it so that nothing gets bumped down the page when the image finally arrives.
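In markup, that looks like this (photo.jpg standing in for any image):

<img src="photo.jpg" width="800" height="600" alt="A photo">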
width
Specifies the intended width of the image in pixels. When given together with the height, this allows user agents to reserve screen space for the image before the image data has arrived over the network.
—The HTML 3.2 Specification, published on January 14 1997
Unfortunately for us, when width and height were first spec’d and implemented, layouts were largely fixed and images were usually only intended to render at their fixed, actual dimensions. When image sizing gets fluid, width and height get weird:
See the Pen fluid width + fixed height = distortion by Eric Portis (@eeeps) on CodePen.
width and height are too rigid for the responsive world. What we need, and have needed for a very long time, is a way to specify fixed aspect ratios, to pair with our fluid widths.
I have good news, bad news, and great news.
The good news is, there are ways to do this, now, that work in every browser. Responsible sites, and responsible developers, go through the effort to do them.
The bad news is that these techniques are all terrible, cumbersome hacks. They’re difficult to remember, difficult to understand, and they can interact with other pieces of CSS in unexpected ways.
So, the great news: there are two on-the-horizon web platform features that are trying to make no-jank, fixed-aspect-ratio, fluid-width images a natural part of the web platform.
aspect-ratio in CSS
The first proposed feature? An aspect-ratio property in CSS!
This would allow us to write CSS like this:
img {
  width: 100%;
}

.thumb {
  aspect-ratio: 1/1;
}

.hero {
  aspect-ratio: 16/9;
}
This’ll work wonders when we need to set aspect ratios for whole classes of images, which are all sized to fit within pre-defined layout slots, like the .thumb and .hero images, above.
Alas, the harder problem, in my experience, is not images with known-ahead-of-time aspect ratios. It’s images – possibly user generated images – that can have any aspect ratio. The really tricky problem is unknown-when-you’re-writing-your-CSS aspect ratios that can vary per-image. Using aspect-ratio to reserve space for images like this requires inline styles:
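<!-- an illustrative example; the ratio would come from each image's own dimensions -->
<img src="user-photo.jpg" style="aspect-ratio: 800/600">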
And inline styles give me the heebie-jeebies! As a web developer of a certain age, I have a tiny man in a blue beanie permanently embedded deep within my hindbrain, who cries out in agony whenever I author a style="" attribute. And you know what? The old man has a point! By sticking super-high-specificity inline styles in my content, I’m cutting off my (or anyone else’s) ability to change those aspect ratios, for whatever reason, later.
How might we specify aspect ratios at a lower level? How might we give browsers information about an image’s dimensions, without giving them explicit instructions about how to style it?
I’ll tell you: we could give browsers the intrinsic aspect ratio of the image in our HTML, rather than specifying an extrinsic aspect ratio!
A brief note on intrinsic and extrinsic sizing
What do I mean by “intrinsic” and “extrinsic?”
The intrinsic size of an image is, put simply, how big it’d be if you plopped it onto a page and applied no CSS to it whatsoever. An 800×600 image has an intrinsic width of 800px.
The extrinsic size of an image, then, is how large it ends up after CSS has been applied. Stick a width: 300px rule on that same 800×600 image, and its intrinsic size (accessible via the Image.naturalWidth property, in JavaScript) doesn’t change: its intrinsic size is still 800px. But this image now has an extrinsic size (accessible via Image.clientWidth) of 300px.
It surprised me to learn this year that height and width are interpreted as presentational hints and that they end up setting extrinsic dimensions (albeit ones that, unlike inline styles, have absolutely no specificity).
CSS aspect-ratio lets us avoid setting extrinsic heights and widths – and instead lets us give images (or anything else) an extrinsic aspect ratio, so that as soon as we set one dimension (possibly to a fluid width, like 100%!), the other dimension is set automatically in relation to it.
The last tool I’m going to talk about gets us out of the extrinsic sizing game all together — which, I think, is only appropriate for a feature that we’re going to be using in HTML.
intrinsicsize in HTML
The proposed intrinsicsize attribute will let you do something like this:
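<img intrinsicsize="800x600" src="image.jpg">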
That tells the browser, “hey, this image.jpg that I’m using here – I know you haven’t loaded it yet but I’m just going to let you know right away that it’s going to have an intrinsic size of 800×600.” This gives the browser enough information to reserve space on the layout for the image, and ensures that any and all extrinsic sizing instructions, specified in our CSS, will layer cleanly on top of this, the image’s intrinsic size.
You may ask (I did!): wait, what if my <img> references multiple resources, which all have different intrinsic sizes? Well, if you’re using srcset, intrinsicsize is a bit of a misnomer – what the attribute will do then is specify an intrinsic aspect ratio, with markup along these lines:
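<!-- a sketch; the file names are placeholders -->
<img intrinsicsize="3x2"
     srcset="small.jpg 400w, large.jpg 800w"
     sizes="75vw">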
In the future (and behind the “Experimental Web Platform Features” flag right now, in Chrome 71+), asking this image for its .naturalWidth would not return 3 – it will return whatever 75vw is, given the current viewport width. And Image.naturalHeight will return that width, divided by the intrinsic aspect ratio: 3/2.
Can’t wait
I seem to have gotten myself into the weeds a bit. Sizing on the web is complicated!
Don’t let all of these details bury the big takeaway here: sometime soon (🤞 2019‽ 🤞), we’ll be able to toss our terrible aspect-ratio hacks into the dustbin of history, get in the habit of setting aspect-ratios in CSS and/or intrinsicsizes in HTML, and surf a less-frustrating, more-performant, less-janky web. I can’t wait!",2018,Eric Portis,ericportis,2018-12-21T00:00:00+00:00,https://24ways.org/2018/jank-free-image-loads/,code
252,Turn Jekyll up to Eleventy,"Sometimes it pays not to overcomplicate things. While many of the sites we use on a daily basis require relational databases to manage their content and dynamic pages to respond to user input, for smaller, simpler sites, serving pre-rendered static HTML is usually a much cheaper — and more secure — option.
The JAMstack (JavaScript, reusable APIs, and prebuilt Markup) is a popular marketing term for this way of building websites, but in some ways it’s a return to how things were in the early days of the web, before developers started tinkering with CGI scripts or Personal HomePage. Indeed, my website has always served pre-rendered HTML; first with the aid of Movable Type and more recently using Jekyll, which Anna wrote about in 2013.
By combining three approachable languages — Markdown for content, YAML for data and Liquid for templating — the ergonomics of Jekyll found broad appeal, influencing the design of the many static site generators that followed. But Jekyll is not without its faults. Aside from notoriously slow build times, it’s also built using Ruby. While this is an elegant programming language, it is yet another ecosystem to understand and manage, and often alongside one we already use: JavaScript. For all my time using Jekyll, I would think to myself “this, but in Node”. Thankfully, one of Santa’s elves (Zach Leatherman) granted my Atwoodian wish and placed such a static site generator under my tree.
Introducing Eleventy
Eleventy is a more flexible alternative to Jekyll. Besides being written in Node, it’s less strict about how to organise files and, in addition to Liquid, supports other templating languages like EJS, Pug, Handlebars and Nunjucks. Best of all, its build times are significantly faster (with future optimisations promising further gains).
As content is saved using the familiar combination of YAML front matter and Markdown, transitioning from Jekyll to Eleventy may seem like a reasonable idea. Yet as I’ve discovered, there are a few gotchas. If you’ve been considering making the switch, here are a few tips and tricks to help you on your way.[1]
Note: Throughout this article, I’ll be converting Matt Cone’s Markdown Guide site as an example. If you want to follow along, start by cloning the git repository, and then change into the project directory:
git clone https://github.com/mattcone/markdown-guide.git
cd markdown-guide
Before you start
If you’ve used tools like Grunt, Gulp or Webpack, you’ll be familiar with Node.js but, if you’ve been exclusively using Jekyll to compile your assets as well as generate your HTML, now’s the time to install Node.js and set up your project to work with its package manager, NPM:
Install Node.js:
Mac: If you haven’t already, I recommend installing Homebrew, a package manager for the Mac. Then in the Terminal type brew install node.
Windows: Download the Windows installer from the Node.js website and follow the instructions.
Initialise NPM: Ensure you are in the directory of your project and then type npm init. This command will ask you a few questions before creating a file called package.json. Like a Gemfile in a Ruby project, this file contains a list of your project’s third-party dependencies.
If you’re managing your site with Git, make sure to add node_modules to your .gitignore file too. Unlike RubyGems, NPM stores its dependencies alongside your project files. This folder can get quite large, and as it contains binaries compiled to work with the host computer, it shouldn’t be version controlled. Eleventy will also honour the contents of this file, meaning anything you want Git to ignore, Eleventy will ignore too.
Installing Eleventy
With Node.js installed and your project setup to work with NPM, we can now install Eleventy as a dependency:
npm install --save-dev @11ty/eleventy
If you open package.json you should see the following:
…
""devDependencies"": {
""@11ty/eleventy"": ""^0.6.0""
}
…
We can now run Eleventy from the command line using NPM’s npx command. For example, to convert the README.md file to HTML, we can run the following:
npx eleventy --input=README.md --formats=md
This command will generate a rendered HTML file at _site/README/index.html. Like Jekyll, Eleventy shares the same default name for its output directory (_site), a pattern we will see repeatedly during the transition.
Configuration
Whereas Jekyll uses the declarative YAML syntax for its configuration file, Eleventy uses JavaScript. This allows its options to be scripted, enabling some powerful possibilities as we’ll see later on.
We’ll start by creating our configuration file (.eleventy.js), copying the relevant settings in _config.yml over to their equivalent options:
module.exports = function(eleventyConfig) {
  return {
    dir: {
      input: "./",      // Equivalent to Jekyll's source property
      output: "./_site" // Equivalent to Jekyll's destination property
    }
  };
};
A few other things to bear in mind:
Whereas Jekyll allows you to list folders and files to ignore under its exclude property, Eleventy looks for these values inside a file called .eleventyignore (in addition to .gitignore).
By default, Eleventy uses markdown-it to parse Markdown. If your content uses advanced syntax features (such as abbreviations, definition lists and footnotes), you’ll need to pass Eleventy an instance of this (or another) Markdown library configured with the relevant options and plugins, as in the sketch below.
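As a minimal sketch, assuming the markdown-it and markdown-it-footnote packages are installed from NPM:

const markdownIt = require('markdown-it');
const markdownItFootnote = require('markdown-it-footnote');

module.exports = function(eleventyConfig) {
  // Swap in our own markdown-it instance, with footnotes enabled
  const md = markdownIt({ html: true }).use(markdownItFootnote);
  eleventyConfig.setLibrary('md', md);
};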
Layouts
One area Eleventy currently lacks flexibility is the location of layouts, which must reside within the _includes directory (see this issue on GitHub).
Wanting to keep our layouts together, we’ll move them from _layouts to _includes/layouts, and then update references to incorporate the layouts sub-folder. We could update the layout: frontmatter property in each of our content files, but another option is to create aliases in Eleventy’s config:
module.exports = function(eleventyConfig) {
  // Aliases are in relation to the _includes folder
  eleventyConfig.addLayoutAlias('about', 'layouts/about.html');
  eleventyConfig.addLayoutAlias('book', 'layouts/book.html');
  eleventyConfig.addLayoutAlias('default', 'layouts/default.html');

  return {
    dir: {
      input: "./",
      output: "./_site"
    }
  };
};
Determining which template language to use
Eleventy will transform Markdown (.md) files using Liquid by default, but we’ll need to tell Eleventy how to process other files that are using Liquid templates. There are a few ways to achieve this, but the easiest is to use file extensions. In our case, we have some files in our api folder that we want to process with Liquid and output as JSON. By appending the .liquid file extension (i.e. basic-syntax.json becomes basic-syntax.json.liquid), Eleventy will know what to do.
Variables
On the surface, Jekyll and Eleventy appear broadly similar, but as each models its content and data a little differently, some template variables will need updating.
Site variables
Alongside build settings, Jekyll lets you store common values in its configuration file, which can be accessed in our templates via the site.* namespace. For example, in our Markdown Guide, we have the following values:
title: ""Markdown Guide""
url: https://www.markdownguide.org
baseurl: """"
repo: http://github.com/mattcone/markdown-guide
comments: false
author:
name: ""Matt Cone""
og_locale: ""en_US""
Eleventy’s configuration uses JavaScript which is not suited to storing values like this. However, like Jekyll, we can use data files to store common values. If we add our site-wide values to a JSON file inside a folder called _data and name this file site.json, we can keep the site.* namespace and leave our variables unchanged.
{
  "title": "Markdown Guide",
  "url": "https://www.markdownguide.org",
  "baseurl": "",
  "repo": "http://github.com/mattcone/markdown-guide",
  "comments": false,
  "author": {
    "name": "Matt Cone"
  },
  "og_locale": "en_US"
}
Page variables
The table below shows a mapping of common page variables. As a rule, frontmatter properties are accessed directly, whereas derived metadata values (things like URLs, dates etc.) get prefixed with the page.* namespace:
Jekyll → Eleventy
page.url → page.url
page.date → page.date
page.path → page.inputPath
page.id → page.outputPath
page.name → page.fileSlug
page.content → content
page.title → title
page.foobar → foobar
When iterating through pages, frontmatter values are available via the data object while content is available via templateContent:
Jekyll → Eleventy
item.url → item.url
item.date → item.date
item.path → item.inputPath
item.name → item.fileSlug
item.id → item.outputPath
item.content → item.templateContent
item.title → item.data.title
item.foobar → item.data.foobar
Ideally the discrepancy between page and item variables will change in a future version (see this GitHub issue), making it easier to understand the way Eleventy structures its data.
Pagination variables
Whereas Jekyll’s pagination feature is limited to paginating posts on one page, Eleventy allows you to paginate any collection of documents or data. Given this disparity, the changes to pagination are more significant, but this table shows a mapping of equivalent variables:
Jekyll → Eleventy
paginator.page → pagination.pageNumber
paginator.per_page → pagination.size
paginator.posts → pagination.items
paginator.previous_page_path → pagination.previousPageHref
paginator.next_page_path → pagination.nextPageHref
Filters
Although Jekyll uses Liquid, it provides a set of filters that are not part of the core Liquid library. There are quite a few — more than can be covered by this article — but you can replicate them by using Eleventy’s addFilter configuration option. Let’s convert two used by our Markdown Guide: jsonify and where.
The jsonify filter outputs an object or string as valid JSON. As JavaScript provides a native JSON method, we can use this in our replacement filter. addFilter takes two arguments; the first is the name of the filter and the second is the function to which we will pass the content we want to transform:
// {{ variable | jsonify }}
eleventyConfig.addFilter('jsonify', function (variable) {
  return JSON.stringify(variable);
});
Jekyll’s where filter is a little more complicated in that it takes two additional arguments: the key to look for, and the value it should match:
{{ site.members | where: "graduation_year","2014" }}
To account for this, instead of passing one value to the second argument of addFilter, we can instead pass three: the array we want to examine, the key we want to look for and the value it should match:
// {{ array | where: key,value }}
eleventyConfig.addFilter('where', function (array, key, value) {
  return array.filter(item => {
    const keys = key.split('.');
    const reducedKey = keys.reduce((object, key) => {
      return object[key];
    }, item);
    return (reducedKey === value ? item : false);
  });
});
There’s quite a bit going on within this filter, but I’ll try to explain. Essentially we’re examining each item in our array, reducing key (passed as a string using dot notation) so that it can be parsed correctly (as an object reference) before comparing its value to value. If it matches, item remains in the returned array, else it’s removed. Phew!
Includes
As with filters, Jekyll provides a set of tags that aren’t strictly part of Liquid either. This includes one of the most useful, the include tag. LiquidJS, the library Eleventy uses, does provide an include tag, but one using the slightly different syntax defined by Shopify. If you’re not passing variables to your includes, everything should work without modification. Otherwise, note that whereas with Jekyll you would do this:
{% include include.html value="key" %}
{{ include.value }}
in Eleventy, you would do this:
{% include ""include.html"", value: ""key"" %}
{{ value }}
A downside of Shopify’s syntax is that variable assignments are no longer scoped to the include and can therefore leak; keep this in mind when converting your templates as you may need to make further adjustments.
Tweaking Liquid
You may have noticed in the above example that LiquidJS expects the names of included files to be quoted (else it treats them as variables). We could update our templates to add quotes around file names (the recommended approach), but we could also disable this behaviour by setting LiquidJS’s dynamicPartials option to false. Additionally, Eleventy doesn’t support the include_relative tag, meaning you can’t include files relative to the current document. However, LiquidJS does let us define multiple paths to look for included files via its root option.
Thankfully, Eleventy allows us to pass options to LiquidJS:
eleventyConfig.setLiquidOptions({
  dynamicPartials: false,
  root: [
    '_includes',
    '.'
  ]
});
Collections
Jekyll’s collections feature lets authors create arbitrary collections of documents beyond pages and posts. Eleventy provides a similar feature, but in a far more powerful way.
Collections in Jekyll
In Jekyll, creating collections requires you to add the name of your collections to _config.yml and create corresponding folders in your project. Our Markdown Guide has two collections:
collections:
- basic-syntax
- extended-syntax
These correspond to the folders _basic-syntax and _extended-syntax whose content we can iterate over like so:
{% for syntax in site.extended-syntax %}
{{ syntax.title }}
{% endfor %}
Collections in Eleventy
There are two ways you can set up collections in 11ty. The first, and most straightforward, is to use the tag property in content files:
---
title: Strikethrough
syntax-id: strikethrough
syntax-summary: "~~The world is flat.~~"
tag: extended-syntax
---
We can then iterate over tagged content like this:
{% for syntax in collections.extended-syntax %}
{{ syntax.data.title }}
{% endfor %}
Eleventy also allows us to configure collections programmatically. For example, instead of using tags, we can search for files using a glob pattern (a way of specifying a set of filenames to search for using wildcard characters):
eleventyConfig.addCollection('basic-syntax', collection => {
  return collection.getFilteredByGlob('_basic-syntax/*.md');
});

eleventyConfig.addCollection('extended-syntax', collection => {
  return collection.getFilteredByGlob('_extended-syntax/*.md');
});
We can extend this further. For example, say we wanted to sort a collection by the display_order property in our document’s frontmatter. We could take the results of collection.getFilteredByGlob and then use JavaScript’s sort method to sort the result:
eleventyConfig.addCollection('example', collection => {
  return collection.getFilteredByGlob('_examples/*.md').sort((a, b) => {
    return a.data.display_order - b.data.display_order;
  });
});
Hopefully, this gives you just a hint of what’s possible using this approach.
Using directory data to manage defaults
By default, Eleventy will maintain the structure of your content files when generating your site. In our case, that means /_basic-syntax/lists.md is generated as /_basic-syntax/lists/index.html. Like Jekyll, we can change where files are saved using the permalink property. For example, if we want the URL for this page to be /basic-syntax/lists.html we can add the following:
---
title: Lists
syntax-id: lists
api: ""no""
permalink: /basic-syntax/lists.html
---
Again, this is probably not something we want to manage on a file-by-file basis but again, Eleventy has features that can help: directory data and permalink variables.
For example, to achieve the above for all content stored in the _basic-syntax folder, we can create a JSON file that shares the name of that folder and sits inside it, i.e. _basic-syntax/_basic-syntax.json and set our default values. For permalinks, we can use Liquid templating to construct our desired path:
{
  "layout": "syntax",
  "tag": "basic-syntax",
  "permalink": "basic-syntax/{{ title | slug }}.html"
}
However, Markdown Guide doesn’t publish syntax examples at individual permanent URLs, it merely uses content files to store data. So let’s change things around a little. No longer tied to Jekyll’s rules about where collection folders should be saved and how they should be labelled, we’ll move them into a folder called _content:
markdown-guide
└── _content
├── basic-syntax
├── extended-syntax
├── getting-started
└── _content.json
We will also add a directory data file (_content.json) inside this folder. As directory data is applied recursively, setting permalink to false will mean all content in this folder and its children will no longer be published:
{
  "permalink": false
}
Static files
Eleventy only transforms files whose template language it’s familiar with. But often we may have static assets that don’t need converting, but do need copying to the destination directory. For this, we can use pass-through file copy. In our configuration file, we tell Eleventy what folders/files to copy with the addPassthroughCopy option. Then in the return statement, we enable this feature by setting passthroughFileCopy to true:
module.exports = function(eleventyConfig) {
  …
  // Copy the `assets` directory to the compiled site folder
  eleventyConfig.addPassthroughCopy('assets');

  return {
    dir: {
      input: "./",
      output: "./_site"
    },
    passthroughFileCopy: true
  };
};
Final considerations
Assets
Unlike Jekyll, Eleventy provides no support for asset compilation or bundling scripts; we have plenty of choices in that department already. If you’ve been using Jekyll to compile Sass files into CSS, or CoffeeScript into JavaScript, you will need to research alternative options, which are sadly beyond the scope of this article.
Publishing to GitHub Pages
One of the benefits of Jekyll is its deep integration with GitHub Pages. To publish an Eleventy generated site — or any site not built with Jekyll — to GitHub Pages can be quite involved, but typically involves copying the generated site to the gh-pages branch or including that branch as a submodule. Alternatively, you could use a continuous integration service like Travis or CircleCI and push the generated site to your web server. It’s enough to make your head spin! Perhaps for this reason, a number of specialised static site hosts have emerged such as Netlify and Google Firebase. But remember; you can publish a static site almost anywhere!
Going one louder
If you’ve been considering making the switch, I hope this brief overview has been helpful. But it also serves as a reminder why it can be prudent to avoid jumping aboard bandwagons.
While it’s fun to try new software and emerging technologies, doing so can require a lot of work and compromise. For all of Eleventy’s appeal, it’s only a year old so has little in the way of an ecosystem of plugins or themes. It also only has one maintainer. Jekyll on the other hand is a mature project with a large community of maintainers and contributors supporting it.
I moved my site to Eleventy because the slowness and inflexibility of Jekyll was preventing me from doing the things I wanted to do. But I also had time to invest in the transition. After reading this guide, and considering the specific requirements of your project, you may decide to stick with Jekyll, especially if the output will essentially stay the same. And that’s perfectly fine!
But these go to 11.
[1] Information provided is correct as of Eleventy v0.6.0 and Jekyll v3.8.5",2018,Paul Lloyd,paulrobertlloyd,2018-12-11T00:00:00+00:00,https://24ways.org/2018/turn-jekyll-up-to-eleventy/,content
263,Securing Your Site like It’s 1999,"Running a website in the early years of the web was a scary business. The web was an evolving medium, and people were finding new uses for it almost every day. From book stores to online auctions, the web was an expanding universe of new possibilities.
As the web evolved, so too did the knowledge of its inherent security vulnerabilities. Clever tricks that were played on one site could be copied on literally hundreds of other sites. It was a normal sight to log in to a website to find nothing working because someone had breached its defences and deleted its database. Lessons in web security in those days were hard-earned.
What follows are examples of critical mistakes that brought down several early websites, and how you can help protect yourself and your team from the same fate.
Bad input validation: Trusting anything the user sends you
Our story begins in the most unlikely place: Animal Crossing. Animal Crossing was a 2001 video game set in a quaint town, filled with happy-go-lucky inhabitants that co-exist peacefully. Like most video games, Animal Crossing was the subject of many fan communities on the early web.
One such unofficial web forum was dedicated to players discussing their adventures in Animal Crossing. Players could trade secrets, ask for help, and share pictures of their virtual homes. This might sound like a model community to you, but you would be wrong.
One day, a player discovered a hidden field in the forum’s user profile form. Normally, this page allows users to change their name, their password, or their profile photo. This person discovered that the hidden field contained their unique user ID, which identifies them when the forum’s backend saves profile changes to its database. They discovered that by modifying the form to change the user ID, they could make changes to any other player’s profile.
Needless to say, this idyllic online community descended into chaos. Users changed each other’s passwords, deleted each other’s messages, and attacked each other under the cover of complete anonymity. What happened?
There aren’t any official rules for developing software on the web. But if there were, my golden rule would be:
Never trust user input. Ever.
Always ask yourself how users will send you data that isn’t what it seems to be. If the nicest community of gamers playing the happiest game on earth can turn on each other, nowhere on the web is safe.
Make sure you validate user input to make sure it’s of the correct type (e.g. string, number, JSON string) and that it’s the length that you were expecting. Don’t forget that user input doesn’t become safe once it is stored in your database; any data that originates from outside your network can still be dangerous and must be escaped before it is inserted into HTML.
Make sure to check a user’s actions against what they are allowed to do. Create a clear access control policy that defines what actions a user may take, and whose data they are allowed to access. For example, a newly-registered user should not be allowed to change the user profile of a web forum’s owner.
Finally, never rely on client-side validation. Validating user input in the browser is a convenience to the user, not a security measure. Always assume the user has full control over any data sent from the browser and make sure you validate any data sent to your backend from the outside world.
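As an illustration, a backend check for the login form might look something like this sketch (the length limits here are arbitrary choices for the example):

// Validate the raw request body before doing anything else with it
function validateLoginInput(body) {
  const { username, password } = body;
  if (typeof username !== 'string' || typeof password !== 'string') {
    throw new Error('Unexpected input type');
  }
  if (username.length < 1 || username.length > 64 ||
      password.length < 1 || password.length > 1024) {
    throw new Error('Unexpected input length');
  }
  return { username, password };
}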
SQL injection: Allowing the user to run their own database queries
A long time ago, my favourite website was a web forum dedicated to the Final Fantasy video game series. Like the users of the Animal Crossing forum, I’d while away many hours arguing with other people on the internet about my favourite characters, my favourite stories, and the greatest controversies of the day.
One day, I noticed people were acting strangely. Users were being uncharacteristically nasty and posting in private areas of the forum they wouldn’t normally have access to. Then messages started disappearing, and user accounts for well-respected people were banned.
It turns out someone had discovered a way of logging in to any other user account, using a secret password that allowed them to do literally anything they wanted. What was this password that granted untold power to those who wielded it?
' OR '1'='1
SQL is a computer language that is used to query databases. When you fill out a login form, just like the one above, your username and your password are usually inserted into an SQL query like this:
SELECT COUNT(*)
FROM USERS
WHERE USERNAME='Alice'
AND PASSWORD='hunter2'
This query selects users from the database that match the username Alice and the password hunter2. If there is at least one matching user record, the user will be granted access. Let’s see what happens when we use our magic password instead!
SELECT COUNT(*)
FROM USERS
WHERE USERNAME='Admin'
AND PASSWORD='' OR '1'='1'
Does the password look like part of the query to you? That’s because it is! This password is a deliberate attempt to inject our own SQL into the query, hence the term SQL injection. The query is now looking for users matching the username Admin, with a password that is blank, or 1=1. In an SQL query, 1=1 is always true, which makes this query select every single record in the database. As long as the forum software is checking for at least one matching user, it will grant the person logging in access. This password will work for any user registered on the forum!
So how can you protect yourself from SQL injection?
Never build SQL queries by concatenating strings. Instead, use parameterised query tools. PHP offers prepared statements, and Node.js has the knex package. Alternatively, you can use an ORM tool, such as Propel or Sequelize.
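For example, a minimal sketch with knex (the table and column names are just for illustration):

const knex = require('knex')({ client: 'pg', connection: process.env.DATABASE_URL });

// The username and password values are bound as parameters;
// they are never concatenated into the SQL string itself
async function countMatchingUsers(username, passwordHash) {
  const rows = await knex('users')
    .where({ username: username, password_hash: passwordHash })
    .count('id as count');
  return Number(rows[0].count);
}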
Expert help in the form of language features or software tools is a key ally for securing your code. Get all the help you can!
Cross site request forgery: Getting other users to do your dirty work for you
Do you remember Netflix? Not the Netflix we have now, the Netflix that used to rent you DVDs by mailing them to you. My next story is about how someone managed to convince Netflix users to send him their DVDs - free of charge.
Have you ever clicked on a hyperlink, only to find something that you weren’t expecting? If you were lucky, you might have just gotten Rickrolled. If you were unlucky…
Let’s just say there are older and fouler things than Rick Astley in the dark places of the web.
What if you could convince people to visit a page you controlled? And what if those people were Netflix users, and they were logged in? In 2006, Dave Ferguson did just that. He created a harmless-looking page with an image on it:
Did you notice the source URL of the image? It’s deliberately crafted to add a particular DVD to your queue. Sprinkle in a few more requests to change the user’s name and shipping address, and you could ship yourself DVDs completely free of charge!
This attack is possible when websites unconditionally trust a user’s session cookies without checking where HTTP requests come from.
The first check you can make is to verify that a request’s origin and referer headers match the location of the website. These headers can’t be set programmatically by JavaScript running in the page.
Another check you can use is to add CSRF tokens to your web forms, to verify requests have come from an actual form on your website. Tokens are long, unpredictable, unique strings that are generated by your server and inserted into web forms. When users complete a form, the form data sent to the server can be checked for a recently generated token. This is an effective deterrent of CSRF attacks because CSRF tokens aren’t stored in cookies.
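A minimal sketch of generating and checking such a token with Node’s crypto module (how you store the token between requests depends on your framework):

const crypto = require('crypto');

// Generate a long, unpredictable token to embed in a form
function generateCsrfToken() {
  return crypto.randomBytes(32).toString('hex');
}

// Compare the submitted token against the stored one in constant time
function verifyCsrfToken(submitted, stored) {
  if (typeof submitted !== 'string' || submitted.length !== stored.length) {
    return false;
  }
  return crypto.timingSafeEqual(Buffer.from(submitted), Buffer.from(stored));
}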
You can also set SameSite=Strict when setting cookies with the Set-Cookie HTTP header. This communicates to browsers that cookies are not to be sent with cross-site requests. This is a relatively new feature, though it is well supported in evergreen browsers.
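The header itself looks like this (the cookie name and value are placeholders):

Set-Cookie: session=abc123; SameSite=Strict; Secure; HttpOnly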
Cross site scripting: Someone else’s code running on your website
In 2005, Samy Kamkar became famous for having lots of friends. Lots and lots of friends.
Samy enjoyed using MySpace which, at the time, was the world’s largest social network. Social networks at that time were more limited than today. For instance, MySpace let you upload photos to your photo gallery, but capped the limit at twelve. Twelve photos. At least you didn’t have to wade through photos of avocado toast back then…
Samy discovered that MySpace also locked down the kinds of content that you could post on your MySpace page. He discovered he could inject <div> and <embed> tags into his headline, but <script> was filtered. MySpace wasn’t about to let someone else run their own code on MySpace.
Intrigued, Samy set about finding out exactly what he could do with <div> and <embed> tags. He found that you could add style properties to <div> tags to style them with CSS.
This code only worked in Internet Explorer and in some versions of Safari, but that was plenty of people to befriend. However, MySpace was prepared for this: they also filtered the word javascript from <div>s.
Samy discovered that by inserting a line break into his code, MySpace would not filter out the word javascript. The browser would continue to run the code just fine! Samy had now broken past MySpace’s first line of defence and was able to start running code on his profile page. Now he started looking at what he could do with that code.
alert(document.body.innerHTML)
Samy wondered if he could inspect the page’s source to find the details of other MySpace users to befriend. To do this, you would normally use document.body.innerHTML, but MySpace had filtered this too.
alert(eval('document.body.inne' + 'rHTML'))
This isn’t a problem if you build up JavaScript code inside a string and execute it using the eval() function. This trick also worked with XMLHttpRequest.onReadyStateChange, which allowed Samy to send friend requests to the MySpace API and install the JavaScript code on his new friends’ pages.
One final obstacle stood in his way. The same origin policy is a security mechanism that prevents scripts hosted on one domain interacting with sites hosted on another domain.
if (location.hostname == 'profile.myspace.com') {
  document.location = 'http://www.myspace.com'
    + location.pathname + location.search
}
Samy discovered that only the http://www.myspace.com domain would accept his API requests, and requests from http://profile.myspace.com were being blocked by the browser’s same-origin policy. By redirecting the browser to http://www.myspace.com, he discovered that he could load profile pages and successfully make requests to MySpace’s API. Samy installed this code on his profile page, and he waited.
Over the course of the next day, over a million people unwittingly installed Samy’s code into their MySpace profile pages and invited their friends. The load of friend requests on MySpace was so large that the site buckled and shut down. It took them two hours to remove Samy’s code and patch the security holes he exploited. Samy was raided by the United States Secret Service and sentenced to do 90 days of community service.
This is the power of installing a little bit of JavaScript on someone else’s website. It is called cross site scripting, and its effects can be devastating. It is suspected that cross-site scripting was to blame for the 2018 British Airways breach that leaked the credit card details of 380,000 people.
So how can you help protect yourself from cross-site scripting?
Always sanitise user input when it comes in, using a library such as sanitize-html. Open source tools like this benefit from hundreds of hours of work from dozens of experienced contributors. Don’t be tempted to roll your own protection. MySpace was prepared, but they were not prepared enough. It makes no sense to turn this kind of help down.
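Using sanitize-html can be as simple as this sketch (shown with its default allow-list; options let you tune what’s permitted):

const sanitizeHtml = require('sanitize-html');

const userSubmittedHtml = '<b>Hello</b><img src=x onerror="alert(1)">';

// Tags and attributes not on the allow-list (script tags,
// event handlers and so on) are stripped from the input
const clean = sanitizeHtml(userSubmittedHtml);
// clean is now just '<b>Hello</b>'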
You can also use an auto-escaping templating language to make sure nobody else’s HTML can get into your pages. Both Angular and React will do this for you, and they are extremely convenient to use.
You should also implement a content security policy to restrict the domains that content like scripts and stylesheets can be loaded from. Loading content from sites not under your control is a significant security risk, and you should use a CSP to lock this down to only the sources you trust. CSP can also block the use of the eval() function.
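A policy that locks scripts down to your own domain and one trusted host might look like this (the CDN hostname is a placeholder):

Content-Security-Policy: script-src 'self' https://cdn.example.com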
For content not under your control, consider setting up sub-resource integrity protection. This allows you to add hashes to stylesheets and scripts you include on your website. Hashes are like fingerprints for digital files; if the content changes, so does the fingerprint. Adding hashes will allow your browser to keep your site safe if the content changes without you knowing.
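Adding a hash looks like this (the integrity value is a placeholder; generate the real one from the file’s contents):

<script src="https://cdn.example.com/library.js"
        integrity="sha384-[hash-of-library.js]"
        crossorigin="anonymous"></script>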
npm audit: Protecting yourself from code you don’t own
JavaScript and npm run the modern web. Together, they make it easy to take advantage of the world’s largest public registry of open source software. How do you protect yourself from code written by someone you’ve never met? Enter npm audit.
npm audit reviews the security of your website’s dependency tree. You can start using it by upgrading to the latest version of npm:
npm install npm -g
npm audit
When you run npm audit, npm submits a description of your dependencies to the Registry, which returns a report of known vulnerabilities for the packages you have installed.
If your website has a known cross-site scripting vulnerability, npm audit will tell you about it. What’s more, if the vulnerability has been patched, running npm audit fix will automatically install the patched package for you!
Securing your site like it’s 2019
The truth is that since the early days of the web, the stakes of a security breach have become much, much higher. The web is so much more than fandom and mailing DVDs - online banking is now mainstream, social media and dating websites store intimate information about our personal lives, and we are even inviting the internet into our homes.
However, we have powerful new allies helping us stay safe. There are more resources than ever before to teach us how to write secure code. Tools like Angular and React are designed with security features baked-in from the start. We have a new generation of security tools like npm audit to watch over our dependencies.
As we roll over into 2019, let’s take the opportunity to reflect on the security of the code we write and be grateful for everything we’ve learned in the last twenty years.",2018,Katie Fenn,katiefenn,2018-12-01T00:00:00+00:00,https://24ways.org/2018/securing-your-site-like-its-1999/,code
261,Surviving—and Thriving—as a Remote Worker,"Remote work is hot right now. Many people even say that remote work is the future. Why should a company limit itself to hiring from a specific geographic location when there’s an entire world of talent out there?
I’ve been working remotely, full-time, for five and a half years. I’ve reached the point where I can’t even fathom working in an office. The idea of having to wake up at a specific time and commute into an office, work for eight hours, and then commute home, feels weirdly anachronistic. I’ve grown attached to my current level of freedom and flexibility.
However, it took me a lot of trial and error to reach success as a remote worker — and sometimes even now, I slip up. Working remotely requires a great amount of discipline, independence, and communication. It can feel isolating, especially if you lean towards the more extroverted side of the social spectrum. Remote working isn’t for everyone, but most people, with enough effort, can make it work — or even thrive. Here’s what I’ve learned in over five years of working remotely.
Experiment with your environment
As a remote worker, you have almost unprecedented control of your environment. You can often control the specific desk and chair you use, how you accessorize your home office space — whether that’s a dedicated office, a corner of your bedroom, or your kitchen table. (Ideally, not your couch… but I’ve been there.) Hate fluorescent lights? Change your lightbulbs. Cover your work area in potted plants. Put up blackout curtains and work in the dark like a vampire. Whatever makes you feel most comfortable and productive, and doesn’t completely destroy your eyesight.
Working remotely doesn’t always mean working from home. If you don’t have a specific reason you need to work from home (like specialized equipment), try working from other environments (which is especially helpful if you have roommates, or children). Cafes are the quintessential remote worker hotspot, but don’t just limit yourself to your favorite local haunt. More cities worldwide are embracing co-working spaces, where you can rent either a roaming spot or a dedicated desk. If you’re a social person, this is a great way to build community in your work environment. Most have phone rooms, so you can still take calls.
Co-working spaces can be expensive, and not everyone has either the extra income, or work-provided stipend, to work from one. Local libraries are also a great work location. They’re quiet, usually have free wi-fi, and you have the added bonus of being able to check out books after work instead of, ahem, spending too much money on Kindle books. (I know most libraries let you check out ebooks, but reader, I am an impulsive and impatient person. When I want a book now, I mean now.)
Just be polite — make sure your headphones don’t leak, and don’t work from a library if you have a day full of calls.
Remember, too, that you don’t have to stay in the same spot all day. It’s okay to go out for lunch and then resume work from a different location. If you find yourself getting restless, take a walk. Wash some dishes while you mull through a problem. Don’t force yourself to sit at your desk for eight hours if that doesn’t work for you.
Set boundaries
If you’re a workaholic, working remotely can be a challenge. It’s incredibly easy to just… work. All the time. My work computer is almost always with me. If I remember at 11pm that I wanted to do something, there’s nothing but my own willpower keeping me from opening up my laptop and working until 2am. Some people are naturally disciplined. Some have discipline instilled in them as children. And then some, like me, are undisciplined disasters that realize as adults that wow, I guess it’s time to figure this out, eh?
Learning how to set boundaries is one of the most important lessons I’ve learned working remotely. (And honestly, it’s something I still struggle with).
For a long time, I had a bad habit of waking up, checking my phone for new Slack messages, seeing something I need to react to, and then rolling over to my couch with my computer. Suddenly, it’s noon, I’m unwashed, unfed, starting to get a headache, and wondering why suddenly I hate all of my coworkers. Even when I finally tear myself from my computer to shower, get dressed, and eat, the damage is done. The rest of my day is pretty much shot.
I recently had a conversation with a coworker, in which she remarked that she used to fill her empty time with work. Wake up? Scroll through Slack and email before getting out of bed. Waiting in line for lunch? Check work. Hanging out on her couch in the evening? You get the drift. She was only able to break the habit after taking a three month sabbatical, where she had no contact with work the entire time.
I too had just returned from my own sabbatical. I took her advice, and no longer have work Slack on my phone, unless I need it for an event. After the event, I delete it. I also find it too easy to fill empty time with work. Now, I might wake up and procrastinate by scrolling through other apps, but I can’t get sucked into work before I’m even dressed. I’ve gotten pretty good at forbidding myself from working until I’m ready, but building any new habit requires intentionality.
Something else I experimented with for a while was creating a separate account on my computer for social tasks, so if I wanted to hang out on my computer in the evening, I wouldn’t get distracted by work. It worked exceptionally well. The only problems I encountered were technical, like app licensing and some of my work proxy configurations. I’ve heard other coworkers have figured out ways to work through these technical issues, so I’m hoping to give it another try soon.
You might have noticed that a lot of these ideas are just hacks for making myself not work outside of my designated work times. It’s true! If you’re a more disciplined person, you might not need any of these coping mechanisms. If you’re struggling, finding ways to subvert your own bad habits can be the difference between thriving and burning out.
Create intentional transition time
I know it’s a stereotype that people who work from home stay in their pajamas all day, but… sometimes, it’s very easy to do. I’ve found that in order to reach peak focus, I need to create intentional transition time.
The most obvious step is changing into different clothing than I woke up in. Ideally, this means getting dressed in real human clothing. I might decide that it’s cold and gross out and I want to work in joggers and a hoody all day, but first, I need to change out of my pajamas, put on a bra, and then succumb to the lure of comfort.
I’ve found it helpful to take similar steps at the end of my day. If I’ve spent the day working from home, I try to end my day with something that occupies my body, while letting my mind unwind. Often, this is doing some light cleaning or dinner prep. If I try to go straight into another mentally heavy task without allowing myself this transition time, I find it hard to context switch.
This is another reason working from outside your home is advantageous. Commutes, even if it’s a ten minute walk down the road, are great transition time. Lunch is a great transition time. You can decompress between tasks by going out for lunch, or cooking and eating lunch in your kitchen — not next to your computer.
Embrace async
If you’re used to working in an office, you’ve probably gotten pretty used to being able to pop over to a colleague’s desk if you need to ask a question. They’re pretty much forced to engage with you at that point. When you’re working remotely, your coworkers might not be in the same timezone as you. They might take an hour to finish up a task before responding to you, or you might not get an answer for your entire day because dangit Gary’s in Australia and it’s 3am there right now.
For many remote workers, that’s part of the package. When you’re not co-located, you have to build up some patience and tolerance around waiting. You need to intentionally plan extra time into your schedule for waiting on answers.
Asynchronous communication is great. Not everyone can be present for every meeting or office conversation — and the same goes for working remotely. However, when you’re remote, you can read through your intranet messages later or scroll back a couple hours in Slack. My company has a bunch of internal blogs (“p2s”) where we record major decisions and hold asynchronous conversations. I feel like even if I missed a meeting, or something big happened while I was asleep, I can catch up later. We have a phrase — “p2 or it didn’t happen.”
Working remotely has made me a better communicator largely because I’ve gotten into the habit of making written updates. I’ve also trained myself to wait before responding, which allows me to distance myself from what could potentially be an emotional reaction. (On the internet, no one can see you making that face.) Having the added space that comes from not being in the same physical location with somebody else creates an opportunity to rein myself in and take the time to craft an appropriate response, without having the pressure of needing to reply right meow. Lean into it!
(That said, if you’re stuck, sometimes the best course of action is to hop on a video call with someone and hash out the details. Use the tools most appropriate for the problem. They invented Zoom for a reason.)
Seek out social opportunities
Even introverts can feel lonely or isolated. When you work remotely, there isn’t a built-in community you’re surrounded by every day. You have to intentionally seek out social opportunities that an office would normally provide.
I have a couple private Slack channels where I can joke around with work friends. Having that kind of safe space to socialize helps me feel less alone. (And, if the channels get too noisy, I can mute them for a couple hours.)
Every now and then, I’ll also hop on a video call with some work friends and just hang out for a little while. It feels great to actually see someone laugh.
If you work from a co-working space, that space likely has events. My co-working space hosts social hours, holiday parties, and sometimes even lunch-and-learns. These events are great opportunities for making new friends and forging professional connections outside of work.
If you don’t have access to a co-working space, your town or city likely has meetups. Create a Meetup.com account and search for something that piques your interest. If you’ve been stuck inside your house for days, heads-down on a hard deadline, celebrate by getting out of the house. Get coffee or drinks with friends. See a show. Go to a religious service. Take a cooking class. Try yoga. Find excuses to be around someone other than your cats. When you can’t fall back on your work to provide community, you need to build your own.
These are tips that I’ve found help me, but not everyone works the same way. Remember that it’s okay to experiment — just because you’ve worked one way, doesn’t mean that’s the best way for you. Check in with yourself every now and then. Are you happy with your work environment? Are you feeling lonely, down, or exhausted? Try switching up your routine for a couple weeks and jot down how you feel at the end of each day. Look for patterns. You deserve to have a comfortable and productive work environment!
Hope to see you all online soon 🙌",2018,Mel Choyce,melchoyce,2018-12-09T00:00:00+00:00,https://24ways.org/2018/thriving-as-a-remote-worker/,process
250,Build up Your Leadership Toolbox,"Leadership. It can mean different things to different people and vary widely between companies. Leadership is more than just a job title. You won’t wake up one day and magically be imbued with all you need to do a good job at leading. If we don’t have a shared understanding of what a Good Leader looks like, how can we work on ourselves towards becoming one? How do you know if you even could be a leader? Can you be a leader without the title?
What even is it?
I got very frustrated way back in my days as a senior developer when I was given “advice” about my leadership style; at the time, I didn’t have the words to describe the styles and ways in which I was leading, so I couldn’t push back. I heard these phrases a lot:
you need to step up
you need to take charge
you need to grab the bull by its horns
you need to have thicker skin
you need to just be more confident in your leading
you need to just make it happen
I appreciate some people’s intent was to help me, but honestly it did my head in. WAT?! What did any of this even mean? How exactly do you “step up”, and how are you evaluating what step I’m on? I am confident; how does being even more confident help with leading? Does that not lead you down the path of becoming an arrogant door knob? >___<
While there is no One True Way to Lead, there is an overwhelming pattern of positions of leadership within the tech industry being held by men. It felt a lot like what people were fundamentally telling me to do was to be more like an extroverted man. I was being asked to demonstrate more masculine-associated qualities (#notallmen). I’ll leave the gendered nature of leadership qualities as an exercise in googling for the reader.
I’ve never had a good manager and at the time had no one else to ask for help, so I turned to my trusted best friends. Books.
I <3 books
I refused to buy into that style of leadership as being the only accepted way to be. There had to be room for different kinds of people to be leaders and have different leadership styles.
There are three books that changed me forever in how I approach and think about leadership.
Primal leadership, by Daniel Goleman, Richard Boyatzis and Annie McKee
Quiet, by Susan Cain
Daring Greatly: How the Courage to Be Vulnerable Transforms the Way We Live, Love, Parent and Lead, by Brené Brown
I recommend you read them. Ignore the slightly cheesy titles and trust me, just read them.
Primal Leadership gave me the vocabulary and understanding I needed about the different styles of leadership, and how and when to apply each of them.
Quiet really helped me realise how much I was being undervalued and misunderstood in an extroverted world. If I’d had managers or support from someone who valued introverts’ strengths, things would’ve been very different. I would’ve had someone telling others to step down and shut up for a change rather than pushing on me to step up and talk louder over everyone else. It’s OK to be different and needing different things like time to recharge or time to think before speaking. It also improved my ability to work alongside my more extroverted colleagues by giving me an understanding of their world so I could communicate my needs in a language they would get.
Brené Brown’s book I am forever in debt to. Her work gave me the courage to stand up and be my own kind of leader. Even when no-one around me looked or sounded like me, I found my own voice.
It takes great courage to be vulnerable and open about what you can and can’t do. Open about your mistakes. To vocalise what you don’t know, and to ask for help. In some lights, these are seen as weaknesses, and many have tried to use them against me, to pull me down and exclude me for talking about them. Dear reader, it did not work; they failed. The truth is, they are my greatest strengths. The privileges I have, I use for good as best and as often as I can.
Just like gender, leadership is not binary
If you google for what a leader is, you’ll get many different answers. I personally think Brené’s version is the best as it is one that can apply to a wider range of people, irrespective of job title or function.
I define a leader as anyone who takes responsibility for finding potential in people and processes, and who has the courage to develop that potential.
Brené Brown
Being a leader isn’t about being the loudest in the room, having veto power, talking over people or ignoring everyone else’s ideas. It’s not about “telling people what to do”. It’s not about an elevated status that makes you better than others. Nor is it about creating a hand-wavy, far-away vision and forgetting to support people in how to get there.
Being a Good Leader is about having a toolbox of leadership styles and skills to choose from depending on the situation. Knowing how and when to apply them is part of the challenge and difficulty in becoming good at it. It is something you will have to continuously work on, forever. There is no Done.
Leaders are Made, they are not Born.
Be flexible in your leadership style
Typically, the best, most effective leaders act according to one or more of six distinct approaches to leadership and skillfully switch between the various styles depending on the situation.
The book Primal Leadership summarises six distinct leadership styles:
Visionary
Coaching
Affiliative
Democratic
Pacesetting
Commanding
Visionary, moves people toward a shared dream or future. When change requires a new vision or a clear direction is needed, using a visionary style of leadership helps communicate that picture. By learning how to effectively communicate a story you can help people to move in that direction and give them clarity on why they’re doing what they’re doing.
Coaching, is about connecting what a person wants and helping to align that with organisation’s goals. It’s a balance of helping someone improve their performance to fulfil their role and their potential beyond.
Affiliative, creates harmony by connecting people to each other and requires effective communication to aid facilitation of those connections. This style can be very impactful in healing rifts in a team or to help strengthen connections within and across teams. During stressful times having a positive and supportive connection to those around us really helps see us through those times.
Democratic, values people’s input and gets commitment through participation. Taking this approach can help build buy-in or consensus and is a great way to get valuable input from people. The tricky part about this style, I find, is that when I gather and listen to everyone’s input, that doesn’t mean the end result is that I have to please everyone.
The next two, sadly, are the ones wielded far too often and have the greatest negative impact. It’s where the “telling people what to do” comes from. When used sparingly and in the right situations, they can be a force for good. However, they must not be your default style.
Pacesetting, when used well, is about meeting challenging and exciting goals. When you need to get high-quality results from a motivated and well-performing team, it can be great for achieving real focus and drive. Sadly, it is so often overused and poorly executed that it becomes the “just make it happen” style, driving the unrealistic workloads that contribute to burnout.
Commanding, when used appropriately soothes fears by giving clear direction in an emergency or crisis. When shit is on fire, you want to know that your leadership ability can help kick-start a turnaround and bring clarity. Then switch to another style. This approach is also required when dealing with problematic employees or unacceptable behaviour.
Commanding style seems to be what a lot of people think being a leader is, taking control and commanding a situation. It should be used sparingly and only when absolutely necessary.
Be responsible for the power you wield
If reading through those you find yourself feeling a bit guilty that maybe you overuse some of the styles, or overwhelmed that you haven’t got all of these down and ready to use in your toolbox…
Take a breath. Take responsibility. Take action.
No one is perfect, and it’s OK. You can start right now working on those. You can have a conversation with your team and try being open about how you’re going to try some different styles. You can be vulnerable and own up to mistakes you might’ve made followed with an apology. You can order those books and read them. Those books will give you more examples on those leadership styles and help you to find your own voice.
The impact you can have on the lives of those around you when you’re a leader, is huge. You can help be that positive impact, help discover and develop potential in someone.
Time spent understanding people is never wasted.
Cate Huston.
I believe in you. <3 Mazz.",2018,Mazz Mosley,mazzmosley,2018-12-10T00:00:00+00:00,https://24ways.org/2018/build-up-your-leadership-toolbox/,business
259,Designing Your Future,"I’ve had the pleasure of working for a variety of clients – both large and small – over the last 25 years. In addition to my work as a design consultant, I’ve worked as an educator, leading the Interaction Design team at Belfast School of Art, for the last 15 years.
In July 2018 – frustrated with formal education, not least the ever-present hand of ‘austerity’ that has ravaged universities in the UK for almost a decade – I formally reduced my teaching commitment, moving from a full-time role to a half-time role.
Making the move from a (healthy!) monthly salary towards a position as a freelance consultant is not without its challenges: one month your salary’s arriving in your bank account (and promptly disappearing to pay all of your bills); the next month, that salary’s been drastically reduced. That can be a shock to the system.
In this article, I’ll explore the challenges encountered when taking a life-changing leap of faith. To help you confront ‘the fear’ – the nervousness, the sleepless nights and the ever-present worry about paying the bills – I’ll provide a set of tools that will enable you to take a leap of faith and pursue what deep down drives you.
In short: I’ll bare my soul and share everything I’m currently working on to – once and for all – make a final bid for freedom.
This isn’t easy. I’m sharing my innermost hopes and aspirations, and I might open myself up to ridicule, but I believe that by doing so, I might help others, by providing them with tools to help them make their own leap of faith.
The power of visualisation
As designers we have skills that we use day in, day out to imagine future possibilities, which we then give form. In our day-to-day work, we use those abilities to design products and services, but I also believe we can use those skills to design something every bit as important: ourselves.
In this article I’ll explore three tools that you can use to design your future:
Product DNA
Artefacts From the Future
Tomorrow Clients
Each of these tools is designed to help you visualise your future. By giving that future form, and providing a concrete goal to aim for, you put the pieces in place to make that future a reality.
Brian Eno – the noted musician, producer and thinker – states, “Humans are capable of a unique trick: creating realities by first imagining them, by experiencing them in their minds.” Eno helpfully provides a powerful example:
When Martin Luther King said, “I have a dream,” he was inviting others to dream that dream with him. Once a dream becomes shared in that way, current reality gets measured against it and then modified towards it.
The dream becomes an invisible force which pulls us forward. By this process it starts to come true. The act of imagining something makes it real.
When you imagine your future – designing an alternate, imagined reality in your mind – you begin the process of making that future real.
Product DNA
The first tool, which I use regularly – for myself and for client work – is a tool called Product DNA. The intention of this tool is to identify beacons from which you can learn, helping you to visualise your future.
We all have heroes – individuals or organisations – that we look up to. Ask yourself, “Who are your heroes?” If you had to pick three, who would they be and what could you learn from them? (You probably have more than three, but distilling down to three is an exercise in itself.)
Earlier this year, when I was putting the pieces in place for a change in career direction, I started with my heroes. I chose three individuals that inspired me:
Alan Moore: the author of ‘Do Design: Why Beauty is Key to Everything’;
Mark Shayler: the founder of Ape, a strategic consultancy; and
Seth Godin: a writer and educator I’ve admired and followed for many years.
Looking at each of these individuals, I ‘borrowed’ a little DNA from each of them. That DNA helped me to paint a picture of the kind of work I wanted to do and the direction I wanted to travel.
Moore’s book - ‘Do Design’ – had a powerful influence on me, but the primary inspiration I drew from him was the sense of gravitas he conveyed in his work. Moore’s mission is an important one and he conveys that with an appropriate weight of expression.
Shayler’s work appealed to me for its focus on equipping big businesses with a startup mindset. As he puts it: “I believe that you can do the things that you do better.” That sense – of helping others to be their best selves – appealed to me.
Finally, the words Godin uses to describe himself – “An Author, Entrepreneur and Most of All, a Teacher” – resonated with me. The way he positions himself, as, “most of all, a teacher,” gave me the belief I needed that I could work as an educator, but beyond the ivory tower of academia.
I’ve been exploring each of these individuals in depth, learning from them and applying what I learn to my practice. They don’t all know it, but they are all ‘mentors from afar’.
In a moment of serendipity – and largely, I believe, because I’d used this tool to explore his work – I was recently invited by Alan Moore to help him develop a leadership programme built around his book.
The key lesson here is that not only has this exercise helped me to design my future and give it tangible form, it’s also led to a fantastic opportunity to work with Alan Moore, a thinker who I respect greatly.
Artefacts From the Future
The second tool, which I also use regularly, is a tool called ‘Artefacts From the Future’. These artefacts – especially when designed as ‘finished’ pieces – are useful for creating provocations to help you see the future more clearly.
‘Artefacts From the Future’ can take many forms: they might be imagined magazine articles, news items, or other manifestations of success. By imagining these end points and giving them form, you clarify your goals, establishing something concrete to aim for.
Earlier this year I revisited this tool to create a provocation for myself. I’d just finished Alla Kholmatova’s excellent book on ‘Design Systems’, which I would recommend highly. The book wasn’t just filled with valuable insights, it was also beautifully designed.
Once I’d finished reading Kholmatova’s book, I started thinking: “Perhaps it’s time for me to write a new book?” Using the magic of ‘Inspect Element’, I created a fictitious page for a new book I wanted to write: ‘Designing Delightful Experiences’.
I wrote a description for the book, considering how I’d pitch it.
This imagined page was just what I needed to paint a picture in my mind of a possible new book. I contacted the team at Smashing Magazine and pitched the idea to them. I’m happy to say that I’m now working on that book, which is due to be published in 2019.
Without this fictional promotional page from the future, the book would have remained as an idea – loosely defined – rolling around my mind. By spending some time, turning that idea into something ‘real’, I had everything I needed to tell the story of the book, sharing it with the publishing team at Smashing Magazine.
Of course, they could have politely informed me that they weren’t interested, but I’d have lost nothing – truly – in the process.
As designers, creating these imaginary ‘Artefacts From the Future’ is firmly within our grasp. All we need to do is let go a little and allow our imaginations to wander.
In my experience, working with clients and – to a lesser extent – students, it’s the ‘letting go’ that’s the hard part. It can be difficult to let down your guard and share a weighty goal, but I’d encourage you to do so. At the end of the day, you have nothing to lose.
The key lesson here is that your ‘Artefacts From the Future’ will focus your mind. They’ll transform your unformed ideas into ‘tangible evidence’ of future possibilities, which you can use as discussion points and provocations, helping you to shape your future reality.
Tomorrow Clients
The third tool, which I developed more recently, is a tool called ‘Tomorrow Clients’. This tool is designed to help you identify a list of clients that you aspire to work with.
The goal is to pinpoint who you would like to work with – in an ideal world – and define how you’d position yourself to win them over. Again, this involves ‘letting go’ and allowing your mind to imagine the possibilities, asking, “What if…?”
Before I embarked upon the design of my new website, I put together a ‘soul searching’ document that acted as a focal point for my thinking. I contacted a number of designers for a second opinion to see if my thinking was sound.
One of my graduates – Chris Armstrong, the founder of Niice – replied with the following: “Might it be useful to consider five to ten companies you’d love to work for, and consider how you’d pitch yourself to them?”
This was just the provocation I needed. To add a little focus, I reduced the list to three, asking: “Who would my top three clients be?”
By distilling the list down I focused on who I’d like to work for and how I’d position myself to entice them to work with me. My list included: IDEO, Adobe and IBM. All are companies I admire and I believed each would be interesting to work for.
This exercise might – on the surface – appear a little like indulging in fantasy, but I believe it helps you to clarify exactly what it is you are good at and, just as importantly, to put that into words.
For each company, I wrote a short pitch outlining why I admired them and what I thought I could add to their already existing skillset.
Focusing first on Adobe, I suggested establishing an emphasis on educational resources, designed to help those using Adobe’s creative tools to get the most out of them.
A few weeks ago, I signed a contract with the team working on Adobe XD to create a series of ‘capsule courses’, focused on UX design. The first of these courses – exploring UI design – will be out in 2019.
I believe that Armstrong’s provocation – asking me to shift my focus from clients I have worked for in the past to clients I aspire to work for in the future – made all the difference.
The key lesson here is that this exercise encouraged me to raise the bar and look to the future, not the past. In short, it enabled me to proactively design my future.
In closing…
I hope these three tools will prove a welcome addition to your toolset. I use them when working with clients; I also use them when working on myself.
I passionately believe that you can design your future. I also firmly believe that you’re more likely to make that future a reality if you put some thought into defining what it looks like.
As I say to my students and the clients I work with: it’s not enough to want to be a success; the word ‘success’ is too vague to be a destination. A far better approach is to define exactly what success looks like.
The secret is to visualise your future in as much detail as possible. With that future vision in hand as a map, you give yourself something tangible to translate into a reality.",2018,Christopher Murphy,christophermurphy,2018-12-15T00:00:00+00:00,https://24ways.org/2018/designing-your-future/,process
258,Mistletoe Offline,"It’s that time of year, when we gather together as families to celebrate the life of the greatest person in history. This man walked the Earth long before us, but he left behind words of wisdom. Those words can guide us every single day, but they are at the forefront of our minds during this special season.
I am, of course, talking about Murphy, and the golden rule he gave unto us:
Anything that can go wrong will go wrong.
So true! I mean, that’s why we make sure we’ve got nice 404 pages. It’s not that we want people to ever get served a File Not Found message, but we acknowledge that, despite our best efforts, it’s bound to happen sometime. Murphy’s Law, innit?
But there are some Murphyesque situations where even your lovingly crafted 404 page won’t help. What if your web server is down? What if someone is trying to reach your site but they lose their internet connection? These are all things that can—and will—go wrong.
I guess there’s nothing we can do about those particular situations, right?
Wrong!
A service worker is a Murphy-battling technology that you can inject into a visitor’s device from your website. Once it’s installed, it can intercept any requests made to your domain. If anything goes wrong with a request—as is inevitable—you can provide instructions for the browser. That’s your opportunity to turn those server outage frowns upside down. Take those network connection lemons and make network connection lemonade.
If you’ve got a custom 404 page, why not make a custom offline page too?
Get your server in order
Step one is to make …actually, wait. There’s a step before that. Step zero. Get your site running on HTTPS, if it isn’t already. You won’t be able to use a service worker unless everything’s being served over HTTPS, which makes sense when you consider the awesome power that a service worker wields.
If you’re developing locally, service workers will work fine for localhost, even without HTTPS. But for a live site, HTTPS is a must.
Make an offline page
Alright, assuming your site is being served over HTTPS, then step one is to create an offline page. Make it as serious or as quirky as is appropriate for your particular brand. If the website is for a restaurant, maybe you could put the telephone number and address of the restaurant on the custom offline page (unsolicited advice: you could also put this on the home page, you know). Here’s an example of the custom offline page for this year’s Ampersand conference.
When you’re done, publish the offline page at a suitably imaginative URL like, say, /offline.html.
Pre-cache your offline page
Now create a JavaScript file called serviceworker.js. This is the script that the browser will look to when certain events are triggered. The first event to handle is what to do when the service worker is installed on the user’s device. When that happens, an event called install is fired. You can listen out for this event using addEventListener:
addEventListener('install', installEvent => {
// put your instructions here.
}); // end addEventListener
In this case, you want to make sure that your lovingly crafted custom offline page is put into a nice safe cache. You can use the Cache API to do this. You get to create as many caches as you like, and you can call them whatever you want. Here, I’m going to call the cache Johnny just so I can refer to it as JohnnyCache in the code:
addEventListener('install', installEvent => {
installEvent.waitUntil(
caches.open('Johnny')
.then( JohnnyCache => {
JohnnyCache.addAll([
'/offline.html'
]); // end addAll
}) // end open.then
); // end waitUntil
}); // end addEventListener
I’m betting that your lovely offline page is linking to a CSS file, maybe an image or two, and perhaps some JavaScript. You can cache all of those at this point:
addEventListener('install', installEvent => {
installEvent.waitUntil(
caches.open('Johnny')
.then( JohnnyCache => {
JohnnyCache.addAll([
'/offline.html',
'/path/to/stylesheet.css',
'/path/to/javascript.js',
'/path/to/image.jpg'
]); // end addAll
}) // end open.then
); // end waitUntil
}); // end addEventListener
Make sure that the URLs are correct. If just one of the URLs in the list fails to resolve, none of the items in the list will be cached.
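If you’d rather one bad URL didn’t sink the whole batch, here’s one way you could cache each file individually and shrug off any failures — a sketch, not the only way to do it:
addEventListener('install', installEvent => {
installEvent.waitUntil(
caches.open('Johnny')
.then( JohnnyCache => {
// Cache each file on its own, so one bad URL
// doesn't stop the others from being cached
return Promise.all(
[
'/offline.html',
'/path/to/stylesheet.css'
].map( url => {
return JohnnyCache.add(url)
.catch( addError => {
// This file just won't be cached
console.warn('Could not cache ' + url);
}); // end add.catch
}) // end map
); // end Promise.all
}) // end open.then
); // end waitUntil
}); // end addEventListener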
Intercept requests
The next event you want to listen for is the fetch event. This is probably the most powerful—and, let’s be honest, the creepiest—feature of a service worker. Once it has been installed, the service worker lurks on the user’s device, waiting for any requests made to your site. Every time the user requests a web page from your site, a fetch event will fire. Every time that page requests a style sheet or an image, a fetch event will fire. You can provide instructions for what should happen each time:
addEventListener('fetch', fetchEvent => {
// What happens next is up to you!
}); // end addEventListener
Let’s write a fairly conservative script with the following logic:
Whenever a file is requested,
First, try to fetch it from the network,
But if that doesn’t work, try to find it in the cache,
But if that doesn’t work, and it’s a request for a web page, show the custom offline page instead.
Here’s how that translates into JavaScript:
// Whenever a file is requested
addEventListener('fetch', fetchEvent => {
const request = fetchEvent.request;
fetchEvent.respondWith(
// First, try to fetch it from the network
fetch(request)
.then( responseFromFetch => {
return responseFromFetch;
}) // end fetch.then
// But if that doesn't work
.catch( fetchError => {
// try to find it in the cache
// (returning this promise hands its result back to respondWith)
return caches.match(request)
.then( responseFromCache => {
if (responseFromCache) {
return responseFromCache;
// But if that doesn't work
} else {
// and it's a request for a web page
if (request.headers.get('Accept').includes('text/html')) {
// show the custom offline page instead
return caches.match('/offline.html');
} // end if
} // end if/else
}) // end match.then
}) // end fetch.catch
); // end respondWith
}); // end addEventListener
I am fully aware that I may have done some owl-drawing there. If you need a more detailed breakdown of what’s happening at each point in the code, I’ve written a whole book for you. It’s the perfect present for Murphymas.
Hook up your service worker script
You can publish your service worker script at /serviceworker.js but you still need to tell the browser where to look for it. You can do that using JavaScript. Put this in an existing JavaScript file that you’re calling in to every page on your site, or add this in a script element at the end of every page’s HTML:
if (navigator.serviceWorker) {
navigator.serviceWorker.register('/serviceworker.js');
}
That tells the browser to start installing the service worker, but not without first checking that the browser understands what a service worker is. When it comes to JavaScript, feature detection is your friend.
You might already have some JavaScript files in a folder like /assets/js/ and you might be tempted to put your service worker script in there too. Don’t do that. If you do, the service worker will only be able to handle requests for files within /assets/js/. By putting the service worker script in the root directory, you’re making sure that every request can be intercepted.
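If you want to see that scoping made explicit, the register method also accepts an options object with a scope property. A sketch — the value here just spells out the default for a root-level script:
if (navigator.serviceWorker) {
navigator.serviceWorker.register('/serviceworker.js', {
scope: '/'
}); // end register
} // end if
A script served from the root gets a scope of '/' by default anyway. Widening the scope beyond the script’s own directory — say, registering a script that lives in /assets/js/ with a scope of '/' — would also require the server to send a Service-Worker-Allowed header, which is one more reason to keep the script at the root.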
Go further!
Nicely done! You’ve made sure that if—no, when—a visitor can’t reach your website, they’ll get your hand-tailored offline page. You have temporarily defeated the forces of chaos! You have briefly fought the tide of entropy! You have made a small but ultimately futile gesture against the inevitable heat-death of the universe!
This is just the beginning. You can do more with service workers.
What if, every time you fetched a page from the network, you stored a copy of that page in a cache? Then if that person tries to reach that page later, but they’re offline, you could show them the cached version.
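Sketched out, that might look something like this — responses are streams, so you need to clone a copy before putting it in the cache (consider this illustrative rather than definitive):
addEventListener('fetch', fetchEvent => {
const request = fetchEvent.request;
fetchEvent.respondWith(
fetch(request)
.then( responseFromFetch => {
// Stash a copy of this response in the cache
const copy = responseFromFetch.clone();
fetchEvent.waitUntil(
caches.open('Johnny')
.then( JohnnyCache => {
return JohnnyCache.put(request, copy);
}) // end open.then
); // end waitUntil
return responseFromFetch;
}) // end fetch.then
.catch( fetchError => {
// Offline? Try the cache, falling back to the offline page
return caches.match(request)
.then( responseFromCache => {
return responseFromCache || caches.match('/offline.html');
}); // end match.then
}) // end fetch.catch
); // end respondWith
}); // end addEventListener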
Or, what if, instead of reaching out to the network first, you checked to see if a file is in the cache? You could serve up that cached version—which would be blazingly fast—and still fetch a fresh version from the network in the background to pop in the cache for next time. That might be a good strategy for images.
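Here’s one possible sketch of that cache-first idea, scoped to images by checking the Accept header as before (again, an illustration rather than the one true way):
addEventListener('fetch', fetchEvent => {
const request = fetchEvent.request;
if (request.headers.get('Accept').includes('image')) {
fetchEvent.respondWith(
caches.match(request)
.then( responseFromCache => {
// Fetch a fresh copy in the background…
const refresh = fetch(request)
.then( responseFromFetch => {
// …and pop a clone of it in the cache for next time
const copy = responseFromFetch.clone();
fetchEvent.waitUntil(
caches.open('Johnny')
.then( JohnnyCache => {
return JohnnyCache.put(request, copy);
}) // end open.then
); // end waitUntil
return responseFromFetch;
}); // end fetch.then
// Serve the cached version if there is one;
// otherwise wait for the network
return responseFromCache || refresh;
}) // end match.then
); // end respondWith
} // end if
}); // end addEventListener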
So many options! The hard part isn’t writing the code, it’s figuring out the steps you want to take. Once you’ve got those steps written out, then it’s a matter of translating them into JavaScript.
Inevitably there will be some obstacles along the way—usually it’s a misplaced curly brace or a missing parenthesis. Don’t be too hard on yourself if your code doesn’t work at first. That’s just Murphy’s Law in action.",2018,Jeremy Keith,jeremykeith,2018-12-04T00:00:00+00:00,https://24ways.org/2018/mistletoe-offline/,code
246,Designing Your Site Like It’s 1998,"It’s 20 years to the day since my wife and I started Stuff & Nonsense, our little studio and my outlet for creative ideas on the web. To celebrate this anniversary—and my fourteenth contribution to 24 ways— I’d like to explain how I would’ve developed a design for Planes, Trains and Automobiles, one of my favourite Christmas films.
My design for Planes, Trains and Automobiles is fixed at 800px wide.
Developing a